It's difficult to design a way of intentionally discriminating between or ranking posts by quality without also creating, as a side effect, unintentional discrimination based on mere popularity, or on whether people happen to agree with the conclusion of a given sentiment in a circle... hug.
Many well-intentioned efforts at building tools for quality control become, instead, tools for enforcing mere agreement, and can devolve into "groupthink" and a "Mutual Admiration Society": a pompous, self-congratulatory discussion in which little to no material progress is made, as in the well-documented cases of social media and the continued decline of the Humanities in the public university system.
Lesswrong.org, overcomingbias.com, thinkspot, g0v, the Heterodox foundation, and OpenBazaar have all done some research into ways of dealing with this, but it's a pernicious problem.
There is a kind of time-proven solution for creating a good-quality feed: the karma system. But the problem with this system is that it tends to create quite unwelcoming communities; Hacker News and Stack Overflow are examples. I wonder if it's possible to create a karma system without a hostile community.
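For context, the ranking heuristic widely attributed to Hacker News illustrates what a karma-driven feed actually optimizes for. This is a sketch using the commonly cited approximation (the gravity exponent and the penalty tweaks in the real implementation may differ); note that it measures recency-weighted popularity, not quality:

```python
def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Commonly cited approximation of Hacker News front-page ranking.

    Raw points are decayed by the post's age, so the resulting score
    reflects recency-weighted engagement rather than quality.
    """
    return (points - 1) / ((age_hours + 2) ** gravity)

# A fresh post with modest votes outranks an older post with more votes:
fresh = hn_rank(points=20, age_hours=1)
stale = hn_rank(points=60, age_hours=12)
```

Nothing in the formula knows anything about the post's content, which is part of why "karma" and "quality" drift apart.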
Stack Overflow is misunderstood. It has clear rules, and if you don't know them, that's a problem. "Why didn't my code work?" isn't an answerable question.
That's completely different from whether an article has "value" or not...
That's their messaging problem, not really the karma system.
Stack Overflow is misunderstood, yes, but I'm saying this as someone who's used it for a pretty long time: some people just get defensive or rude toward people who are simply trying to learn. The "elitist" mindset is very common in a lot of those communities.
For example, if someone has clearly violated the rules, you don't have to be rude; you can just ask them to take a look at the rules and adjust the post accordingly, or flag it for review.
Yes. Some people are just jerks.
I was making a nod to Reddit when I mentioned the circle-hug. Karma systems make echo chambers where anyone who disagrees is presumed to be acting in bad faith.
Let us hope that dev.to does not follow too far down the path of reddit.com, in that regard, at least.
I get what you mean, but don't you think that granting users the configuration options needed to personalize their experience is fundamentally sound? I too think that attempting to discriminate, or adding features that can introduce segregation, will end up being the wrong path.
I think a combination of the community encouraging authors to improve and personalized feeds might make a difference, but as you say, it's not feasible at a large scale to avoid unintentional side effects. Regardless, I feel a big chunk of the problem can be solved through moderation efforts.
If there were a way to really highlight some fantastic articles, those could serve as examples. Maybe within each 'tag' there's a way to identify highly valuable articles without being rude. But "hearts" aren't going to do it, because it seems like no one will ever see 99% of what's posted.
I think this is sort of what I was getting at with the echo chamber effect so commonly attributed to karma or "winning internet points" types of systems.
Rather than rating an idea on the quality of its presentation and the effort behind it, people simply decide whether they agree or disagree with it.
I think this inability to distinguish between someone who's playing the game well and by the rules, and someone who's playing for your team, is similar to what I sometimes read from self-appointed moral crusaders, with gotcha games like "Oh, you disagree with this anti-poverty idea? You must be a fat-cat capitalist," or "I think ______, and anyone who disagrees with me is automatically a secret -ist (or -phobe) acting in bad faith!" or "You used the wrong word!" or "But of course the communists will never agree with this."
Basically, a karma system only works if high-quality, consensus-building ideas are the most likely to be selected for, rather than simply the ideas people already agree with. And quality-control mechanisms that rely on simple up or down votes or "I didn't like this" reports end up getting gamed to suppress ideas the user of that QC tool disagrees with.
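One technique that partly addresses the small-sample side of this problem (reportedly the basis of Reddit's "best" comment sort) is to rank by the lower bound of a confidence interval on the fraction of positive votes, rather than by the raw vote total. It rewards a consistent quality signal over a small burst of agreement, though it does nothing about voters conflating "good" with "I agree." A minimal sketch:

```python
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the fraction of
    positive votes (z = 1.96 corresponds to ~95% confidence).

    Unlike a raw up-minus-down score, this penalizes small samples,
    so 5-of-5 upvotes does not outrank 90-of-100.
    """
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)
    return (centre - spread) / denom

# 90/100 positive carries more evidence of quality than a perfect 5/5:
well_tested = wilson_lower_bound(90, 10)
tiny_sample = wilson_lower_bound(5, 0)
```

The design choice here is statistical humility: a post's rank reflects how confident we can be about its approval rate, not how loud its fans are.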
What little I've studied of Ricardian contracts, Markov chains, Sybil attacks, and all the other obscurely named concepts in the game theory of adversarial systems design doesn't suggest to me that this is a problem that can be resolved without resorting to PEBKAC ("problem exists between keyboard and chair") hypotheses.
But some of those hypotheses have been developed, and there's hope. New research suggests that morality, as practiced today, has little to do with higher-order cognitive processes. That may not sound like good news, but a known biological proclivity is a lot easier to deal with than a cultural issue whose cause is unknown.
Initial Groundbreaking Paper: ncbi.nlm.nih.gov/pmc/articles/PMC2...
Further Work: jordanbpeterson.com/docs/230/2014/...
I don't know what g0v are doing, but it looks like it's working:
brookings.edu/blog/order-from-chao...
And some of the intergenerational conflict and external pressure Taiwan faces dwarfs the issues that loom large in the minds of US citizens.
The University was, once upon a time, a way to winnow out high quality ideas rather than those that are merely vogue or popular.
But, with the research above, there may be a way, a narrow path, for Social Media to suck less at this too.
Companies that don't take this new research seriously will be left behind by their competitors who can actually make Social Media less anti-social.