Remember a while back I wrote that thing about communication apps and how the most important feature they can have is the Preview button? It's time for me to once again pontificate on them, because here's feature #2. It's a bit less general, since it doesn't apply as much to closed communication apps like an LMS, but it's still probably of interest to readers here, and it builds on top of the ideas in the first one.
WARNING: while I feel extremely justified in the previous article, and almost can't imagine how meaningful communication could exist without the author and the reader understanding each other's usage of the software at some level, I don't think I can make as good a from-first-principles argument for this. You should read my academic and journalistic citations, because I might be misinterpreting them.
Gatekeeping has very different connotations to different people. To someone immersed in its study, it's a constellation of social effects and about as intrinsic to communication as language itself. From gossipy vim junkies sharing news stories in their open workspace, to a pair of parents passing the time with the weather while their kids are at the park, to the spam filter in front of your inbox, to the editor of a newspaper who has a few pages to fill and a million mail-in notes to fill them with, a society with no gatekeeping whatsoever is inconceivable. That's certainly how Karine Barzilai-Nahon sees it, and hers is the sense in which I use the term.
To someone involved in politics of any sort, that probably sounds meaninglessly broad. Gatekeeping usually carries a connotation of oppression, institutional bureaucracy, and force. Since this is an article on computer software, which literally codifies a plan for how information should be filtered, I'm going to be apologizing for gatekeeping above the individual level.
I really don't have to work very hard to do that, however. Pretty much everyone on the internet agrees that we need to filter out "spam," and a good fraction of it wants to filter content that is abusive, hateful, false, or manufactured. And even within the acceptable range of content, people gatekeep in practice with ranking, because there's still too much of it.
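To make "ranking is gatekeeping" concrete, here's a minimal sketch of a time-decay scoring formula, in the spirit of the one Hacker News has publicly described. The constants and names here are illustrative, not any real site's values:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    votes: int
    age_hours: float

def rank_score(post: Post, gravity: float = 1.8) -> float:
    """Votes push a post up; age drags it back down.

    Every constant is an editorial judgement call: raise `gravity`
    and the front page turns over faster; lower it and old favorites
    linger. Gatekeeping, expressed as arithmetic.
    """
    return (post.votes - 1) / ((post.age_hours + 2) ** gravity)

posts = [
    Post("fresh but modest", votes=10, age_hours=1),
    Post("popular but stale", votes=200, age_hours=24),
]
for p in sorted(posts, key=rank_score, reverse=True):
    print(f"{rank_score(p):6.3f}  {p.title}")
```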
And we're getting into the realms of opinion and judgement.
It already starts with the definition of "spam". The traditional definition, Unsolicited Bulk Email (the one anti-spam groups like Spamhaus use; the CAN-SPAM act itself regulates the narrower category of commercial email), is not the one people usually act on when they try to "block spam".
Content filters, like the Bayesian component in SpamAssassin, aren't judging based on that definition. Whether a message is unsolicited or bulk has nothing to do with its content; that's all part of the surrounding context. Other parts of the spam filter, like header validation and IP address reputation, are basically all attempts to prevent the gated mailer from evading bans.
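For contrast, here's roughly what that content-based judgement looks like, as a from-scratch naive Bayes toy rather than SpamAssassin's actual implementation (the training data is obviously a stand-in). Notice that it scores a message purely on the words inside it; "unsolicited" and "bulk" appear nowhere:

```python
import math
from collections import Counter

# Toy corpora; a real filter trains on thousands of user-flagged
# messages, but the math is the same.
spam_docs = ["cheap pills buy now", "buy now limited offer", "winner claim prize now"]
ham_docs = ["meeting notes attached", "lunch tomorrow?", "draft of the blog post attached"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(words, counts):
    total = sum(counts.values())
    # Laplace smoothing, so words we've never seen don't zero everything out.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def spam_probability(message):
    words = message.lower().split()
    log_spam = log_likelihood(words, spam_counts) + math.log(0.5)  # uniform prior
    log_ham = log_likelihood(words, ham_counts) + math.log(0.5)
    return 1 / (1 + math.exp(log_ham - log_spam))

print(spam_probability("buy cheap pills now"))       # high
print(spam_probability("blog post draft attached"))  # low
```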
More than that, the whole purpose of a social network recommendation algorithm, like the one on dev.to's home page, is to present content to large numbers of potentially interested readers. If this post showed up on your home page, then dev.to allowed me to send out a message in bulk without them soliciting me for it. The same is true for the ads in the sidebar. Yet it (probably) won't be caught by any of dev.to's anti-spam measures, and that won't be considered a mistake.
Following Barzilai-Nahon's classification, my relationship to The Practical Dev would also be the same as a spammer's: a "Demanding User", one who has only a loose relationship with the site as an account holder, and who, as a member of DEV's chosen niche, has plenty of other blogging platforms to visit.
For an ad-supported site like dev.to or twitter.com, a good self-interest argument would be that spam filters are intended to drive would-be advertisers towards their for-pay ad services instead of letting them post their ads for free. If I were going to try to define spam for them, it would be communication platform freeloading. Since this think piece isn't applicable to their ad platform, it doesn't count as spam.
That definition is a lot less obvious on volunteer-based sites like Wikipedia and Mastodon. Wikipedia categorically bans advertising, a ban further clarified by its standards for notability and for encyclopedic tone and balance, which together neatly define what it considers spam.
But how does that make sense for a non-ad-funded Mastodon instance? Like other social networks, Mastodon will recommend posts to you based on retoots and such, so UBE is out as a definition. There's also plenty of self-promotion of various kinds, and people singing the biased praises of for-pay stuff that they like, so it's not like they have a blanket ban on promotion. Maybe it still has something to do with "freeloading"? But on an instance where contributing money for the right to advertise isn't an option, isn't using the instance always freeloading? There's duplication of content, but non-spam users repost things sometimes, too.
Can we please have a better definition of spam than just "content I don't like"?
Defining what counts as desirable content is bad enough, but we've been asking our social networks to also arbitrate what's true.
Like preventing spam, this isn't just something that's nice to have; it's something that many users will rightfully consider necessary. Allowing anonymous internet users to lie with impunity has literally cost lives, to say nothing of the more numerous, less severe cases of ruined jobs and reputations. The infrastructure of a real news organization does serve a purpose.
And the present-day solutions, "Codes of Conduct" that list individual cases of things that you are not allowed to say, don't go anywhere near far enough. The Comet Ping Pong shooter was motivated by social media posts that claimed there was sexual abuse happening there. A reasonable social network probably shouldn't categorically ban anyone claiming to have suffered sexual abuse, and I don't think it's possible to figure out whether a claimed instance of abuse actually happened by just reading the contents of the accusing message.
The obvious solution is fact-checking, but that path is rightfully seen as scary and authoritarian. Facebook, for example, conflates impersonating other websites, a class of fake news that doesn't take much judgement to recognize as long as you know what to look for, with "alternative news" outlets, most of which are full of crap but require a much bigger judgement call to rule on in specific instances. Does "Natural News" count? There are reasonable arguments that it should, since crank medicine has taken lives. If so, what about people giving unqualified medical advice when they're not pretending to be a news outlet? That seems like an arbitrary line to draw, especially when journalists are likely to be less trusted than random people off the street (the only groups with more "very low" trust rankings were Members of Congress and Telemarketers; that's worse than Lawyers).
And did I mention that "hire a bunch of people to sort through everything that gets flagged on Facebook" has serious scaling problems, and is a good way to drive someone insane?
What kind of fake news policy could we actually have other than "claims that I don't believe?"
One of the more popular accusations to throw around on Reddit is "vote brigading." It's pretty simple: when a post on your subreddit gets buried, accuse another, semi-related subreddit of secretly organizing to downvote it. It also happens in commercial contexts, with upvote farming.
This is, of course, an unfalsifiable claim. Reddit itself only polices on-site calls to vote a certain way, while users have no way to deal with off-site calls to vote, or with subreddits that link to content they disagree with and downvote it in practice without technically asking anyone to do so.
As with spam, the go-to definitions are in terms of outside incentives, which are impossible to directly determine from the traffic itself. A sounder Gatekeeping Theory would be tied to whatever the groups themselves want to get out of the site.
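To illustrate, here's what a brigade-detection heuristic could look like. This is hypothetical (I have no idea what Reddit actually runs, and every name and threshold below is made up), but it shows the ceiling: all it can flag is a statistical coincidence, like a burst of early downvotes arriving through one off-site link. Whether those voters were organized, or just honestly annoyed, never appears in the data.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Vote:
    user: str
    direction: int    # +1 or -1
    referrer: str     # domain the voter arrived from, "" if unknown
    timestamp: float  # seconds since the post went up

def looks_brigaded(votes: list[Vote], window: float = 600.0,
                   threshold: float = 0.7, min_votes: int = 10) -> bool:
    """Flag a post when most early downvotes arrived via one off-site link.

    Note everything this can't see: voters with no referrer, coordination
    that happened in a chat room, and above all *intent*. A coordinated
    raid and a link that honestly outraged its readers look identical here.
    """
    early = [v for v in votes if v.direction < 0 and v.timestamp < window]
    if len(early) < min_votes:
        return False  # too little data to even guess
    sources = Counter(v.referrer for v in early if v.referrer)
    if not sources:
        return False  # off-site coordination is invisible without referrers
    _top_source, count = sources.most_common(1)[0]
    return count / len(early) >= threshold
```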
Again, can we have a better definition of organic traffic than "everybody else should do the same thing I do"?
Paid moderators, like those at Facebook and YouTube, have pretty much unanimously talked about the mixture of boring and horrifying content in the business. That, and the working conditions: long hours and stringent requirements for behavior. Doing the job of a judge and jury, with the scheduling and pay of a telemarketer, sucks.
Volunteer moderators, who actually choose the role, tend to have more specific things to say about their own point of view on the topic. They also vary quite a bit: some style themselves as tastemakers, while others style themselves as janitors.
Both list their biggest problem as a lack of trust from users. This can probably be chalked up to users feeling that if there were corruption, it would be hard for them to detect. Worse, while people might be more willing to extend trust to someone when they know they can verify it, heavy verification in the form of excessive red tape can actually make people behave in a less trustworthy way.
A social network that doesn't suck needs content that's actually good, not just unobjectionable. Hence connoisseur moderators and intentionally-narrowly-focused forums.
It's a bit of a joke that Usenet was the world's first social network, but it's not much of a stretch: most of the big problems that we attribute to social media were present there. "Spam" was a term invented by Usenet posters, as was "flame war". Creating a social network that doesn't suck requires you to diverge, in a meaningful way, from that model. The world doesn't need another Usenet clone.
And, unfortunately, that means gatekeepers have to make judgement calls, which requires trust. And trust is hard to earn and will be broken by fallible humans sometimes. Social media gatekeepers are left in the impossible position where we expect them to create an environment full of good content without demeaning themselves by judging what counts as good. Intentionally separating gatekeepers from the gated creates an environment where it's impossible to work through the disagreements in a healthy and satisfying way.
The traditional justifications for gatekeeping in social media contexts, such as spam, hate speech, and fake news, are simultaneously too draconian and too lax. Too draconian because banning fake news deprives the community of a chance to negotiate its consensus reality, and too lax because banning fake news doesn't prevent other forms of misinformation that can be just as bad. This is because of a mismatch between working definitions and practice, and because these forms of gatekeeping don't match what users are actually trying to get out of social media. We don't just need a space that is free of provably false content; we need a space where people are encouraged to seek the truth.
The second most important feature of any new social media site should be a different approach to gatekeeping. One where at least some of the gatekeepers are allowed to make judgement calls, and one where they can engage in a dialog with their constituents if it goes wrong, and one where someone can be reprimanded for shitposting independent of whether the low-quality post violates some specific rule. And if that doesn't scale, then it doesn't. I just don't want to spend my free time in a space that feels like the DMV.