The DevDiscuss Podcast begins with an interview and ends with commentary from listeners — and we like to feature the actual voices from our community!
To enrich an upcoming episode of the show, we'd like to know...
“How do you think tech can better protect people from harassment?”
For your chance to appear on an upcoming episode, answer the question above by:
Calling our Google Voice at +1 (929) 500-1513 and leaving a message 📞
Sending a voice memo to pod@dev.to 🎙
OR, leaving a comment here (we'll read your response aloud for you) 🗣
Please send in your recordings by TOMORROW, Wednesday, February 10th at Midnight ET (9 PM PT, 5 AM UTC)
Voice recordings will be given priority placement 😉
Don't forget to check out the most recent season of DevDiscuss here
Top comments (11)
I believe a better way to cut down on online harassment and trolling is to make trolling harder to find.
Take Twitter, for example: if a user "trolls" other users often, instead of banning them you leave them in the system but make it so their posts aren't shown to others unless they're sought out. This won't work for large-scale, popular accounts, but it should work for smaller-scale cases over time.
In this sense trolling/harassment becomes more "hidden" to the general public.
No idea how practical such a system would be, but I've heard of similar approaches in other environments.
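To make the idea concrete, here's a minimal sketch of what that deprioritization could look like, assuming a hypothetical per-author `troll_scores` table and a feed ranker that quietly downweights flagged accounts (all names and numbers here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    base_score: float  # relevance from the platform's usual ranking model

# Hypothetical per-user troll scores (0.0 = fine, 1.0 = frequent troll),
# e.g. accumulated from upheld moderation reports.
troll_scores: dict[str, float] = {"alice": 0.0, "bob": 0.9}

def visible_score(post: Post, viewer_follows_author: bool) -> float:
    """Downweight posts from flagged accounts unless the viewer
    explicitly follows the author, i.e. has 'sought them out'."""
    if viewer_follows_author:
        return post.base_score  # sought out: shown as normal
    penalty = 1.0 - troll_scores.get(post.author_id, 0.0)
    return post.base_score * penalty  # quietly ranked toward the bottom
```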
That's just shadowbanning, which is already being done by many platforms.
It sounds like a good feature at first, but it's actually a really bad idea, considering that it's a) easy to deny and b) hard to notice in the first place.
Combine that with how most of these shadowbans usually happen automatically or based on user flagging (both of which are far from perfect), and you have a perfect recipe for tons of people being excluded without reason, without their knowledge, and without any mechanism for complaining.
Yes, I believe the term Twitter uses is "Shadowban".
The key thing is that the user thinks they are successfully trolling, but isn't aware that their posts don't get the audience they used to get. Facebook does something similar: groups with a history of spreading fake news are promoted less often to other users by Facebook's algorithm.
Really interesting. Thanks for sharing! If you're interested in sharing these thoughts as a voice recording, we'd love to feature your voice on this episode! Instructions for sharing a recording are above. It's super quick and simple :)
As DarkWiiPlayer mentioned, many platforms are already doing this.
The root of the problem is being able to easily churn out another Twitter account.
I think there are a couple of tech solutions that can help here. Giving users the ability to control who can and can't reply to a post is a good first step that Twitter has already implemented. A good extension of that would be user-customizable content filters. This goes for comment replies, for instance "replies which contain certain language are quarantined and don't notify"; direct messages, e.g. "quarantine and don't notify me if this message trips the sexual harassment filter"; and feeds, e.g. "don't show me content that contains violent language".
Some variations of these have already been implemented in certain places. But I think the key elements here are that it's user-customizable, so different users can determine the level of exposure they're comfortable with, and that it's focused on content rather than keywords. Pure pattern-matching blacklists are a good start, but can't possibly hope to scale to the complexity of ever-evolving human language.
I think the other important element is quarantine vs removal. There are certain things that should definitely be removed, but a lot of cases are borderline. This means, invariably, that some good stuff gets removed and bad stuff stays up. Adding an intermediate option and a level of user-customized moderation might help reduce harassment and toxicity without intensifying censorship concerns.
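Here's a rough sketch of how that quarantine tier could sit between "show" and "remove", assuming a hypothetical `classify` function that returns per-category scores and user-set thresholds. The keyword check inside `classify` is only a placeholder for a real trained model:

```python
from enum import Enum

class Action(Enum):
    SHOW = "show"
    QUARANTINE = "quarantine"  # hidden behind a click-through, no notification
    REMOVE = "remove"

def classify(text: str) -> dict[str, float]:
    """Placeholder for a real content classifier; a production system
    would call a trained moderation model here, not keyword checks."""
    lowered = text.lower()
    return {
        "violence": 0.8 if "hurt you" in lowered else 0.0,
        "sexual_harassment": 0.0,
    }

def moderate(text: str, prefs: dict[str, float]) -> Action:
    """prefs maps category -> the score at which this particular
    user wants content quarantined rather than shown."""
    scores = classify(text)
    if any(score > 0.95 for score in scores.values()):
        return Action.REMOVE  # clear-cut cases are removed for everyone
    for category, threshold in prefs.items():
        if scores.get(category, 0.0) >= threshold:
            return Action.QUARANTINE  # borderline: hide quietly, don't notify
    return Action.SHOW

# Each user picks their own comfort level:
print(moderate("I'll hurt you", {"violence": 0.7}))  # Action.QUARANTINE
print(moderate("hello there", {"violence": 0.7}))    # Action.SHOW
```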
The biggest problem staring us in the face at the moment is abuse through social media platforms, namely Twitter.
In the UK at the moment, there is an outpouring of abuse, mainly in the form of racism, aimed at soccer players when they've performed poorly. I don't doubt that the issue of racism is still very much a problem, but I don't believe it's gotten worse than it was 20 years ago. It's just become easier to stand out from a crowd with vile abuse.
The anonymity that social media gives you needs to end. This could quite easily be done by making a cell phone number compulsory for social media accounts.
One potentially controversial way might be to reduce harassing comments or messages via natural language processing. When someone writes any kind of harassing language, the platform flags it and requires the user to acknowledge that it could be harmful. If a user keeps sending such messages regardless of the prompt over time, that could be a data point on whether they should be removed from the platform entirely.
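A minimal sketch of that flow, assuming a hypothetical `is_harassing` stand-in for the NLP model and an arbitrary threshold for escalating repeat offenders to human review:

```python
from collections import defaultdict

# How many times each user has confirmed sending a flagged message anyway.
sent_despite_warning: dict[str, int] = defaultdict(int)
REVIEW_THRESHOLD = 5  # arbitrary cutoff for escalating to human review

def is_harassing(text: str) -> bool:
    """Stand-in for a real NLP toxicity model."""
    return "you suck" in text.lower()  # naive placeholder heuristic

def try_send(user_id: str, text: str, confirmed: bool = False) -> str:
    if not is_harassing(text):
        return "sent"
    if not confirmed:
        # Ask the sender to acknowledge the message may be harmful.
        return "prompt_acknowledgement"
    # They sent it anyway: record a data point toward possible removal.
    sent_despite_warning[user_id] += 1
    if sent_despite_warning[user_id] >= REVIEW_THRESHOLD:
        print(f"user {user_id} queued for moderator review")
    return "sent"
```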
Give people an easily accessible word blacklist so they can just opt out of getting any message that contains certain words. I don't know why this isn't already a feature on every platform (extra points for regex support 😉)
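Something like this would already cover a lot of ground. A tiny sketch with Python's standard `re` module (the patterns are just examples):

```python
import re

# User-maintained blacklist: plain words and regex patterns both work.
blacklist = [r"\bspamword\b", r"crypto.?giveaway"]
compiled = [re.compile(p, re.IGNORECASE) for p in blacklist]

def should_hide(message: str) -> bool:
    """True if the message matches any pattern on the user's blacklist."""
    return any(p.search(message) for p in compiled)

assert should_hide("Free CRYPTO GIVEAWAY, click here!")
assert not should_hide("just saying hi")
```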
I hope one day we can have tech as an impartial judge on such matters. Maybe in the form of a "harassment" blocker, hiding potentially offensive content behind a warning. Or maybe we will come up with a way to deal with the underlying issues, instead of dealing with the consequences.
Hi Valeria! We'd love to hear this comment (or a similar one) as a voice recording if you're willing. This way, we can feature your actual voice on the show! If that's of interest, the instructions are above :)