DEV Community

Discussion on: How do you think tech can better protect people from harassment? Answer for the chance to appear on the DevDiscuss podcast!

Ryan D. Lewis

I think there are a couple of tech solutions that can help here. Giving users the ability to control who can and can't reply to a post is a good first step, and one Twitter has already implemented. A good extension would be user-customizable content filters. These apply to comment replies (e.g. "replies containing certain language are quarantined and don't trigger a notification"), direct messages (e.g. "quarantine and don't notify me if this message trips the sexual harassment filter"), and feeds (e.g. "don't show me content that contains violent language").

Some variations of these have already been implemented in certain places. But the key elements are that the filters are user-customizable, so different users can set the level of exposure they're comfortable with, and that they're focused on content rather than keywords. Pure pattern-matching blacklists are a good start, but they can't hope to scale to the complexity of ever-evolving human language.
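To make the idea concrete, here's a minimal sketch of per-user filter preferences. Everything here is illustrative: the `classify` stub, its category names, and the keyword lists are placeholder assumptions (a real system would use an ML content classifier, exactly *because* keyword matching doesn't scale):

```python
from dataclasses import dataclass, field
from enum import Enum

class Action(Enum):
    SHOW = "show"
    QUARANTINE = "quarantine"  # hidden and no notification, but viewable on request
    BLOCK = "block"

# Placeholder classifier: keyword matching stands in for a real ML model.
def classify(text: str) -> dict[str, float]:
    categories = {"violence": ["attack", "hurt"], "harassment": ["idiot", "loser"]}
    lowered = text.lower()
    return {
        cat: 1.0 if any(w in lowered for w in words) else 0.0
        for cat, words in categories.items()
    }

@dataclass
class UserFilterPrefs:
    # Per-category action the user has chosen, e.g. {"violence": Action.BLOCK}
    rules: dict[str, Action] = field(default_factory=dict)
    default: Action = Action.SHOW

    def decide(self, text: str, threshold: float = 0.5) -> Action:
        scores = classify(text)
        # Take the most restrictive action among the triggered categories.
        triggered = [self.rules.get(cat, self.default)
                     for cat, score in scores.items() if score >= threshold]
        if Action.BLOCK in triggered:
            return Action.BLOCK
        if Action.QUARANTINE in triggered:
            return Action.QUARANTINE
        return self.default

prefs = UserFilterPrefs(rules={"violence": Action.QUARANTINE,
                               "harassment": Action.BLOCK})
print(prefs.decide("I will attack you"))  # Action.QUARANTINE
print(prefs.decide("you are a loser"))    # Action.BLOCK
print(prefs.decide("have a nice day"))    # Action.SHOW
```

The point of the structure is that the platform supplies the classification and each user supplies the policy, so two people can see the same feed filtered to entirely different comfort levels.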

I think the other important element is quarantine versus removal. Some things should definitely be removed, but many cases are borderline, which invariably means some good content gets taken down and some bad content stays up. Adding an intermediate option, plus a layer of user-customized moderation, might reduce harassment and toxicity without intensifying censorship concerns.
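That intermediate tier can be sketched as a pair of thresholds on a classifier's confidence score (the threshold values and the `toxicity` score are illustrative assumptions, not anyone's production settings):

```python
def moderate(toxicity: float,
             remove_above: float = 0.9,
             quarantine_above: float = 0.5) -> str:
    """Three-tier moderation: clear violations are removed outright,
    borderline content is quarantined (subject to per-user filters or
    human review), and everything else stays visible."""
    if toxicity >= remove_above:
        return "remove"
    if toxicity >= quarantine_above:
        return "quarantine"
    return "show"

print(moderate(0.95))  # remove
print(moderate(0.70))  # quarantine
print(moderate(0.10))  # show
```

The borderline band is where a binary remove/keep decision does the most damage, so routing it to quarantine shifts those judgment calls to the user instead of the platform.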