Discussion on: The problem with “you guys”

 
anabella • Edited

Ain't it funny how (some) people get all dystopian when they're asked to make even a tiny effort to narrow their privilege gap?

There's no dystopian dictatorship coming to get you for not being nice to people. It's just "try to be nice to people". In this case: some people don't feel included in "you guys". Even if it were a phrase originally meant to include every member of a potentially heterogeneous group, maybe try using a word that includes them. It's that easy.

Weird how there aren't many phrases like that where a mostly-female term is supposed to be accepted as a wildcard for "all people".

edA‑qa mort‑ora‑y • Edited

The fear of abuse should not be ignored. It's exactly these types of automated tools that prevent open discussion of sexuality, including sexual health, on many public forums. Furthermore, automated tools are already being used on sites such as YouTube to block content on questionable copyright grounds.

The voice that gets hurt the most by automated filtering is the minority voice. If you open the door, even a bit, to moral filtering, it's the incumbent dogma that will become normalized. Dissenting voices will simply be drowned out.

My argument against filtering has nothing to do with what is being filtered. I'll make the same argument for any kind of automated filtering and classification.

Knut Melvær

I think @anabella has a good point though. Those who object to this bot (and don't show any sign of having actually read my post) escalate it into being about moral monitoring, censorship, people being “offended”, or whatnot. Those are important, challenging, and interesting points in and of themselves, but what worries me is that they also reframe the discussion and offer little acknowledgment of the experiences of those who felt the need to make this bot in the first place.

And @mortoray, the bot isn't actually censoring anyone. It only reveals itself to the user in question: it presents a suggestion, along with a way to learn more about why it made one. It's up to you to make the call, to protest it, or to ask a moderator to either remove the bot or whitelist you. It only acts in channels it's invited to, and its source code is out in the open.

Is there really no distinction between that and the opaque processes and technological decisions that go into something like YouTube or Facebook? Can't it be a way for a community to self-monitor according to the rules it has agreed on for itself, in order to foster a productive conversation?
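
For anyone more curious about the mechanics than the politics, this is roughly all such a bot does. Here's a minimal sketch (Python with the slack_bolt framework, not the bot's actual source; the trigger pattern, SUGGESTION text, and guidelines pointer are placeholders) of a responder that only ever posts an ephemeral note back to the person who wrote the message:

```python
# A minimal sketch of an ephemeral-only responder, assuming the slack_bolt
# Python framework. This is NOT the bot's actual source; the trigger pattern,
# SUGGESTION text, and guidelines pointer are placeholders.
import os
import re

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

SUGGESTION = (
    'Instead of "you guys", how about "you all", "folks", or "everyone"? '
    "Curious why? See the channel guidelines for the reasoning."
)

# Message listeners only fire in channels the bot has been invited to.
@app.message(re.compile(r"\byou guys\b", re.IGNORECASE))
def suggest_alternative(message, client):
    # chat.postEphemeral shows the note only to the author of the message;
    # nothing is removed, edited, or hidden from the rest of the channel.
    client.chat_postEphemeral(
        channel=message["channel"],
        user=message["user"],
        text=SUGGESTION,
    )

if __name__ == "__main__":
    app.start(port=3000)
```

The whole interaction is a private nudge plus a pointer to the reasoning behind it; whether the bot stays, goes, or whitelists someone remains in the hands of the workspace's moderators.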