The original title of this post was “Why Having Survivors on Your Team is Vitally Important,” but over the last few weeks, I’ve realized that’s not enough. That’s not the solution here.
I've noticed lately that many apps and services don't have the security of their users in mind — not just data privacy (which we all know is a myth), but protection from abuse or exploitation, or (at the risk of sounding like a ~~millennial snowflake~~) from being triggered even after taking the suggested steps to avoid further abuse.
Everyone has someone they want to unfollow or block on Twitter, whether it's a troll who spews harassment, a "thought leader" they find insufferable, or a political figure they disagree with. In those cases, it's likely not harmful if that user surfaces in Twitter Moments or somewhere else. In other cases, though, going through all those steps doesn't seem to matter.
In this instance, I feel like I’ve used the tools Twitter has provided me: unsubscribe from Twitter emails, block the user, go private. I’ve done all of these things, yet not only do I still receive the emails I’ve unsubscribed from, this person is still featured prominently in their subject lines and body content.
What other workarounds are there? How can I further protect myself from witnessing behaviors that are so similar to the abuse I went through?
Right now, I can still be tagged in a Tweet along with this person. It happened today. Even if I won’t see his replies to the parent message, there should be some kind of prevention here. (@jack, if you need some user stories or potential features for resolving this, I’m happy to be a consultant.)
> **Anil Dash 🥭**: “If you work at @Twitter, and have ever wondered what would happen if you report sexual harassment at work, consider this photo of your leaders sitting down with a man who jokes about his proclivity for sexually assaulting his colleagues at work. #JoinTheFlock #LoveWhereYouWork”
> — 19:31 · 24 Apr 2019
It's even worse if the user is "verified," as they're often taken "more seriously" or look "more reputable" — and how could anyone successful possibly be an abuser? (Verified users also, allegedly, have access to priority support from Twitter.)
You already have survivors on your team. They probably aren’t flying their trauma-survival flags, nor should they need to. Employees should be trusted enough to not need to lobby or reason every one of their suggestions — and these suggestions can literally be life or death.
For survivors to be taken seriously and have their suggestions heard — often extremely important, safety-related suggestions — you need to listen to them. It’s hard to be vulnerable, speak up, and put yourself in a situation where you feel your safety could be at risk. Even several years on from my trauma, I still find it difficult to speak up and be fully honest about things that happened. I’d rather just block it out, forget it happened, and move on to the next thing.
Social media makes that pretty hard to do, though.
It’s 2019. “Don’t use social media,” something I’ve heard more than once, isn’t an option.
There are security risks in other apps and programs we use in everyday life, too, which can be even more dangerous.
If you have a shared folder containing personal information, like banking or health records, and that relationship goes south, there may be no way to revoke the other person's access to those files. Incredible.
> **helencool**: “that my fucking photos and trauma are heading art basel thx for exploiting us for ‘art’ ANDREA BOWERS @unavailabl”
>
> > “DO YOU KNOW HOW FUCKING INSANE IT IS TO FIND OUT MY BEAT UP FACE AND BODY ARE ON DISPLAY AS ART RN FOR RICH PPL TO GAWK AT THRU A STRANGER’S INSTAGRAM STORY”
>
> — 17:54 · 11 Jun 2019
An Art Basel exhibit showcased images taken from someone's Twitter without their permission — and she only learned about it by seeing the exhibit on Instagram. In this case, the "artist" had his Twitter account deactivated, though surely there's nothing protecting Helen against retaliation if he creates a new account.
I’m obviously not saying you should include a qualifying question like this in your hiring process (because, spoiler alert, you already have this kind of diversity on your staff), or that you should go around asking coworkers about their traumas. Don't prod them to justify why they're asking for a certain behavior, or demand examples of use cases. They aren’t the only ones with that concern, and they're not bringing it up as an edge case. What I am saying is that when an employee raises security concerns about one of your features, listen to them. Believe them.