DEV Community

Discussion on: What is Apple thinking?

Cihat Gündüz • Edited

Isn't it pretty clear? There's a huge difference between detecting child abuse imagery and detecting if someone is a terrorist: the false positive rate.

First, there's no way to detect whether someone is a terrorist just by analyzing imagery. If I'm a soldier or a cosplayer, I might also hold a weapon in my hand and look like a terrorist in a photo. Child abuse imagery is different: who in the world has a valid reason to take or keep such photos on their phone? No one needs to see them, not even journalists who write about the topic. About the only false positive I can imagine is someone sending such a photo to an abused child to blackmail them. But in that case, I believe flagging it is even a good thing, as it can help the child, so it's not really a false positive.
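Just to put that first point into rough numbers (a purely hypothetical sketch, none of these figures reflect any real system): even a very accurate "looks like a terrorist" image classifier would bury reviewers in false positives at photo-library scale, precisely because soldiers, cosplayers and many others would match.

```python
# Hypothetical back-of-the-envelope: why image-based "terrorist detection"
# would drown in false positives. All numbers are made up for illustration.

users = 1_000_000            # assumed number of scanned accounts
photos_per_user = 10_000     # assumed average photo library size

for specificity in (0.99, 0.999):
    false_positive_rate = 1 - specificity
    flagged_innocent_photos = users * photos_per_user * false_positive_rate
    print(f"specificity {specificity:.1%}: "
          f"~{flagged_innocent_photos:,.0f} innocent photos flagged")
```

Even at 99.9% specificity that would still be millions of wrongly flagged soldier and cosplay photos, which is why the false positive rate is the deciding factor here.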

Second, if they had also analyzed the messages people send in order to detect whether someone is a terrorist, chances are high that journalists who write about the topic, as well as minorities such as Muslims, would see a much higher false positive rate, simply because of how the media connects these topics and how our algorithms are trained. If Apple did that, it would be helping to discriminate against Muslims.

So, given that machine learning algorithms can nowadays detect such imagery with >99% accuracy, I can understand why Apple doesn't see a privacy issue here: the benefit for children is high and the risk of abuse is low.

By the way, the "proactive surveillance functionality" you mentioned that the FBI wanted didn't even include any kind of "turn on only if necessary" feature. It was basically a backdoor for the FBI, and the FBI would then decide whom to use it against. So if someone at the FBI didn't like you, you had no defense. The algorithm used here, though, is reviewed by many people, and there's no way to abuse it in that same way against someone just because you don't like them.