On January 9, 2020, Robert Williams pulled into his driveway in Farmington Hills, Michigan, after a long day at work. Detroit police were waiting. They arrested him in front of his wife and two young daughters, handcuffed him, and drove him to a holding cell. He spent 30 hours in custody before charges were dropped.
The crime he allegedly committed: stealing watches from a Shinola store in 2018. The evidence against him: a still frame pulled from a grainy surveillance video, run through a facial recognition system that returned his name. The problem: the system was wrong. Robert Williams did not steal those watches. A detective had manually confirmed the AI match by comparing the blurry image to his driver's license photo — a process critics describe as circular validation of a flawed result.
Williams became the first publicly reported case of a wrongful arrest caused by facial recognition. He was not the last. He was not even the last that year. And the systems that put him in handcuffs are still running.
How the Machine Sees Your Face
To understand what happened to Robert Williams, you have to understand what a facial recognition system actually does — and what it cannot do.
Modern facial recognition uses convolutional neural networks, a class of AI architecture that processes images by detecting edges, shapes, and spatial relationships across millions of pixels. When a network processes a face, it doesn't "see" a person the way a human does. It converts the geometry of facial features — the distance between pupils, the angle of the jawline, the depth of the brow ridge — into a numerical array called an embedding. A typical modern system compresses a face into a 128-dimensional vector: a string of 128 floating-point numbers that encodes the unique mathematical signature of that face.
That number is your faceprint.
Unlike a password, you cannot change it. Unlike a fingerprint, it can be captured from a distance, without contact, without your knowledge, from any photograph ever taken of you. Once encoded into a database, your faceprint becomes a permanent, searchable artifact that exists whether you consent to its existence or not.
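Matching is then just geometry. Here is a minimal sketch of the 1:1 verification step — the operation behind the detective's "match" — with random vectors standing in for a real encoder's output and an assumed threshold (every vendor tunes its own):

```python
import numpy as np

# Minimal sketch of 1:1 face verification. The vectors are random
# stand-ins for a real encoder's output; the threshold is an assumption.
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings, in [-1, 1]; higher means more alike."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.standard_normal(128)    # stand-in: face from a surveillance still
gallery = rng.standard_normal(128)  # stand-in: face from a license photo

THRESHOLD = 0.6  # assumed; each vendor tunes its own operating point
score = cosine_similarity(probe, gallery)
print(f"similarity={score:.3f} -> {'MATCH' if score >= THRESHOLD else 'no match'}")
```

Everything downstream — the detective's lead, the warrant, the arrest — rests on whether one number clears a threshold.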
The accuracy of the system depends entirely on who trained it. Most commercial facial recognition systems in the 2010s were trained predominantly on datasets of lighter-skinned, male faces — because the internet, at the time, skewed toward photos of lighter-skinned, male faces. The bias baked into the training data propagated directly into the deployed system.
The National Institute of Standards and Technology quantified this in its 2019 Face Recognition Vendor Test, the most comprehensive independent audit of facial recognition accuracy ever published. The FRVT examined 189 algorithms from 99 developers across 18.27 million photos. The findings were stark: false positive rates — the rate at which the system incorrectly matched two different people — were 10 to 100 times higher for Black and Asian faces than for white male faces. For Black women, several systems produced false positives at rates nearly 100 times higher than for white men. The systems didn't fail equally. They failed disproportionately, and predictably, along racial lines.
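The operational meaning of those disparities can be demonstrated with synthetic numbers. The sketch below assumes nothing about any real algorithm; it shows only how a single global threshold yields very different false match rates when impostor scores distribute differently by group:

```python
import numpy as np

# Synthetic demonstration: one threshold, two demographic groups, two
# very different false match rates. The distributions are invented.
def false_match_rate(impostor_scores: np.ndarray, threshold: float) -> float:
    """Fraction of different-person comparisons the system wrongly accepts."""
    return float(np.mean(impostor_scores >= threshold))

rng = np.random.default_rng(1)
scores_a = rng.normal(0.30, 0.10, 100_000)  # impostor scores, group A
scores_b = rng.normal(0.45, 0.10, 100_000)  # group B: scores sit nearer the threshold

t = 0.60
fmr_a, fmr_b = false_match_rate(scores_a, t), false_match_rate(scores_b, t)
print(f"FMR A: {fmr_a:.5f}  FMR B: {fmr_b:.5f}  ratio: {fmr_b / max(fmr_a, 1e-9):.0f}x")
```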
This is the technology that put Robert Williams in handcuffs.
Clearview AI: A Database Built Without Permission
In late 2019, a startup called Clearview AI was operating mostly in the shadows. Its founder, Hoan Ton-That, had spent years building a system with a simple and unprecedented premise: scrape every face from the public internet, index it, and sell search access to law enforcement.
By the time a New York Times investigation exposed the company in January 2020, Clearview had already accumulated more than 3 billion images. That number has since grown to over 50 billion. The sources: Facebook, Instagram, LinkedIn, Venmo, YouTube, Twitter, news sites, personal blogs, real estate listings, and anywhere else a face had ever been posted publicly. No consent was requested. No notice was given. No opt-out existed.
Clearview sold access to more than 3,000 law enforcement agencies, including ICE, the FBI, the Secret Service, and hundreds of local police departments. Detectives could upload a photo of an unknown suspect — or a bystander, or a protestor, or anyone — and receive a grid of potential matches with links to the original web pages where those photos appeared.
The company's investor connections included figures linked to Peter Thiel's network, and it operated for years in a legal gray zone that no regulator had yet moved to close. In the United States, no federal law prohibits scraping public photos. No federal law requires consent before building a biometric database from publicly posted images.
Other countries moved faster. The UK's Information Commissioner's Office fined Clearview £7.5 million in May 2022 and ordered it to delete all data on UK residents. Australia's Privacy Commissioner ordered deletion of all Australian data. Canada's Privacy Commissioner found Clearview had violated federal law. France's CNIL issued a €20 million fine. Italy's Garante: €20 million.
In the United States, Clearview faced its most consequential legal challenge under the Illinois Biometric Information Privacy Act — a lawsuit whose per-violation statutory penalties were large enough to jeopardize the company's existence. The settlement amount was not publicly disclosed, but the ACLU, which brought the case, called it a "landmark." Clearview agreed never to sell access to private individuals or businesses in the US and to limit law enforcement use — but the company continues to operate, continues to grow its database, and continues to sell to government clients.
Your face is in there. Almost certainly.
Three Men, Three Jails, Zero Accountability
The wrongful arrest of Robert Williams had predecessors and successors. Taken together, these cases form a pattern that is not coincidental.
Nijeer Parks was arrested in January 2019 in New Jersey for an incident he had nothing to do with. A man had used a fake ID with Parks' information during a shoplifting incident and subsequent altercation with police. Investigators fed a photo from the incident into a facial recognition system. The system returned Parks' name. Parks was arrested, charged, and spent 10 days in jail before he could produce evidence — hotel records, cash withdrawals, other documentation — that he had not been in the town where the crime occurred. The charges were eventually dropped. Parks filed a lawsuit against Woodbridge Township and the Atlantic County Prosecutor's Office.
Michael Oliver's case came to light in Detroit in 2020, weeks after Robert Williams's story broke. Oliver had been arrested in 2019, accused of grabbing a teacher's phone and throwing it at a car during a road rage incident. Investigators used facial recognition. The system matched Oliver. He was charged with felony assault. He spent nearly a year fighting the charge before investigators reviewed the original video and concluded the person in it did not match Oliver. His case was dismissed.
All three men were Black. All three were identified by facial recognition systems that NIST had already documented as disproportionately inaccurate on Black faces. In none of the three cases did a vendor face legal accountability. In none of the three cases did the law enforcement agency publicly commit to ending its use of the technology.
The Detroit Police Department, to its credit, eventually adopted a policy prohibiting arrest based solely on facial recognition — requiring human investigation to corroborate any match. This was not a legal requirement. It was a departmental policy. It can be changed by the next police chief.
Amazon Ring and the Crowdsourced Surveillance Grid
While Clearview was building a database from scraped photos, Amazon was building something more ambitious: a real-time surveillance network embedded in the architecture of American neighborhoods.
Amazon Ring sells doorbell cameras and home security systems. By 2022, more than 10 million Ring devices were active in the United States. Each device is a camera pointed at a public space — a driveway, a sidewalk, a street — recording continuously whenever motion is detected. The footage is stored on Amazon's cloud servers.
Ring also operates a social media application called Neighbors, which allows residents to share footage and reports of "suspicious activity" in their communities. Civil liberties researchers at the Electronic Frontier Foundation documented the predictable consequence: Black and brown pedestrians, delivery workers, and neighbors were disproportionately flagged and shared as suspicious on the platform.
But the more consequential arrangement was between Ring and law enforcement. By 2022, Amazon had established formal data-sharing partnerships with more than 1,500 police departments. Under these partnerships, police could request Ring footage from residents in a specified geographic area through a dedicated law enforcement portal — without a warrant. Residents could decline, but the opt-out mechanism was not always clearly communicated, and many residents who complied did not fully understand that they were providing footage to police.
The EFF's investigations documented cases where police used Ring's law enforcement portal to request footage from dozens of homes in a single sweep, effectively constructing a comprehensive video record of movement through a neighborhood without obtaining a single warrant. The Fourth Amendment protection against unreasonable searches applies to government action — but Amazon, a private company, was acting as the intermediary, and courts had not yet established clear rules for how warrant requirements applied to this arrangement.
Under sustained public pressure, Amazon narrowed the program in stages: in 2021 it required police requests to be made publicly through the Neighbors app rather than by private email, and in early 2024 it discontinued the request tool entirely, leaving warrants and explicit user consent as the primary paths to footage. But by that point, years of footage had already been shared, and police departments had built workflows around the platform that persist under the revised terms.
The deeper architectural reality goes largely unexamined: Ring cameras — their footage pooled in Amazon's cloud, their devices linked through Amazon's low-bandwidth Sidewalk mesh network — amount to a distributed surveillance infrastructure spanning entire residential neighborhoods. Individual cameras are nodes in a persistent, city-wide visual record. Clearview's faceprint database, applied to footage from that grid, would constitute continuous, real-time identification of every person who walks past a Ring camera. These systems have not been formally integrated — but they are compatible, and the technical barriers to integration are not significant.
Emotion AI and the Hiring Screen
The biometric economy did not stop at law enforcement. It moved into the job market.
HireVue, a Utah-based HR technology company, offered enterprise clients a video interview platform that went beyond recording candidates. Its AI system analyzed facial expressions, vocal tone, word choice, and micro-movements during video interviews, generating scores it claimed predicted future job performance. More than 700 companies used HireVue, including Unilever, Goldman Sachs, and Delta Air Lines. Millions of job applicants were evaluated by a system they typically didn't know existed.
The scientific foundation for this approach — known as emotion recognition AI — is contested to the point of near-rejection in peer-reviewed literature. A comprehensive 2019 review in the APS journal Psychological Science in the Public Interest, authored by Lisa Feldman Barrett and colleagues, examined more than 1,000 studies on the relationship between facial expressions and internal emotional states. Its conclusion: facial movements do not reliably reflect underlying emotions, and there is no scientific basis for inferring someone's feelings, intentions, or personality from their facial expressions. A person who doesn't smile during a job interview is not necessarily less enthusiastic than one who does. They might be nervous. They might be neurodivergent. They might simply not perform emotion visibly in the way the system was trained to reward.
HireVue faced an Illinois Biometric Information Privacy Act class action lawsuit. The case settled in 2021 for approximately $19 million. The company subsequently announced it would discontinue its facial analysis feature — though it continues to offer AI-based interview scoring through other modalities. The EEOC has issued guidance on AI-based hiring tools and their potential to generate disparate impact discrimination, though the regulatory framework remains underdeveloped.
Affectiva, a pioneer in commercial emotion recognition AI, was acquired by Smart Eye in 2021 for $73.5 million. The technology continues to be used in automotive applications and market research. The platform iMotions markets emotion AI tools to enterprise clients for consumer research and education. The science has not improved to match the commercial appetite.
Illinois BIPA: The Law That Actually Has Teeth
In 2008, the Illinois state legislature passed the Biometric Information Privacy Act. It was not a response to facial recognition — the technology was barely deployed at the time. It was a response to the collapse of Pay By Touch, a company that had rolled out fingerprint-scan payment systems in Chicago-area grocery stores and whose biometric database surfaced as a sellable asset in bankruptcy court. The legislature decided that biometric data deserved special protection, because unlike a name or an address, biometrics cannot be changed if they are compromised.
BIPA requires that any entity collecting biometric data from Illinois residents must first obtain written consent, explain how the data will be used, and establish a retention and destruction policy. Violations carry statutory penalties: $1,000 per negligent violation, $5,000 per intentional or reckless violation. Critically, BIPA includes a private right of action — individuals can sue without having to prove they suffered concrete harm beyond the violation itself.
This provision made BIPA the most consequential privacy law in American history for biometric data. The class action cases it enabled have produced settlements that have reshaped industry practices:
Facebook paid $650 million in 2021 to settle a BIPA class action over its photo-tagging feature, which used facial recognition to suggest name tags for faces in uploaded photos. The settlement remains one of the largest privacy settlements in US history. Google paid $100 million to settle a BIPA class action over face grouping in Google Photos. Snapchat settled for $35 million over its augmented reality lenses. TikTok settled for $92 million over facial recognition in its video platform. L'Oreal settled for $40 million over a virtual try-on feature that scanned users' faces.
The pattern is clear: without a private right of action and per-violation statutory damages, companies calculate that non-compliance is cheaper than compliance. BIPA changed that calculation — but only in Illinois. Industry lobbying has successfully killed BIPA-equivalent legislation in 42 other states. Federal biometric privacy legislation has been introduced and died repeatedly. The result is a patchwork in which your face has meaningful legal protection only if you happen to live in Illinois.
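The changed calculation is easy to reproduce. Below is a back-of-envelope sketch with an invented class size — and note that Illinois courts have since held, in Cothron v. White Castle, that claims can accrue per scan rather than per person, which multiplies exposure further:

```python
# Hypothetical BIPA exposure math. The class size is invented; the
# per-violation figures are BIPA's: $1,000 negligent, $5,000 reckless.
class_members = 1_000_000
negligent, reckless = 1_000, 5_000

print(f"Negligent, one violation per person: ${class_members * negligent:,}")
print(f"Reckless, one violation per person:  ${class_members * reckless:,}")
# -> $1,000,000,000 and $5,000,000,000: existential numbers for most firms.
```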
Airports, Borders, and the Consent Fiction
If you have flown through a major US airport in the last three years, there is a reasonable chance your face was scanned by a federal biometric system without your meaningful consent.
The TSA's biometric boarding program — marketed as "touchless," "frictionless," and "faster" — has expanded to more than 30 airports. At participating gates, passengers approach a camera instead of a boarding pass scanner. The system matches their face to a photo pulled from their travel documents. If they match, they board. The TSA describes participation as voluntary: passengers can request manual document verification instead.
The opt-out process is not straightforward. Signage explaining the biometric process varies by airport and by gate. A Government Accountability Office audit of CBP's biometric program, completed in 2023, found that CBP had misrepresented the voluntary nature of its processing to some travelers. Passengers who are already in a boarding line, with a flight departing in 40 minutes, are not in a structural position to exercise a meaningful opt-out.
US Customs and Border Protection has processed all non-US citizens biometrically at borders and ports of entry since 2017. This is authorized under statute as a condition of entry. The FBI's Next Generation Identification database — the successor to its IAFIS fingerprint system — contains more than 150 million face images, including driver's license photos obtained through cooperation agreements with state DMV agencies. You may be in the FBI's facial recognition database not because you were ever arrested or investigated, but because you renewed your driver's license.
A 2021 Government Accountability Office report found that the FBI's face recognition system had been used to generate investigative leads in more than 150,000 searches. The FBI had not audited the accuracy of those leads, had not tracked outcomes, and could not determine whether incorrect matches had resulted in wrongful investigations.
China's Export Model
The most complete vision of a biometrically governed society is not in science fiction. It is in the Xinjiang Uyghur Autonomous Region of China, where the government has deployed a surveillance infrastructure specifically designed to identify and track members of an ethnic minority.
China's social credit system, imprecisely covered in Western media, is in practice a collection of regional and sector-specific scoring systems that use biometric identification to link individuals' behavior to their access to services, transportation, and employment. Continuous camera surveillance, combined with mobile device monitoring and financial transaction tracking, feeds databases that can flag individuals for travel restrictions, loan denials, and public shaming.
Hikvision and Dahua, two Chinese surveillance companies with deep state ties — Hikvision's controlling shareholder is a state-owned enterprise — are the world's largest producers of surveillance cameras. Both were added to the US Commerce Department's Entity List in 2019 — a designation that restricts US companies from selling them components — in part because of their role in Xinjiang surveillance. Despite this, their cameras remain installed in US schools, hospitals, and government buildings, a legacy of years of purchasing before the restrictions.
Both companies exported their systems internationally. Ecuador, Zimbabwe, Pakistan, Bolivia, Venezuela, Ethiopia, and Malaysia have all deployed Hikvision and Dahua infrastructure for city-level surveillance. The export is not merely of hardware: it includes software, training, and the operational model of population-level biometric monitoring.
NVIDIA's AI chips powered significant portions of China's surveillance AI infrastructure until export controls tightened in 2022 and 2023. The controls have not fully severed the supply chain; enforcement remains incomplete. The same GPUs optimized for training large language models are architecturally suited to high-throughput facial recognition inference at the scale of a city.
Pimeyes and the End of Public Anonymity
For most of human history, the practical obscurity of public space offered a kind of structural anonymity. A face seen once in a crowd was unlikely to be identified and unlikely to be connected to a permanent record. Surveillance cameras changed this incrementally. Pimeyes eliminated it almost entirely.
Pimeyes.com is a reverse face search engine. Upload a photo — any photo of any face — and Pimeyes will search the indexed web and return links to other pages where that face appears. The service is public, requires no account for limited searches, and charges a subscription fee for unlimited access. It does not require consent from the subjects of searches. It does not distinguish between journalists conducting legitimate research and stalkers tracking a victim's movements.
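Architecturally, a reverse face search engine is a nearest-neighbor index over face embeddings. The sketch below is not Pimeyes's code — its internals are not public — but any such service reduces to something like this, with the brute-force scan swapped for an approximate-nearest-neighbor index at web scale:

```python
import numpy as np

# Conceptual sketch of a reverse face search index: embeddings in, source
# URLs out. Not any real vendor's implementation.
class FaceIndex:
    def __init__(self) -> None:
        self.vectors: list[np.ndarray] = []  # normalized faceprints
        self.urls: list[str] = []            # where each face was scraped

    def add(self, embedding: np.ndarray, url: str) -> None:
        """Store a scraped face alongside the page it came from."""
        self.vectors.append(embedding / np.linalg.norm(embedding))
        self.urls.append(url)

    def search(self, query: np.ndarray, k: int = 5) -> list[tuple[str, float]]:
        """Return the k most similar stored faces and their source URLs."""
        q = query / np.linalg.norm(query)
        sims = np.stack(self.vectors) @ q          # cosine similarity to all
        top = np.argsort(sims)[::-1][:k]
        return [(self.urls[i], float(sims[i])) for i in top]

# Usage: index faces scraped from the web, then query with any photo.
# idx = FaceIndex(); idx.add(embedding, "https://example.com/team-page")
# idx.search(embedding_of_uploaded_photo) -> ranked list of source pages
```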
Researchers and journalists have used Pimeyes to identify people photographed at public protests within minutes. Privacy advocates have demonstrated that a photo taken surreptitiously on public transit can be used to find someone's LinkedIn profile, their home address if they've been photographed near it, and their family members. Domestic violence survivors who have carefully managed their online presence find that a single photo posted by a third party — a newspaper, a school, a community organization — is sufficient to locate them.
The platform's terms of service prohibit stalking and harassment. The architecture of the platform does not impede either. Competitors including FaceCheck.ID operate on similar models. FindFace, a Russian platform, was used to identify women in adult videos without their consent and to locate them in ordinary social media profiles. The convergence of reverse face search with AI-generated imagery has created a new class of nonconsensual identification: a person photographed once, anywhere, is now potentially findable everywhere.
The Real-Time Convergence
The technologies described in this article are not isolated products. They are components of an infrastructure that, in certain configurations, approaches continuous identification of every person in a surveilled space.
The London Metropolitan Police deployed real-time facial recognition cameras on city streets beginning in 2019. Officers monitor a live feed of flagged matches as pedestrians pass camera positions. The system searches faces against a watchlist in real time, generating alerts within seconds. Civil liberties groups have challenged UK deployments in court: in Bridges v South Wales Police, a Liberty-backed case, the Court of Appeal ruled in 2020 that South Wales Police's trial deployments had been unlawful because the governing policies left too much discretion to officers, but it did not prohibit the technology categorically. Deployments, including the Met's, have continued under revised policies.
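The live-deployment loop is a thin layer over the same matching primitive. A hedged sketch — the watchlist, threshold, and per-frame inputs are invented, and real systems add face tracking, quality filters, and human review queues:

```python
import numpy as np

# Illustrative real-time watchlist loop: embed each face in the frame,
# compare against stored faceprints, alert above a threshold. All values
# here are invented stand-ins.
rng = np.random.default_rng(3)
WATCHLIST = {"subject_001": rng.standard_normal(128)}  # name -> faceprint
ALERT_THRESHOLD = 0.65  # assumed operating point

def best_match(embedding: np.ndarray) -> tuple[str, float]:
    q = embedding / np.linalg.norm(embedding)
    scored = [(n, float(q @ (v / np.linalg.norm(v)))) for n, v in WATCHLIST.items()]
    return max(scored, key=lambda pair: pair[1])

def process_frame(face_embeddings: list[np.ndarray]) -> None:
    """Called once per video frame with the embeddings of detected faces."""
    for emb in face_embeddings:
        name, score = best_match(emb)
        if score >= ALERT_THRESHOLD:
            print(f"ALERT: possible watchlist match {name} (score {score:.2f})")
```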
In the United States, San Francisco became the first city to ban government use of facial recognition in 2019, under legislation authored by Supervisor Aaron Peskin. Several other cities followed. But state governments have in some cases moved to preempt local bans, and the patchwork of municipal prohibitions has not prevented federal agencies from operating facial recognition in cities that have banned it at the municipal level. ICE's Homeland Security Investigations division has used facial recognition in jurisdictions without public disclosure.
The convergence is the point. Clearview's database contains faceprints for a substantial portion of the adult population of the United States. Ring's mesh of cameras provides real-time video coverage of residential neighborhoods. Urban CCTV systems provide coverage of commercial and transit corridors. Real-time facial recognition applied to live feeds, searching against Clearview's database, would constitute continuous identification of individuals as they move through urban space. No single law prohibits this configuration. No single regulator has authority over all of its components.
What Actually Protects You
The practical options available to individuals are narrower than privacy advocates would prefer and broader than most people realize.
Under Illinois BIPA, Illinois residents can submit written requests to companies demanding disclosure of what biometric data they hold and requesting deletion. The right is enforceable and, given the settlement history, companies take it seriously. GDPR provides EU and UK residents with the right to erasure — the right to demand that a data controller delete their personal data, including biometric data. Enforcement is uneven but not absent: Clearview's European fines demonstrate that the mechanism has teeth. California's CPRA, which took effect in 2023, includes biometric data within its definition of sensitive personal information and grants deletion rights to California residents.
Technical countermeasures exist, with variable effectiveness. CV Dazzle, developed by artist Adam Harvey, uses high-contrast geometric makeup patterns to disrupt the facial landmark detection that most recognition systems rely on. Infrared LED arrays embedded in eyeglass frames can blind the IR sensors used in some camera systems. The Fawkes image cloaking tool, developed at the University of Chicago, adds imperceptible pixel-level perturbations to photos before they are posted online, causing facial recognition systems to encode an incorrect faceprint when they process the cloaked image. Independent tests found Fawkes effective against several commercial systems at release, though vendors can retrain their models on cloaked images, and its long-term effectiveness is contested.
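Conceptually, cloaking works by solving a small optimization problem: perturb the pixels, within an imperceptibility budget, so the encoder's output drifts toward a decoy identity. The sketch below uses a linear stand-in for the encoder so the gradient is exact; it illustrates the idea and is not Fawkes's implementation:

```python
import numpy as np

# Toy cloaking loop: projected gradient descent that pushes an image's
# embedding toward a decoy while capping per-pixel change. The "encoder"
# is a random linear map, not a real face model.
rng = np.random.default_rng(2)
W = rng.standard_normal((128, 3072)) * 0.02  # stand-in encoder (128-d output)
image = rng.random(3072)                     # flattened 32x32x3 "photo"
decoy = rng.standard_normal(128)             # embedding of a different face

x = image.copy()
for _ in range(200):
    grad = 2 * W.T @ (W @ x - decoy)             # d/dx of ||W x - decoy||^2
    x -= 0.001 * grad
    x = image + np.clip(x - image, -0.03, 0.03)  # imperceptibility budget

print("max pixel change:", round(float(np.abs(x - image).max()), 3))
print("embedding shift:", round(float(np.linalg.norm(W @ (x - image))), 3))
```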
What doesn't work: ordinary sunglasses, hats, and standard masks. Post-pandemic, several vendors have developed systems that perform recognition from periocular features — the eye region — or from gait, body shape, and clothing. Masks reduced the accuracy of legacy systems substantially; newer systems compensate. Physical countermeasures require deliberate effort and visible commitment in a way that most people will not sustain.
The fundamental problem is architectural: you cannot opt out of cameras you don't know about. Clearview's database was built from images taken years or decades before the company existed, in contexts where subjects had no reason to anticipate this use. A photo in a school yearbook from 2004, a news article from 2011, a vacation photo a friend posted and later deleted — these are all potential sources. The data is already out. The faceprint already exists. The question now is only who holds it and what they do with it.
The AI Assistant Vector
The biometric risk extends into a domain most users do not consider: AI tools that process images.
When you upload a photo to an AI assistant — for identification, for analysis, for any purpose — the image is processed by a vision model. That model may extract facial embeddings as part of its internal computation. Whether those embeddings are retained, logged, or associated with your account depends on the privacy architecture of the specific platform, which varies widely and changes without notice.
Apple's Photos AI, Google Lens, and ChatGPT's vision capability all process faces as part of their core function. Their data retention policies differ. Their logging practices are not always fully disclosed. Enterprise API implementations frequently retain request and response data for abuse monitoring and model improvement, unless explicit zero-retention agreements are negotiated — agreements typically available only to large enterprise clients.
The emerging frontier of biometric inference from non-biometric data compounds this risk. Gait analysis from accelerometer data can identify individuals with high accuracy from smartphone sensors alone — no camera required. Keystroke dynamics — the timing patterns of how you type — are highly distinctive and can be extracted from any typing session. Voice prints can be captured from any audio recording that includes speech. The convergence of these modalities means that biometric identification does not require a camera pointed at your face. It can emerge from the aggregate of behavioral signals that are already being collected, passively, by devices you carry voluntarily.
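Keystroke identification needs nothing more exotic than timestamps. A toy sketch of the feature extraction — real systems also use per-key hold times and digraph latencies, and the session data here is invented:

```python
# Toy keystroke-dynamics features: inter-key delays compressed into a
# (mean, variance) pair. Real systems use far richer per-key statistics.
def inter_key_intervals(timestamps_ms: list[float]) -> list[float]:
    """Delays between consecutive key presses: the raw behavioral signal."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def feature_vector(timestamps_ms: list[float]) -> tuple[float, float]:
    gaps = inter_key_intervals(timestamps_ms)
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return mean, var

session = [0.0, 110.0, 260.0, 340.0, 520.0, 610.0]  # invented timestamps, ms
print(feature_vector(session))  # a session signature to compare across users
```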
The Stakes
In 1890, Louis Brandeis and Samuel Warren published a law review article in the Harvard Law Review titled "The Right to Privacy." They were responding to a new technology: the portable camera, which for the first time allowed strangers to photograph people without their consent and publish the results in newspapers. "Instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life," they wrote.
The invasion they described was trivial by current standards. The portable camera could capture a moment. It could not encode that moment into a permanent, searchable database, match it against 50 billion other moments in milliseconds, correlate it with your financial history and movement patterns, and transmit the result to a government agency — without a warrant, without notice, without the possibility of erasure.
What is at stake is not privacy as an abstract right. What is at stake is the practical ability to move through public space without being continuously identified, catalogued, and potentially subjected to the errors of systems that are wrong up to 100 times more often when your face is Black than when it is white. It is the ability to attend a political protest without your presence being permanently recorded and searchable. It is the ability to apply for a job without having your facial geometry scored by a system whose predictive validity is unsupported by peer-reviewed science. It is the ability of Robert Williams to pull into his own driveway without being arrested in front of his daughters for a crime he did not commit.
The technology will not stop. The database will not shrink. The cameras will not come down. The question that remains is whether the legal infrastructure will be built in time to govern a technology that is already, right now, deployed at scale against populations that have no idea it exists.
The answer, as of today, is mostly no. Illinois is the exception. The rest of the country is the rule.
Key case citations: Williams v. City of Detroit (no formal civil rights case filed as of reporting); Parks v. Woodbridge Township, Superior Court of NJ; ACLU v. Clearview AI, N.D. Illinois; In re Facebook Biometric Info. Privacy Litig., N.D. Cal., No. 3:15-cv-03747; Wilcosky v. Amazon.com, Inc., N.D. Illinois. NIST FRVT 2019: NISTIR 8280. CBP GAO report: GAO-23-105491. UK ICO Clearview decision: IC-152130-T1S4 (May 2022).