You can change your password. You can get a new credit card number. You can move to a new city. You cannot change your face. When facial recognition flags you incorrectly, there is no correction mechanism. When your biometric data is breached, the damage is permanent. When a company or a government builds a dossier on your face, that record does not expire, cannot be revoked, and follows you across every camera, every border, every store entrance for the rest of your life.
Welcome to the age of biometric surveillance — a technology that has been deployed faster than any legal or ethical framework designed to govern it, on populations that never consented to it, with error rates that fall hardest on the people who were already most vulnerable to state and corporate power.
Clearview AI: The Company That Scraped the World's Faces
In 2020, an investigation by the New York Times revealed that a then-obscure startup called Clearview AI had assembled the most comprehensive facial recognition database on earth — not by photographing people, but by scraping them. More than three billion images at the time of the report, a figure the company has since claimed exceeds 30 billion, hoovered from Facebook, Instagram, LinkedIn, Twitter, YouTube, and millions of other websites where people had posted photographs of themselves and others, without any notice that doing so was possible and without any mechanism to opt out.
Clearview did not ask. Clearview did not pay. Clearview did not notify the platforms it scraped or the individuals whose faces it catalogued. It simply built the database and sold search access to law enforcement agencies across the United States.
The client list, once secret and later leaked when Clearview itself was breached in 2020, included more than 3,000 police departments, along with the FBI, ICE, Customs and Border Protection, and dozens of federal agencies. Officers could upload a photograph — a surveillance still, a screenshot from a video, a photo taken at a protest — and receive a ranked list of potential matches with links to the web pages where the source images had appeared. The system identified, by name and social profile, people who had never been arrested, people who appeared in the background of photographs, people who had attended political demonstrations.
The legal and regulatory blowback was significant in jurisdictions with meaningful privacy law. Italy fined Clearview €20 million under GDPR. The UK's Information Commissioner's Office issued a £7.5 million penalty and an enforcement notice requiring deletion of UK residents' data. Canada's Privacy Commissioner concluded that Clearview's entire business model constituted a violation of Canadian privacy law, calling the mass collection of facial images without consent "mass surveillance." Australia, France, Greece, and Austria reached similar conclusions.
In the United States, the most significant constraint came not from a federal regulator but from a 2022 settlement of an ACLU lawsuit brought under Illinois's Biometric Information Privacy Act, which permanently barred Clearview from selling access to its database to most private companies nationwide and imposed limits on how it markets its services. The settlement also required Clearview to maintain an opt-out mechanism for Illinois residents, who can submit their own photograph to have matching images blocked from appearing in search results.
That Clearview is still operating, still selling to government clients, and still holds 30+ billion scraped images is not an oversight. It is the predictable outcome of a country that has never passed comprehensive biometric privacy legislation at the federal level.
NIST Accuracy Disparities: Not All Faces Are Equal
The assumption underlying facial recognition deployment — that the technology is an objective, neutral tool — has been methodically demolished by the research literature. The most authoritative source is the National Institute of Standards and Technology's Face Recognition Vendor Test (FRVT) program, whose 2019 demographic effects study evaluated 189 facial recognition algorithms from 99 developers against more than 18 million photographs drawn from law enforcement databases, visa and immigration applications, and border crossings.
The findings are unambiguous and have been consistently replicated across multiple evaluation cycles: false positive rates — the rate at which the algorithm incorrectly matches a probe image to someone who is not that person — are 10 to 100 times higher for Black, Asian, and Indigenous faces compared to white male faces. Black women experience the highest false positive rates of any demographic group tested.
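To make the metric concrete, here is a minimal sketch of how a false match rate is computed per demographic group. The data and group labels are invented for illustration; this is not NIST's dataset or methodology, only the arithmetic behind the disparity figures.

```python
from collections import defaultdict

def false_match_rates(comparisons):
    """comparisons: iterable of (group, same_person, system_said_match) tuples.

    Returns, per group, the fraction of different-person comparisons that the
    system incorrectly declared a match -- the false match rate."""
    impostor_trials = defaultdict(int)
    false_matches = defaultdict(int)
    for group, same_person, said_match in comparisons:
        if not same_person:              # only impostor (different-person) pairs count
            impostor_trials[group] += 1
            if said_match:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

# Invented toy data: a system that false-matches ten times more often for group B.
records = (
    [("A", False, False)] * 9990 + [("A", False, True)] * 10 +
    [("B", False, False)] * 9900 + [("B", False, True)] * 100
)
print(false_match_rates(records))   # {'A': 0.001, 'B': 0.01}
```

A tenfold gap in that number means ten times as many innocent people surfaced as candidate matches whenever the system is pointed at that group.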
These disparities are not mysterious. They reflect the composition of the training data on which these systems were developed. Facial recognition algorithms learn statistical patterns from labeled datasets. When those datasets overrepresent white male faces and underrepresent every other demographic, the resulting systems perform well on faces that look like the training data and poorly on everyone else.
Joy Buolamwini, a researcher at the MIT Media Lab, made the disparity viscerally concrete in her 2018 Gender Shades project with Timnit Gebru, which audited commercial facial analysis products from IBM (Watson Visual Recognition), Microsoft (Face API), and Face++. IBM's error rate on darker-skinned women approached 35 percent; Microsoft's system performed better but still showed substantial disparities by skin tone. A 2019 follow-up audit with Deborah Raji extended the test to Amazon's Rekognition, which misclassified darker-skinned women as male 31 percent of the time.
Face++ (Megvii), a Chinese provider whose technology is deployed extensively in authoritarian surveillance contexts, showed similar patterns. The conclusion Buolamwini drew has proven durable: these systems are deployed most aggressively on the communities they were trained least well to recognize.
Amazon Rekognition and the Police State
Amazon Web Services began marketing Rekognition to law enforcement agencies in 2016, positioning it as a tool that could match faces against databases of criminal mugshots and identify persons of interest in video footage. It is offered as a standard AWS service and has been integrated into products used by police departments across the United States.
In 2018, the American Civil Liberties Union conducted a now-famous test: they ran photographs of all 535 members of Congress through Rekognition, using a database of 25,000 publicly available arrest photographs. The system returned 28 false matches — members of Congress incorrectly identified as people who had been arrested. Forty percent of those false matches were members of color, despite members of color comprising only 20 percent of Congress at the time.
Amazon's response to the ACLU test was to dispute the methodology and recommend a higher confidence threshold for law enforcement use. They did not withdraw the product. In 2020, following the nationwide protests after the murder of George Floyd, Amazon announced a one-year moratorium on police use of Rekognition. The moratorium expired in 2021, and Amazon lifted it without announcing any new restrictions.
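The threshold at the center of that dispute is just a parameter passed to the API. Below is a minimal sketch, assuming boto3 credentials and two local image files; the point is that the operator, not the vendor, decides what counts as a match. Amazon's guidance after the ACLU test was a 99 percent similarity threshold rather than the 80 percent default the ACLU used.

```python
import boto3

def face_matches(source_path, target_path, threshold=99):
    """Compare two images with Amazon Rekognition and return matches at or
    above `threshold` percent similarity. The threshold is entirely the
    caller's choice: the ACLU test used the 80 percent default, and Amazon's
    later guidance for law enforcement was 99."""
    client = boto3.client("rekognition")
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,
        )
    # Each match carries the similarity score Rekognition assigned to it.
    return [(m["Similarity"], m["Face"]["BoundingBox"]) for m in response["FaceMatches"]]
```

Nothing in the service enforces that guidance; the threshold is whatever the operator passes in.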
ICE and Customs and Border Protection contracts have continued. Amazon's Ring doorbell camera network — deployed in roughly 11 million homes as of 2024 — has established partnership agreements with more than 2,000 police departments, allowing officers to request footage from Ring cameras without a warrant in many jurisdictions. Amazon holds the footage. Homeowners may not know their cameras have been accessed. The infrastructure of a neighborhood surveillance network has been built on consumer hardware sold as a security convenience.
Illinois BIPA: The Law That Actually Has Teeth
The most consequential privacy legislation in American history regarding biometric data was passed in 2008 by the Illinois state legislature. The Biometric Information Privacy Act was not written in response to facial recognition — it was written after the bankruptcy of Pay By Touch, a fingerprint-payment company that had collected biometric data from Illinois consumers and whose collapse left that data's fate to be decided in bankruptcy proceedings. But the law it produced has reshaped the litigation landscape for every technology company that touches biometric data.
BIPA requires written consent before any biometric data is collected. It requires companies to maintain a data retention schedule and to destroy biometric data within a specified time period. It prohibits selling or otherwise profiting from biometric data. And it provides for liquidated damages: $1,000 per negligent violation, $5,000 per intentional or reckless violation — with no requirement that plaintiffs prove actual harm.
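To see why exposure escalates so quickly, here is a hypothetical back-of-the-envelope sketch using the statute's per-violation figures and an invented class size. Under the Illinois Supreme Court's 2023 ruling in Cothron v. White Castle, claims can accrue with each scan rather than once per person, which pushes theoretical exposure higher still.

```python
# Hypothetical illustration of BIPA exposure. The per-violation figures come
# from the statute (740 ILCS 14/20); the class size and violation counts are
# invented for illustration only.
NEGLIGENT_DAMAGES = 1_000   # dollars per negligent violation
RECKLESS_DAMAGES = 5_000    # dollars per intentional or reckless violation

class_members = 100_000          # hypothetical number of affected individuals
violations_per_member = 1        # a single collection without consent each

negligent_exposure = class_members * violations_per_member * NEGLIGENT_DAMAGES
reckless_exposure = class_members * violations_per_member * RECKLESS_DAMAGES

print(f"Negligent exposure: ${negligent_exposure:,}")   # $100,000,000
print(f"Reckless exposure:  ${reckless_exposure:,}")    # $500,000,000
```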
The private right of action, combined with statutory damages, has produced the largest privacy class action settlements in American legal history. Meta paid $650 million in 2021 to settle a class action over Facebook's face-tagging feature, which automatically suggested the names of people in uploaded photographs without explicit consent. TikTok paid $92 million in 2021. Google paid $100 million in 2022 for the same face-tagging behavior in Google Photos. L'Oréal, Snap, 7-Eleven, Walmart, and dozens of other companies have paid additional settlements.
The pattern is consistent: companies collected biometric data without consent, argued that BIPA did not apply to them or that the damages were disproportionate, and ultimately settled for nine-figure sums when class certification succeeded.
Three states have enacted BIPA-comparable frameworks: Illinois, Texas, and Washington. Forty-seven states have no meaningful biometric privacy protection. The FTC has attempted to fill the gap using its Section 5 authority over unfair or deceptive practices, but FTC enforcement actions require case-by-case litigation and do not provide the private right of action that makes BIPA effective. Federal comprehensive biometric privacy legislation has been introduced in multiple congressional sessions and has not passed.
HireVue and Emotion Recognition in Hiring
HireVue's AI video interview system exemplifies a category of biometric surveillance that is invisible to most people because it is embedded in a process they experience as normal: the job application. Used by more than 700 enterprise clients — including Unilever, Goldman Sachs, and Hilton — HireVue analyzed recorded video interviews with a system that processed facial action units, vocal tone, pitch and pace variations, word choice, eye contact patterns, and background characteristics.
The company claimed these inputs could assess "cognitive ability," "emotional intelligence," and "job fit" — that watching someone answer interview questions through a camera, and feeding the footage through a machine learning model, produced a reliable prediction of future job performance.
In 2019, the Electronic Privacy Information Center filed a complaint with the FTC alleging that HireVue's assessment practices were unfair and deceptive; in 2021, amid mounting criticism, HireVue announced it was dropping the facial analysis component of its assessments while continuing to analyze language and voice. Illinois passed the Artificial Intelligence Video Interview Act (HB 2557), which requires employers to tell applicants that AI will be used to analyze their video interviews, explain how the system works, and obtain consent before proceeding. These are disclosure requirements, not prohibitions.
The deeper problem extends beyond HireVue. Affectiva and iMotions, both acquired by Smart Eye in 2021, sell emotion recognition technology to enterprises across industries — automotive, education, media research, and employment. These systems were trained primarily on Western, young, neurotypical faces. The implications are consequential: individuals with ADHD or autism spectrum conditions, whose facial expression patterns differ from the training distribution, may register as unengaged. Individuals with Bell's palsy, Parkinson's disease, or conditions affecting facial musculature may register as deceptive or emotionally flat. Individuals experiencing depression may have their affect misread as disengagement. The system does not know what it does not know about the face in front of it. The applicant pays the price.
Live Facial Recognition: The Surveillance State Goes Real-Time
Until recently, facial recognition was primarily a forensic tool — used to identify suspects after an event, against a photograph. The technology has now moved into real-time deployment, scanning crowds as they pass through cameras, generating matches in seconds.
The UK Metropolitan Police deployed live facial recognition cameras at the coronation of King Charles III in May 2023, scanning the faces of tens of thousands of attendees in central London. The same technology has been deployed at Premier League football matches, cricket internationals, shopping centers, and transport hubs such as Liverpool Street station. Each deployment scans thousands of faces. There is no opt-out. Individuals who cover their faces or avoid the cameras have been stopped by officers and, in some cases, fined.
A legal challenge in Wales — R (Bridges) v South Wales Police — reached the Court of Appeal, which found South Wales Police's deployment of live facial recognition unlawful under Article 8 of the European Convention on Human Rights. The grounds were largely procedural: the legal framework left too much discretion to individual officers over who went on watchlists and where cameras were deployed, the force's data protection impact assessment was deficient, and it had not discharged its public sector equality duty. The court did not ban live facial recognition outright. The Metropolitan Police, operating under different governance, continued deployments.
In China, facial recognition cameras number in the hundreds of millions. The system has been deployed for real-time social credit scoring, tracking the movement of Uyghur Muslims in Xinjiang across checkpoints, identifying jaywalkers, and shaming tax delinquents on public billboards. The infrastructure makes China's system the most extensive biometric surveillance network in human history, but Western governments and corporations have adopted its components with less public notice.
At United States airports, Customs and Border Protection uses facial recognition at 97 percent of international flight departures, comparing departing passengers' faces against passport and visa photographs. CBP has described participation as voluntary. Studies and investigations have found that most travelers do not know they can opt out, that airline staff do not consistently inform them, and that the opt-out process is often slower or more inconvenient than the facial recognition lane — producing de facto coerced consent at scale.
Madison Square Garden: Corporate Banishment by Face
In late 2022, a story emerged from New York City that illustrated a use of facial recognition that had received almost no regulatory attention: corporate weaponization against legal opponents.
MSG Entertainment, which operates Madison Square Garden, Radio City Music Hall, the Beacon Theatre, and the Chicago Theatre, deployed facial recognition systems at the entry points of all its venues. The stated purpose was security and fan experience. The actual use that drew public attention was the identification and ejection of attorneys employed at law firms engaged in active litigation against MSG.
Lawyers who had purchased tickets for concerts, sporting events, and performances unrelated to any legal matter were stopped at the door, identified by facial recognition, and turned away. MSG maintained an internal watchlist of attorneys at firms that had filed suit against the company. The system matched incoming faces against the watchlist and flagged them to security staff.
The New York Attorney General's office opened an investigation. MSG defended the practice as a legitimate exercise of its right to exclude people from private property. No federal law prevents a private company from using facial recognition to identify and ban individuals for any reason, including their employment at a law firm that has sued the company.
This case represents a category of harm that pre-existing frameworks were not designed to address. The attorneys were not misidentified — the system worked correctly. The problem was what the correctly-working system was used for.
Retail Surveillance: FaceFirst and the Loss Prevention Pipeline
FaceFirst markets facial recognition software to retailers for "loss prevention" — the industry term for theft prevention. The system works by maintaining internal watchlists of individuals who have previously shoplifted at the retailer's stores, been banned from the premises, or otherwise been flagged. When a customer enters a store, their face is compared against the watchlist. A match triggers a security alert.
The system has no transparency mechanism for customers. There is no way to know whether your face is on a retailer's watchlist. There is no appeal process for a false match. A false match can mean wrongful detention by security staff and, in some cases, a call to local police.
Ahold Delhaize, the parent company of Stop & Shop and Giant Food, deployed facial recognition across more than 700 stores. Walmart piloted the technology and subsequently suspended deployment following an internal study that found the system was not reducing theft at a cost-effective rate — a conclusion notable for being based on commercial calculus rather than civil liberties concerns. Walgreens, H&M, and Macy's have all tested facial recognition in retail contexts.
The pipeline from retail facial recognition to law enforcement cooperation is not hypothetical. Private watchlists are routinely shared with police departments. An individual who appears on a retail loss prevention database may have their information transmitted to law enforcement systems without their knowledge. The boundary between private surveillance and state surveillance is porous, and the data flows in one direction.
Wrongful Arrests: When the Algorithm Is Wrong
Robert Williams was at his home in suburban Detroit in January 2020 when two police officers arrived and arrested him in front of his wife and daughters. He was handcuffed, transported to a detention facility, and held for 30 hours before being released. The charge was theft of watches from a luxury store. The evidence was a facial recognition match generated by Michigan State Police using the DataWorks Plus system.
Williams had not stolen the watches. The match was wrong. When a detective showed Williams a photograph of the actual suspect alongside a DMV photo of Williams, Williams held them side by side and said: "I hope you all don't think all Black men look alike." The case was dismissed. Williams, represented by the ACLU of Michigan, later sued the Detroit Police Department over the arrest; the city settled in 2024, agreeing among other things that a facial recognition match alone can never serve as the basis for an arrest.
His case is documented as the first publicly confirmed wrongful arrest in the United States based solely on a facial recognition match — meaning the match was used as sufficient basis to arrest without meaningful additional investigation. It was not the last.
Nijeer Parks was wrongfully arrested in New Jersey in 2019 and jailed for ten days after a facial recognition system misidentified him as a suspect in a shoplifting and hit-and-run incident. He was 30 miles away from the scene at the time of the crime and had never been to the city where the incident occurred. The misidentification was discovered only through persistence and legal representation.
Michael Oliver was wrongfully arrested in Michigan after a facial recognition match linked him to a larceny caught on a cellphone video. In 2023, Porcha Woodruff — eight months pregnant — was arrested at her Detroit home on charges of robbery and carjacking, based on a facial recognition match generated from a grainy surveillance image. She was detained for eleven hours before being released. The charges were later dismissed. Woodruff subsequently filed a lawsuit against the City of Detroit.
At least seven documented wrongful arrests based on facial recognition have been confirmed in the United States, and the pattern is consistent: nearly all of the people arrested were Black; the facial recognition match was treated as sufficient grounds for arrest without corroborating investigation; and the errors were discovered only because the wrongly arrested individuals were able to demonstrate their innocence through alibi evidence after the fact.
What the pattern reveals is not merely that the technology makes mistakes. It reveals how the mistakes are acted upon. Law enforcement agencies using facial recognition have, in documented cases, used a probabilistic algorithmic output as though it were a positive identification — and then conducted investigations designed to confirm the match rather than test it.
The Permanence Problem and the Biometric Identity Crisis
Every security system operates on a spectrum of recoverability. A compromised password can be changed in minutes. A stolen credit card number can be cancelled and reissued within hours. A compromised Social Security number is genuinely damaging — it can take years to resolve identity fraud — but there are processes, however imperfect, for addressing it. There are fraud alerts, credit freezes, dispute mechanisms.
A compromised face cannot be changed. A compromised fingerprint cannot be reissued. A compromised voice print, iris scan, or gait signature is permanently associated with the person who generated it. This is not a limitation of current technology that will be solved by future development. It is a structural property of biometric data — the reason it works as an identifier is the same reason a breach is catastrophic.
The 23andMe breach of 2023, which exposed genetic data for 6.9 million people, illustrated the permanence problem in its most extreme form. Genetic data is not merely permanent for the individual — it is predictive for biological relatives who never consented to data collection and may not know their information has been compromised. The data identifies family members, ancestral origins, and health predispositions. None of it can be changed. None of it expires.
Clearview AI was itself breached in February 2020. The intrusion exposed the company's full client list, revealing which government agencies and police departments had accounts and how many searches each had run. A breach at the company holding the most sensitive private facial recognition database in the world produced no criminal charges against the perpetrators, no suspension of Clearview's operations, and no legal recourse for the billions of individuals whose faces remained in the database.
The National Institute of Standards and Technology has stated explicitly in its guidance on biometric system design that breach must be treated as an inevitable outcome to be mitigated rather than an unlikely event to be prevented. The question for organizations collecting biometric data is not whether they will be breached but what the consequences of the breach will be for the individuals whose permanent, irrevocable data they hold.
The "right to be forgotten," codified in the EU's General Data Protection Regulation, gives individuals in European jurisdictions the legal right to request deletion of their personal data. Enforcement against Clearview has demonstrated both the reach and limits of that right: regulators in multiple countries have ordered Clearview to delete European residents' data, and Clearview has not fully complied with those orders. For individuals in the United States, no comparable right exists under federal law.
The Regulatory Gap and What Comes Next
The European Union's AI Act, which entered into force in 2024, contains the most comprehensive restrictions on facial recognition in any major jurisdiction. Real-time facial recognition in publicly accessible spaces is prohibited, with narrow exceptions for law enforcement use in cases involving serious crimes, terrorism, or missing persons — and even those exceptions require judicial or independent administrative authorization. Biometric categorization systems that infer sensitive characteristics such as political opinion, religious belief, or sexual orientation from faces are prohibited outright. Emotion recognition in workplaces and educational institutions is prohibited as well, with narrow exceptions for medical and safety uses.
The AI Act does not apply in the United States.
The EU's General Data Protection Regulation classifies biometric data as "special category" personal data, requiring explicit, specific consent for processing — meaning that the Clearview model of mass scraping without consent is illegal across the European Economic Area. Again, this does not apply in the United States.
The American federal regulatory landscape consists primarily of the FTC's general authority under Section 5 of the FTC Act to prohibit unfair or deceptive practices. The FTC has used this authority against companies that misrepresented their facial recognition practices or violated their own privacy policies, and in individual cases has ordered deletion of improperly collected face data and of models trained on it. What it cannot do is compel consent economy-wide or impose the kind of structural prohibitions that GDPR and the AI Act contain.
The state patchwork has produced meaningful protection in a small number of jurisdictions. Illinois BIPA remains the strongest framework. Washington's My Health My Data Act, enacted in 2023, covers health-related biometric information. Colorado's AI Act, effective 2026, requires impact assessments for high-risk AI systems including facial recognition. These are meaningful advances that leave the vast majority of Americans outside their protection.
The FBI's Next Generation Identification Interstate Photo System (NGI-IPS) holds tens of millions of face images drawn from arrest records and other criminal justice submissions, and the bureau's FACE Services unit can additionally run searches against state driver's license, passport, and visa photo databases covering hundreds of millions of people. A significant portion of the individuals searchable through these systems have never been charged with a crime. People who have applied for a driver's license or a passport may be searchable in a federal facial recognition pipeline without their knowledge. There is no general public disclosure requirement, no right to know whether you are in the database, and no mechanism to request removal.
What Actually Protects You — and What Doesn't
The intuitive countermeasures against facial recognition — sunglasses, hats, scarves — are substantially less effective than people assume. Modern gait analysis systems can identify individuals from the pattern of their movement, independent of facial visibility, with high accuracy at distances up to 50 meters. Ear recognition algorithms can identify individuals from partial profiles where the face is not visible. These modalities are increasingly combined with facial recognition in production surveillance systems.
Masks provide partial protection. The COVID-19 pandemic forced rapid adaptation of commercial facial recognition systems, and testing by the Department of Homeland Security Science and Technology Directorate found that accuracy dropped substantially with full-face coverage but recovered to roughly 74 percent even with surgical masks covering the nose and mouth. Higher-quality systems performed better still.
A small number of technical countermeasures have demonstrated some efficacy in controlled conditions. CV Dazzle, developed by artist Adam Harvey, uses high-contrast makeup patterns and hair configurations designed to disrupt the face detection algorithms that recognition pipelines rely on as a first step. It works, under controlled conditions, against some older detection algorithms. The social cost — appearing visibly unusual in public spaces — limits its practical adoption. Infrared LED glasses can blind camera sensors that capture in the near-infrared spectrum, since human eyes cannot detect the light but camera sensors can. This works against some surveillance cameras and not others.
The Fawkes tool, developed by researchers at the University of Chicago's SAND Lab, takes a different approach: it subtly alters photographs before they are posted online, in ways imperceptible to the human eye but designed to poison facial recognition training data. If an individual consistently posts Fawkes-cloaked images, their face as it appears in training data is shifted away from their actual facial geometry. The tool has been demonstrated to reduce recognition accuracy significantly on some commercial systems. It requires proactive adoption before data collection occurs and provides no protection against images already in circulation.
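Conceptually, Fawkes computes a small, bounded perturbation that drags a photo's representation in a face recognition model's feature space toward a different identity. The sketch below shows that idea in simplified form, assuming PyTorch and a pretrained face embedding model standing in for `feature_extractor`; it is not the actual Fawkes algorithm, which adds perceptual constraints and careful decoy selection.

```python
import torch
import torch.nn.functional as F

def compute_cloak(image, decoy_image, feature_extractor, epsilon=0.03,
                  steps=200, lr=0.01):
    """Compute a small, bounded perturbation that drags `image`'s embedding
    toward `decoy_image`'s embedding -- a simplified sketch of feature-space
    cloaking, not the actual Fawkes algorithm.

    `image` and `decoy_image` are float tensors in [0, 1] with shape
    (1, 3, H, W); `feature_extractor` is any pretrained face embedding model."""
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        decoy_features = feature_extractor(decoy_image)
    for _ in range(steps):
        optimizer.zero_grad()
        cloaked = torch.clamp(image + delta, 0.0, 1.0)
        # Pull the cloaked photo's features toward the decoy identity.
        loss = F.mse_loss(feature_extractor(cloaked), decoy_features)
        loss.backward()
        optimizer.step()
        # Keep the perturbation small enough to be visually imperceptible.
        delta.data.clamp_(-epsilon, epsilon)
    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

Because the perturbation is clipped to a small epsilon, the cloaked photo looks unchanged to a person while its embedding no longer sits where the subject's real face does.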
Legal protections remain the most robust recourse available to most people. Illinois BIPA requires opt-in consent before collection, which means that if you are in Illinois and a company has collected your biometric data without consent, they may have violated a law with real financial penalties enforceable through private litigation. GDPR's right to erasure allows EU residents to request deletion of their biometric data from systems subject to European law. These protections are meaningful, uneven in their geographic reach, and largely unknown to the people they are designed to protect.
TIAMAT Privacy Proxy: AI Systems and Biometric Inference
The intersection of AI assistants and biometric data introduces a category of risk that is less visible than surveillance cameras but not less significant. When an AI system is asked to analyze a photograph — "who is this person," "describe this face," "identify the individuals in this image" — it generates biometric inferences: descriptions, identifiers, and comparisons that constitute biometric data under most legal definitions, whether or not a formal recognition match is performed.
These inferences, when generated through cloud-based AI providers, may be logged, used for model training under certain data handling agreements, or retained in ways that create exposure for the organizations deploying the system and the individuals whose images were processed. A company that routes employee or customer photographs through a commercial AI API without a data processing agreement may be inadvertently creating biometric records in third-party systems — records it cannot audit, retrieve, or delete.
TIAMAT's Privacy Proxy addresses this at the infrastructure layer. Biometric inference prompts — requests to identify, describe, or analyze faces in images — are detected and flagged before they reach provider logging systems. Image analysis requests are routed through a scrubbing layer that strips personally identifying visual information before cloud processing occurs. For organizations deploying AI systems on employee scheduling, customer service, or security applications where photographs may be part of the data pipeline, the proxy provides a compliance-oriented separation between operational AI processing and biometric data creation.
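The mechanics are easier to see in miniature. The sketch below is an illustrative stand-in, not TIAMAT's implementation: the patterns, function names, and routing decisions are assumptions about how a request screen in front of a cloud AI provider might behave.

```python
import re

# Illustrative patterns for requests that ask a model to identify or
# biometrically characterize a person. A production proxy would use a
# trained classifier rather than keyword rules.
BIOMETRIC_INFERENCE_PATTERNS = [
    r"\bwho is (this|the) person\b",
    r"\bidentify (the|this|these) (person|people|face|faces|individuals?)\b",
    r"\b(recognize|match) (this|the|these) face",
    r"\bdescribe (his|her|their|this person's) face\b",
]

def screen_request(prompt: str, has_image: bool) -> dict:
    """Decide what to do with a request before it leaves the organization."""
    lowered = prompt.lower()
    flagged = any(re.search(p, lowered) for p in BIOMETRIC_INFERENCE_PATTERNS)
    if flagged and has_image:
        # An image plus an identification request would create a biometric
        # inference in the provider's logs; refuse to forward it.
        return {"action": "block", "reason": "biometric identification request"}
    if has_image:
        # Forward, but only after a face-redaction pass so no identifiable
        # facial data reaches the cloud provider.
        return {"action": "redact_faces_then_forward"}
    return {"action": "forward"}

print(screen_request("Who is this person? Find their social media.", has_image=True))
# {'action': 'block', 'reason': 'biometric identification request'}
```

A production proxy would pair the redaction path with an actual face-scrubbing step and replace keyword rules with a classifier, but the decision point is the same: the request is classified before any biometric inference can be created downstream.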
The principle is straightforward: the least risky biometric data is the data that was never collected. Systems that generate biometric inferences as a side effect of other operations — without disclosure, without consent, without a data retention policy — create legal and ethical exposure that scales with deployment. Stripping that inference capability at the prompt and image level, before data reaches external systems, is the architectural equivalent of not collecting the data in the first place.
Conclusion: The Asymmetry That Defines the Problem
The fundamental asymmetry of biometric surveillance is this: the entities deploying it accumulate permanent records, and the individuals subject to it have no recourse once the data exists. A database entry is created in milliseconds. Correcting a wrongful arrest takes months or years. Recovering from a misidentification in a retail watchlist may be impossible if you do not know the watchlist exists. Reclaiming your face from 30 billion images in a commercial database is not something any individual can accomplish regardless of their resources or determination.
Robert Williams was arrested at his home in front of his children because an algorithm made a mistake, and the humans operating the algorithm trusted it more than they trusted their own investigative instincts. Porcha Woodruff, eight months pregnant, spent eleven hours in a holding cell for the same reason. Neither of them consented to having their biometric data in the Michigan State Police system. Neither of them had any knowledge that a facial recognition match had been run against their records. Neither of them had any appeal process to exercise before the knock came at the door.
The technology industry has positioned facial recognition as a security tool, an efficiency tool, a convenience. The documented record — wrongful arrests, discriminatory error rates confirmed by federal testing, corporate banishments, mass surveillance deployments without consent, a 30-billion-image database assembled by scraping — positions it as something else: the most invasive surveillance infrastructure ever deployed on a civilian population, built faster than any institution designed to govern it, and embedded in contexts from airports to grocery stores to job interviews where individuals have little practical ability to refuse.
Changing the trajectory of biometric surveillance will require federal legislation with real teeth — a national BIPA equivalent with private rights of action, meaningful penalties, and mandatory opt-in consent. It will require accurate, accessible information about what the technology does and does not do, delivered to the people most likely to be harmed by its errors. And it will require recognizing that permanent data, collected without consent, on populations who were never told it was happening, is not a neutral technical development.
It is a choice. And the window for making a different one is narrowing.