
The AI That Won't Hire You: How Algorithmic Discrimination Is Scaling in the Workplace

The Three-Second Rejection

Emily Walsh submitted her application at 9:14 a.m. on a Tuesday. She uploaded a resume listing her bachelor's degree in communications from Michigan State, three years of sales experience, and a 94% customer satisfaction rating at her current job. By 9:17 a.m., she had an automated email in her inbox: a link to schedule a HireVue video interview. She was on her way.

Darnell Washington submitted the same application eleven minutes later. Same job title: Regional Account Manager. Same employer: a Fortune 500 consumer goods company. Same qualifications on paper — degree, years of experience, sales metrics — because both resumes came from the same resume-writing service. Darnell's rejection email arrived at 9:25 a.m. and 53 seconds — within three seconds of the system ingesting his application.

Neither Emily nor Darnell knows that an algorithm made the call. The rejection email Darnell received said only that "after careful review, we've decided to move forward with other candidates." No human reviewed anything. The system scored his application, compared the score against a threshold, and fired off a template. In the time it takes to read a single paragraph of a resume, the AI had ended his candidacy.

This is not a hypothetical. The names are composites — constructed from documented patterns across dozens of discrimination cases, academic audits, and regulatory investigations — but the three-second rejection is real. The demographic signal that triggered it is real. And the invisibility of the mechanism to the candidate is one of the most consequential design features of modern algorithmic hiring.


The Hiring Pipeline You Never See

Most job seekers experience the hiring process as a series of silences. You apply. You wait. You hear nothing, or you receive a form rejection. What you do not see is the automated assembly line your application passed through before any human being glanced at your name — if one ever did.

The modern AI hiring stack typically operates in layers. First, your LinkedIn profile and public social media presence may be scraped and scored before you submit anything. Then your resume enters an Applicant Tracking System — software from companies like Workday, iCIMS, Greenhouse, or Lever — where it is parsed, keyword-matched against the job description, and ranked against other applicants. If you survive that filter, you may be routed to a behavioral game assessment platform like Pymetrics or Harver, where a series of cognitive exercises generate a psychometric profile. Clear that gate, and you face a video interview tool like HireVue, which records your face, voice, and word choice and runs them through a machine learning model that generates a score. Only after all of these automated checkpoints will a human recruiter see your name — and by then, most candidates have already been eliminated.
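
To make the funnel concrete, here is a minimal sketch of that staged-filter architecture in Python. The stage names, thresholds, and scoring functions are illustrative stand-ins, not any vendor's actual implementation; the point is the control flow: fail any gate and no human ever sees your name.

```python
from typing import Callable

# Each stage: (label, scoring function, pass threshold). A candidate who
# falls below any threshold is rejected immediately and never reaches a human.
Stage = tuple[str, Callable[[dict], float], float]

def run_pipeline(candidate: dict, stages: list[Stage]) -> str:
    for label, score_fn, threshold in stages:
        score = score_fn(candidate)
        if score < threshold:
            # Automated rejection: a templated email, no human review.
            return f"rejected at '{label}' (score {score:.2f} < {threshold})"
    return "forwarded to a human recruiter"

# Illustrative stand-ins for the ML models at each layer.
stages: list[Stage] = [
    ("social media scrape", lambda c: c.get("social_score", 0.5),     0.30),
    ("ATS keyword match",   lambda c: c.get("keyword_overlap", 0.0),  0.60),
    ("game assessment",     lambda c: c.get("psychometric_fit", 0.0), 0.50),
    ("video interview",     lambda c: c.get("video_score", 0.0),      0.70),
]

print(run_pipeline({"keyword_overlap": 0.55}, stages))
# -> rejected at 'ATS keyword match' (score 0.55 < 0.6)
```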

Industry estimates suggest that at large employers, AI filters reject 75% or more of applicants before a human ever intervenes. The Electronic Privacy Information Center has documented cases where candidates applied to hundreds of positions at companies using the same ATS and received zero human contact. The system is efficient. It is also, in measurable and documented ways, discriminatory — not as a malfunction, but as an emergent property of how these systems are built.


HireVue: Reading Your Face for a Job

HireVue is headquartered in South Jordan, Utah, and its pitch is straightforward: let an AI watch a candidate's video interview and tell the employer whether that candidate is worth a human recruiter's time. The platform analyzes micro-expressions, vocal tone, speech patterns, eye contact, word choice, and dozens of other behavioral signals during asynchronous video interviews — recordings you submit without a live interviewer present. Its algorithms claim to assess "cognitive ability," "emotional intelligence," and "job fit" from this data.

The client list is not a fringe curiosity. Unilever, Delta Air Lines, Goldman Sachs, Nike, and more than 700 other enterprise clients have used HireVue to screen candidates. Unilever alone credited the platform with cutting hiring time by 75% and claimed it increased workforce diversity. That last claim is the one that has attracted sustained scrutiny.

In 2021, the Federal Trade Commission sent HireVue a warning letter concerning deceptive practices related to its facial expression analysis. Facing regulatory and public pressure, HireVue quietly discontinued facial analysis — a significant climbdown from its prior marketing. But the platform continues to analyze vocal tone and verbal content, features that carry their own embedded biases.

The Equal Employment Opportunity Commission opened a separate investigation into AI-driven hiring tools, including video interview platforms. Illinois, which has produced some of the most aggressive domestic AI regulation, passed the Artificial Intelligence Video Interview Act (HB 2557) in 2019 and amended it in 2023; the law requires employers using AI video analysis to notify candidates, obtain their consent, explain what characteristics the AI is evaluating, and conduct annual bias audits. It has no federal equivalent.

The structural problem that no disclosure requirement can fully address is this: HireVue's models, and those of its competitors, are calibrated to identify candidates who resemble the "top performers" at a given company. Those top performers were hired by humans with their own biases, evaluated under performance metrics that may themselves encode bias, and are disproportionately likely to share demographic characteristics with the managers who advanced them. The AI learns to replicate those patterns with extraordinary efficiency.
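
The mechanism is easy to reproduce on synthetic data. The sketch below (all numbers invented) trains a screener on historical hiring labels that carry direct bias, withholds the protected attribute from the model, and shows that selection rates still diverge, because a correlated "neutral" feature leaks the signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                      # protected attribute (0/1)
skill = rng.normal(0, 1, n)                        # legitimate signal
proxy = skill + 1.5 * group + rng.normal(0, 1, n)  # "neutral" feature that leaks group

# Historical hiring: partly skill, partly direct bias toward group 1.
hired = (skill + 1.0 * group + rng.normal(0, 1, n)) > 1.0

# Train the screener WITHOUT the protected attribute.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict_proba(X)[:, 1] > 0.5

for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.1%}")
# The model never sees `group`, yet selection rates diverge sharply,
# because `proxy` lets it reconstruct the historical bias.
```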

For neurodivergent candidates, the consequences are acute. Individuals on the autism spectrum often exhibit non-neurotypical communication patterns: atypical eye contact, flat affect, deliberate speech, unconventional facial expression. HireVue's models — and the facial analysis tools it has since discontinued — were trained on data that treats neurotypical communication as the baseline for competence. An autistic candidate who is highly qualified for a role may be systematically downscored for behavioral signals that have no relationship to job performance. The same applies to candidates with ADHD, social anxiety disorders, or any condition that produces communication patterns outside the algorithm's learned norm. No disclosure requirement tells you that the system docked your score for looking away from the camera.


Amazon's Scrapped Gender Machine

Amazon does not publicize the details of what its internal Machine Learning Solutions Lab built between 2014 and 2017, but Reuters reported the story in 2018 in enough technical detail to make the mechanism clear. Amazon's team built an automated resume screening tool trained on ten years of resumes submitted to Amazon and the profiles of people who had been hired and performed well. The intent was a five-star rating system that would surface the best candidates from large applicant pools.

The model learned to replicate the patterns in its training data. Amazon's successful hires over the prior decade had been predominantly male, particularly in technical roles. The model, having learned what a successful Amazon hire looked like, downranked resumes that included signals associated with women. It penalized the phrase "women's chess club." It downranked resumes from graduates of all-women's colleges. It applied penalties for certain language patterns more common in women's resumes.

When Amazon's engineers discovered what the model had learned, they attempted to correct it by removing the offending variables. The model found other proxies. The project was shut down in 2017. Amazon stated it never used the tool to make actual hiring decisions — a claim that is difficult to verify — and confirmed that it continued to use automated screening tools.
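
Why did removing the offending variables fail? A standard way to see it is a leakage audit: if the remaining features can predict the removed attribute, the model can still discriminate through them. Here is a minimal sketch with synthetic stand-ins for resume features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 10_000

gender = rng.integers(0, 2, n)
# Features that *look* neutral but correlate with gender, e.g. word-choice
# statistics, club memberships, college attended.
features = np.column_stack([
    gender + rng.normal(0, 0.8, n),   # strongly gender-correlated
    rng.normal(0, 1, n),              # genuinely neutral
])

# Leakage audit: can the "scrubbed" features reconstruct the attribute?
acc = cross_val_score(LogisticRegression(), features, gender, cv=5).mean()
print(f"gender recoverable from remaining features: {acc:.1%} accuracy")
# Anything well above 50% means the scrub failed: the model can still
# find the pattern the engineers tried to delete.
```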

The Amazon case is illustrative not because Amazon is uniquely reckless, but because it made visible a dynamic that runs through every AI hiring tool built on historical data: the model cannot distinguish between "what made someone successful" and "what made someone get hired." If the historical pipeline was biased — and almost every large employer's pipeline was — the model faithfully encodes that bias and applies it at scale, at speed, and without the social discomfort that causes some human interviewers to occasionally override their own prejudices.


Workday Class Action: Mobley v. Workday

Derek Mobley is a 40-year-old Black man with anxiety and depression who, over the course of an extended job search, applied to more than 80 positions at companies using Workday's Applicant Tracking System. He was rejected by all of them without a single interview. In 2023, Mobley filed a class action lawsuit in the Northern District of California alleging that Workday's AI screening system systematically discriminated against him based on race, age, and disability in violation of Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act.

The case raised a question that will define the next decade of employment law: who is legally liable when an AI system discriminates?

Workday's defense rested on a jurisdictional argument. Workday, it contended, is a software vendor — a service provider — not an employment agency. Title VII, the ADA, and the ADEA impose liability on employers and employment agencies; Workday argued it was neither. Under this theory, the company that built and sold the screening tool that rejected Mobley more than 80 times bore no legal responsibility for the outcomes that tool produced.

In March 2024, U.S. District Judge Rita Lin rejected Workday's motion to dismiss. The court found that Workday's role in the hiring process was substantive enough to support the employment agency theory, and that Mobley had plausibly alleged the tool caused discriminatory harm. The case is ongoing. Its eventual resolution — whether through settlement, trial verdict, or appeal — will establish the liability framework for the entire AI hiring vendor ecosystem.

For Workday, the stakes are significant. The platform is used by more than 10,000 organizations worldwide, including the majority of Fortune 500 companies. If vendors bear legal exposure for the discriminatory outputs of their tools, the financial incentive to build and deploy rigorous bias auditing is transformed overnight.


LinkedIn's Invisible Thumb on the Scale

LinkedIn is where the modern job search begins, which makes it the earliest point at which algorithmic filtering shapes who sees what opportunity. Carnegie Mellon researchers studying the platform's ad delivery system between 2015 and 2023 consistently found that high-paying STEM job advertisements were shown to significantly fewer women than men — not because employers specified a gender preference, but because LinkedIn's optimization algorithms, trained on engagement data, learned that men clicked on certain ads at higher rates and routed those ads accordingly.

The effect is a feedback loop. Fewer women see the ad. Fewer women apply. The job class remains male-dominated. The engagement data reinforces the pattern. The algorithm serves the ad to fewer women next cycle.
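
The loop is simple enough to simulate in a few lines. The allocation rule below, impressions distributed in proportion to each group's observed clicks, is a deliberately simplified stand-in for a real ad-delivery objective, but it captures the dynamic:

```python
# Engagement-optimized ad delivery over repeated campaign cycles. A small
# initial click-rate gap, amplified by the allocation rule, steadily
# shrinks women's share of impressions. Nobody specified a gender anywhere.
impressions = {"men": 0.5, "women": 0.5}     # start with an even split
click_rate  = {"men": 0.06, "women": 0.05}   # modest engagement gap

for cycle in range(1, 11):
    clicks = {g: impressions[g] * click_rate[g] for g in impressions}
    total = sum(clicks.values())
    impressions = {g: clicks[g] / total for g in clicks}  # optimize on clicks
    print(f"cycle {cycle:2d}: women's share of impressions = "
          f"{impressions['women']:.1%}")
# By cycle 10, women's share has fallen from 50% to roughly 14%.
```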

LinkedIn's "People Also Viewed" sidebar — a feature recommending other profiles to viewers who look at your profile — has been documented to perpetuate occupational segregation. Viewing a woman's profile in a male-dominated field is more likely to produce recommendations for profiles in adjacent or lower-status roles. Viewing a man's profile in the same field produces recommendations for other similar high-status male profiles.

The platform's job recommendation engine operates on the same logic. The jobs surfaced in your LinkedIn feed are not a neutral list of matching positions — they are filtered by what the algorithm predicts you will engage with, based on the behavioral patterns of people who share your inferred demographic profile. Jobs that break from that pattern for you — roles in industries or seniority tiers where your demographic group is underrepresented — appear less frequently or not at all. You don't see the jobs you weren't expected to want, and you never know they existed.


Pymetrics, Harver, Vervoe: Games That Screen You Out

The pivot from personality questionnaires to behavioral game assessments was marketed as an improvement. Personality tests like the Myers-Briggs or Big Five questionnaires have well-documented problems: candidates can game them, and the evidence for their predictive power is contested. Game assessments, by contrast, measure actual behavioral responses in real time — how quickly you shift attention, how you handle risk in a balloon-popping exercise, how you process delayed rewards in resource allocation tasks.

Pymetrics, since acquired by Harver, built a library of twelve games that it claims can measure 90 cognitive and emotional attributes. Vervoe uses AI-scored job simulations. The pitch to employers is an objective, bias-free assessment of underlying capability rather than of credentials.

The flaw is in the calibration. These games are not scored against a universal standard. They are scored against a model of "top performers" at each specific employer. The employer provides historical data on who performed well in the role, Pymetrics or Harver fits a model to that data, and the game scores are calibrated to predict resemblance to those historical performers.

If a company's top performers in a given role have, for twenty years, been predominantly white men from selective universities — because that's who got hired and who got promoted — then the game calibration model learns the behavioral signature of that demographic and flags departures from it as "low fit." The games may have no face validity as a discrimination mechanism. There is no checkbox that says "penalize non-white applicants." But the output of a model trained on a biased historical workforce is a biased prediction, expressed in the laundered language of cognitive science.
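
One plausible shape for such a calibration (purely a sketch, not any vendor's scoring code) is a resemblance score: the cosine similarity between a candidate's game-derived features and the centroid of historical top performers. Departure from the inherited profile reads as low fit by construction:

```python
import numpy as np

def fit_score(candidate: np.ndarray, top_performers: np.ndarray) -> float:
    """Cosine similarity between a candidate's game-derived feature vector
    and the centroid of the employer's historical top performers."""
    centroid = top_performers.mean(axis=0)
    return float(candidate @ centroid /
                 (np.linalg.norm(candidate) * np.linalg.norm(centroid)))

# A homogeneous past workforce produces a tight behavioral centroid.
rng = np.random.default_rng(2)
history = rng.normal(loc=[1.0, 0.2, 0.8], scale=0.2, size=(200, 3))

insider  = np.array([1.0, 0.2, 0.8])   # matches the historical profile
outsider = np.array([0.2, 1.0, 0.8])   # equally capable, different signature

print(f"insider fit:  {fit_score(insider, history):.2f}")   # ~1.00
print(f"outsider fit: {fit_score(outsider, history):.2f}")  # ~0.62
# Departure from the inherited signature reads as "low fit", regardless of
# whether that signature has anything to do with job performance.
```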

Pymetrics publishes bias auditing results and has made genuine efforts to mitigate demographic gaps in its output. The problem is that bias auditing compares predicted outcomes across demographic groups — it cannot measure whether the underlying calibration model itself encodes illegitimate criteria. If the historical "top performers" were selected through discriminatory processes, no statistical parity intervention can fully correct the downstream model.


The Bias Inheritance Loop

What connects HireVue's facial analysis, Amazon's resume screener, Workday's ATS, and Pymetrics's game calibration is a structural property of supervised machine learning that has no technical fix: the model learns from the data it is given.

If the training data reflects discriminatory historical patterns — and nearly all large-employer historical hiring data does — the model learns to replicate those patterns. It has no concept of justice, no ability to distinguish between legitimate predictors of job performance and illegal proxies for protected characteristics. It finds correlations and optimizes for them. When those correlations run between demographic characteristics and historical hiring outcomes, the model becomes a discrimination engine of considerable precision and scale.

The COMPAS recidivism scoring system, used by courts in multiple states to predict the likelihood of reoffending and inform sentencing and bail decisions, is the most thoroughly documented case study. ProPublica's 2016 analysis of COMPAS scores in Broward County, Florida found that Black defendants were flagged as high-risk at nearly twice the rate of white defendants who went on to commit no further crimes — a false positive rate of roughly 44.9% for Black defendants versus 23.5% for white defendants. Northpointe, the company behind COMPAS, responded that the model was "accurate" in the sense that its overall predictive statistics met certain benchmarks. Both claims were technically defensible and entirely compatible with systematic racial disparity.
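
The disparity ProPublica measured is a per-group false positive rate: among people who did not reoffend, the fraction the model flagged as high-risk. The counts below are illustrative, chosen only to reproduce the reported rates:

```python
# False positive rate by group: of the defendants who did NOT reoffend,
# what fraction did the model flag as high-risk? Counts are illustrative,
# chosen to reproduce ProPublica's reported Broward County rates.
groups = {
    #        (flagged high-risk but did not reoffend, total who did not reoffend)
    "Black": (990, 2205),
    "white": (282, 1200),
}

for group, (false_positives, non_reoffenders) in groups.items():
    print(f"{group}: false positive rate = {false_positives / non_reoffenders:.1%}")
# Black: 44.9%, white: 23.5%. A model can hit its overall accuracy targets
# while distributing its errors this unevenly: overall accuracy and
# error-rate parity are different metrics, and they can pull apart.
```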

The hiring context reproduces this architecture. A model trained on historical hiring data at a company with a discriminatory past will generate accurate predictions of who that company's historical process would have hired — while systematically disadvantaging candidates that process would have excluded. The model is not malfunctioning. It is doing exactly what it was built to do. What it was built to do is the problem.


Social Media Surveillance as Pre-Screening

Before some candidates submit a resume, they have already been scored.

Fama Technologies offers employers a pre-employment social media screening service that scrapes public social media content — posts, comments, likes, network connections — and runs it through behavioral models designed to flag "workplace violence indicators," "illegal activity," and what the company describes as "professionalism" signals. The platform purports to exclude legally protected characteristics from its analysis, but social media content is among the richest demographic signals available. The language you use, the communities you engage with, the topics you post about, the media you share — all of these correlate strongly with race, religion, national origin, disability status, and political affiliation.

No federal law requires employers to disclose that social media screening is occurring. EEOC guidance issued in 2023 confirms that AI tools used in hiring are subject to existing anti-discrimination statutes — Title VII, the ADA, the ADEA — regardless of the vendor's characterization of the tool. But EEOC enforcement is complaint-driven. A candidate who never knows their social media was scored cannot file a complaint about it. A candidate who receives a form rejection email after a social media screen has no mechanism to discover that screening occurred.

The asymmetry is fundamental. Employers have complete visibility into the screening process. Candidates have essentially none. The data flows in one direction, the decisions flow in one direction, and the legal remedies available to affected individuals require them to first perceive a harm they were structurally prevented from observing.


The Regulatory Patchwork

The United States does not have a comprehensive federal law governing AI in hiring. What it has is a set of overlapping, partial, enforcement-dependent mechanisms that collectively reach only a fraction of the discriminatory activity occurring at scale.

The EEOC's 2023 Technical Assistance document on artificial intelligence and Title VII affirmed that existing civil rights law applies to AI-mediated employment decisions. Employers cannot evade liability for discriminatory outcomes by attributing them to an algorithm. Vendors cannot evade liability by characterizing themselves as neutral software providers. The document provided guidance but no new enforcement mechanism, no affirmative audit requirement, and no mandatory disclosure framework.

Illinois has been the most aggressive domestic legislator. Its Artificial Intelligence Video Interview Act, described above, took effect in 2020 and also limits distribution of recorded interviews. It remains the only state law of its kind.

New York City's Local Law 144, effective July 2023, is narrower in scope but notable for its mechanism: any employer using an automated employment decision tool in hiring or promotion decisions affecting NYC residents must publish annual third-party bias audits showing the tool's impact ratio across race, gender, and ethnicity categories. Audit results must be posted publicly. The law created the first mandatory transparency requirement for AI hiring tools in an American jurisdiction. Enforcement has been uneven — the city's Department of Consumer and Worker Protection issued its first notices of violation only in late 2023 — but the audit publication requirement has produced the first systematic public dataset on demographic disparities in AI hiring tool outputs.
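
The audit metric itself is straightforward: each group's selection rate divided by the selection rate of the most-selected group. A sketch with invented numbers:

```python
# Local Law 144's impact ratio: each group's selection rate divided by the
# selection rate of the most-selected group. All numbers invented.
selected = {"group_a": 180,  "group_b": 95,  "group_c": 60}
applied  = {"group_a": 1000, "group_b": 900, "group_c": 800}

rates = {g: selected[g] / applied[g] for g in selected}
best = max(rates.values())

for g, rate in rates.items():
    ratio = rate / best
    note = "  <- below the EEOC's informal four-fifths benchmark" if ratio < 0.8 else ""
    print(f"{g}: selection rate {rate:.1%}, impact ratio {ratio:.2f}{note}")
```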

Colorado, Maryland, and Washington state have pending or recently enacted legislation addressing algorithmic discrimination in employment. The legislative activity is real, but the patchwork is the problem: a candidate in Georgia applying to a company based in Texas through a platform operated in California has no guaranteed right to disclosure, audit access, or human review. The EU AI Act, which classifies AI systems used in recruitment as "high-risk" applications subject to mandatory conformity assessments, human oversight requirements, and transparency obligations, has no American equivalent. European workers applying through the same global ATS platforms that American employers use have substantially greater legal protection.


What Candidates Can Actually Do

The asymmetry of information between employers and candidates in AI-mediated hiring is not perfectly correctable by individual action. But it is not total.

ATS optimization is the most widely applicable tool. Applicant Tracking Systems parse resumes for keyword matches against job descriptions. Resumes that do not contain the exact phrases used in the posting — not synonyms, not conceptual equivalents, but the specific strings the posting used — are frequently downranked or filtered. Mirroring the language of the job description precisely, in the skills section and throughout the resume body, increases the probability of clearing the first automated threshold. This is not gaming the system; it is understanding how the system parses documents.
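
The matching is often as literal as this sketch suggests. The phrases are invented, but the failure mode is real: describing the work without echoing the posting's exact string can read as a miss.

```python
def keyword_coverage(resume: str, posting_phrases: list[str]) -> None:
    """Check which exact phrases from the job posting appear in the resume.
    Deliberately crude: many ATS keyword filters do little more than this."""
    text = resume.lower()
    for phrase in posting_phrases:
        status = "FOUND  " if phrase.lower() in text else "MISSING"
        print(f"{status} {phrase!r}")

posting_phrases = ["account management", "pipeline forecasting", "Salesforce"]
resume = "Managed key accounts; owned quarterly pipeline forecasting in Salesforce."

keyword_coverage(resume, posting_phrases)
# 'account management' comes back MISSING even though the resume describes
# exactly that work: "Managed key accounts" is not the string being matched.
```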

Under the European Union's General Data Protection Regulation, Article 22 grants individuals the right not to be subject to a decision based solely on automated processing when that decision produces legal or similarly significant effects. Candidates applying to EU-based employers or multinationals operating under GDPR have a legal right to request human review of automated rejection decisions. No equivalent federal right exists in the United States, but this gap is increasingly subject to legislative attention.

In New York City, candidates can invoke Local Law 144's audit publication requirements — published audit reports are publicly accessible and show demographic impact ratios for tools used by major employers in the city. Knowing that a specific tool shows a statistically significant disparity for a protected group is information that can support an EEOC complaint.

EEOC Commissioner Charges represent an underutilized enforcement mechanism. The Commission can open investigations and bring enforcement actions on its own initiative without an individual complaint — a mechanism specifically designed to address systemic discrimination that individual complainants may not perceive or be positioned to challenge. The 2023 EEOC guidance on AI tools signals that the Commission is considering this mechanism for algorithmic discrimination cases.

Some candidates have begun using AI itself as a countermeasure — running resumes through systems designed to strip demographic signals before submission. Tools that remove graduation years (a proxy for age), standardize school names to remove information about historically Black colleges and universities, and flag language patterns statistically associated with protected characteristics are available commercially and through open-source projects. The practice is a symptom of a broken system, not a solution to one. But for individual candidates navigating a discriminatory pipeline today, it is a rational response to documented patterns.
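
A minimal sketch of what such a scrubbing pass might look like, with two regex rules standing in for the much richer rule sets real tools use:

```python
import re

def scrub_resume(text: str) -> str:
    """Strip two common demographic proxies before submission.
    Illustrative only; real scrubbing tools use far richer rule sets."""
    # Graduation years proxy for age.
    text = re.sub(r"\b(?:Class of |graduated )?(19|20)\d{2}\b",
                  "[year removed]", text)
    # Street addresses encode neighborhood demographics; keep city only.
    text = re.sub(r"\b\d{1,5}\s+\w+(?:\s\w+)*\s(?:St|Ave|Rd|Blvd|Dr)\.?",
                  "[address removed]", text)
    return text

print(scrub_resume("B.A., graduated 2009. 42 Maple St, Detroit."))
# -> B.A., [year removed]. [address removed], Detroit.
```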


TIAMAT Privacy Proxy as a Solution

The core mechanism of AI hiring discrimination is data: demographic signals, behavioral traces, and inferred characteristics that flow from candidates into AI systems without the candidate's knowledge or meaningful control. Disrupting that data flow is the most direct point of intervention.

AI hiring tools profile candidates not only through submitted materials but through behavioral data accumulated across platforms — the browsing patterns that inform ad targeting, the social graph data that shapes LinkedIn recommendations, the public post history that feeds social media screening tools. This data is aggregated, correlated, and used to build profiles that inform automated decisions before a candidate submits anything.

TIAMAT's Privacy Proxy addresses this at the infrastructure level. Rather than asking candidates to individually identify and suppress demographic signals — a task that is technically complex and practically unrealistic for most job seekers — a privacy proxy layer intercepts data before it reaches AI scoring systems and strips PII, demographic indicators, and behavioral markers that are both legally irrelevant to employment decisions and empirically linked to discriminatory outcomes. The proxy does not falsify data. It removes what should not be present in a lawful hiring process.

For enterprise HR teams, a similar intervention applies upstream. Before candidate data from resume submissions, video interviews, or behavioral assessments is fed into AI scoring models, a scrubbing layer can remove or obscure signals that the model has no lawful basis to use — names that are statistically predictive of race or national origin, address information that encodes neighborhood demographics, school affiliations that carry racial composition signals, graduation years that proxy for age. The model still receives information about qualifications, experience, and demonstrated capabilities. It does not receive the demographic scaffolding that allows it to replicate historical discrimination.
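
As a generic illustration of the idea, and emphatically not TIAMAT's actual implementation, an upstream scrubbing layer over structured candidate records can be as simple as a field denylist:

```python
# A field-level scrub over structured candidate records. A generic sketch
# of the idea only, not TIAMAT's actual implementation.
DROP_FIELDS   = {"name", "address", "photo_url", "date_of_birth"}
REDACT_FIELDS = {"graduation_year", "school_name"}  # age / demographic proxies

def scrub_record(candidate: dict) -> dict:
    clean = {}
    for key, value in candidate.items():
        if key in DROP_FIELDS:
            continue                      # never reaches the scoring model
        elif key in REDACT_FIELDS:
            clean[key] = "[redacted]"     # keep the slot, remove the signal
        else:
            clean[key] = value            # qualifications pass through intact
    return clean

record = {"name": "Darnell Washington", "graduation_year": 2014,
          "school_name": "Howard University", "years_experience": 3,
          "skills": ["account management", "Salesforce"]}
print(scrub_record(record))
# {'graduation_year': '[redacted]', 'school_name': '[redacted]',
#  'years_experience': 3, 'skills': ['account management', 'Salesforce']}
```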

This is not a complete solution. Bias in training data operates through correlations that can survive the removal of obvious proxies — a sufficiently large model trained on sufficiently granular data can reconstruct demographic inferences from signals that appear neutral. The technical challenge of building truly demographic-blind AI systems is unsolved. But privacy proxy architecture reduces the data surface available for discriminatory inference and creates a documented compliance record that supports both internal audit and regulatory defense.

The deeper problem is structural: AI hiring tools are built by vendors with commercial incentives to demonstrate predictive accuracy, sold to employers with incentives to reduce hiring costs, and deployed against candidates who have no visibility into the process, no meaningful ability to contest its outcomes, and — in most American jurisdictions — no legal right to human review. Regulatory patchwork, individual technical countermeasures, and privacy proxy interventions all operate at the margins of a system designed with those power asymmetries built in.


The Scale Problem

What makes algorithmic hiring discrimination categorically different from its human-mediated predecessor is scale. A biased human recruiter at a single company affects the candidates that recruiter reviews. A biased algorithm deployed by a vendor serving 10,000 employers affects every candidate who interacts with any of those employers' hiring pipelines, simultaneously, at the speed of a network request.

Workday's ATS is used by more than 10,000 organizations. HireVue has processed more than 35 million video interviews. LinkedIn reaches nearly a billion registered users. When discrimination is encoded in a model and that model is deployed at platform scale, the aggregate harm is of a different order than anything that preceded it — and the legal and technical frameworks for addressing it were designed for a world in which discriminatory decisions were made one at a time by identifiable human agents.

Derek Mobley applied to more than 80 positions and was rejected by every one without an interview. Emily Walsh and Darnell Washington submitted the same application and received opposite outcomes, one of them decided within three seconds. These are not edge cases or system failures. They are the intended operation of tools built to make fast, consistent decisions at scale — tools whose consistency faithfully replicates the biases of the data they were trained on, applied to every candidate in every pipeline, millions of times per year, almost entirely without scrutiny.

The algorithm won't hire you. And until the audit requirements, the liability frameworks, and the data rights catch up with the deployment reality, you may never know why.


Investigative reporting on AI hiring discrimination draws on public court filings in Mobley v. Workday (N.D. Cal., Case No. 3:23-cv-00770), EEOC Technical Assistance documents on artificial intelligence and Title VII (May 2023), FTC warning correspondence to HireVue (2021), ProPublica's COMPAS analysis (2016), Carnegie Mellon research on algorithmic ad delivery bias (2015–2023), Illinois HB 2557 (AI Video Interview Act), and NYC Local Law 144 (2023).
