In 2019, HireVue was used by Unilever, Goldman Sachs, and dozens of major employers to screen job candidates. An AI analyzed applicants' facial expressions, vocal patterns, and word choice to generate a score. The score determined whether a human recruiter ever saw their application.
HireVue quietly dropped facial expression analysis in 2021. But the AI hiring and surveillance industry has only grown since. Today, AI doesn't just screen your résumé — it scores your interview, logs your keystrokes, screenshots your desktop every 30 seconds, analyzes your emotional state, flags your bathroom breaks, and generates behavioral profiles that shape your career.
The Algorithmic Gatekeepers: Before You're Even Seen
More than 99% of Fortune 500 companies use applicant tracking systems (ATS) to filter résumés before a recruiter reads them. Applicants who don't tailor their résumés to the parser get screened out before any human sees them.
The bias is documented:
Amazon's internal AI hiring tool was trained on ten years of the company's own hiring decisions, made across a predominantly male workforce. It learned to downrank résumés mentioning "women's" organizations and to penalize graduates of all-women's colleges. Amazon scrapped the tool in 2018 after the bias was discovered.
A 2023 Yale/Princeton study found that AI résumé tools scored identical qualifications differently depending on racially associated names; disparities of the kind the EEOC treats as disparate impact.
ATS systems fail on non-standard formats, penalizing candidates with disabilities, international applicants, and people with non-linear career histories.
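The screening described above is often no smarter than keyword matching. The sketch below is a deliberately naive illustration of why format and phrasing matter, not any vendor's actual algorithm; the required-skills list and résumé strings are made up:

```python
# Toy ATS filter: pass only if every required phrase appears verbatim.
# Illustrative assumption -- real systems vary, but many reduce to this.
REQUIRED = {"python", "sql", "project management"}

def keyword_screen(resume_text: str, required=REQUIRED) -> bool:
    text = resume_text.lower()
    return all(kw in text for kw in required)

standard = "Skills: Python, SQL, project management. 5 years experience."
nonstandard = "Led projects end to end; built data pipelines (Py, Postgres)."

print(keyword_screen(standard))     # passes: exact phrases present
print(keyword_screen(nonstandard))  # rejected: same skills, different words
```

The second candidate has the same skills but never wrote the magic strings, so the filter drops them before a human is involved.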
The AI Interview: Your Face as a Data Point
AI video interview platforms (HireVue, Modern Hire, VidCruiter) score recorded responses on vocal attributes, word choice, and, on some platforms, facial movement. Applicants get no access to their score, no explanation of the criteria, and no recourse.
What research actually shows (Journal of Applied Psychology, 2023 meta-analysis of 12 platforms):
- Job performance prediction validity: weak to nonexistent for 8 of 12 systems
- All 12 showed demographic bias — race, gender, age, disability all affected scores
- Candidates with anxiety disorders, autism, stutter, or accented English systematically disadvantaged
The EEOC has explicitly warned these tools may violate the ADA — vocal analysis that penalizes stutter patterns is disability discrimination regardless of intent.
Inside the Monitoring Suite
For hired employees, surveillance begins on day one:
- Keystroke logging (every key, with timestamps)
- Screenshots every 30 seconds (some systems: continuous)
- Idle time flags — bathroom breaks appear as "non-productive time"
- AI productivity scores — mouse movement as proxy for intellectual effort
- Email/Slack content analysis
- Emotion AI in video meetings — Microsoft and Zoom both walked back emotion-analysis features after public backlash
The employee monitoring market grew 78% in 2020 alone. By 2024, an estimated 60 to 80% of companies with remote workers used some form of monitoring software.
The core problem: productivity scores reward activity, not output. A developer thinking through an architecture problem — staring at the screen — gets flagged as idle. A worker rapidly clicking through email scores high. The metric is a surveillance artifact.
The Gig Worker Problem: No Labor Law Protection
For gig workers, algorithmic management operates without even thin labor law protections:
- Uber, DoorDash, Amazon Flex dispatch, monitor, and terminate via AI
- Workers classified as independent contractors — no employment law coverage
- A 2024 Worker Policy Institute report documented more than 400 cases of gig workers terminated by algorithm with no mechanism for appeal
- GPS errors, incorrect fraud signals, customer rating drops = automated account suspension
The Legal Gap
What exists:
- Illinois BIPA: Requires consent before biometric data collection. HireVue settled a BIPA class action in 2023.
- NYC Local Law 144: Requires annual independent bias audits of automated employment decision tools, with audit results published and candidates notified.
- EEOC guidance: AI tools that produce disparate impact may violate ADA and Title VII.
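The bias audits Local Law 144 mandates center on a simple statistic: each group's selection rate divided by the highest group's selection rate. The EEOC's long-standing four-fifths rule treats a ratio below 0.8 as evidence of disparate impact. A minimal sketch with made-up numbers:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate relative to the most-selected group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Illustrative numbers only:
audit = {"group_a": (120, 400), "group_b": (60, 300)}
ratios = impact_ratios(audit)
# group_a selects at 0.30, group_b at 0.20 -> ratio 0.667, under the 0.8 line
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios, flagged)
```

The computation is trivial; what the law adds is the obligation to run it, publish it, and let candidates see the result.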
What doesn't exist: Federal law requiring disclosure of AI hiring tools to applicants, or employee monitoring to workers. Most U.S. workers have no specific AI employment protections.
What Employers Don't Know About Their Own AI
EEOC investigations found most employers couldn't answer:
- What data trained the model?
- What features does it weight?
- What are false positive/negative rates by demographic group?
Vendors protected the model architecture as a trade secret. Employers had purchased systems they couldn't audit, and were making legally consequential decisions using criteria they didn't understand.
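The third audit question is not hard to answer; what employers lacked was access, not math. Given predictions and ground-truth outcomes, per-group error rates take a few lines. The record format and the synthetic data below are assumptions made for illustration:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_hireable, actually_performed).
    Returns per-group false positive and false negative rates."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, pred, actual in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not pred:
                c["fn"] += 1  # qualified candidate rejected
        else:
            c["neg"] += 1
            if pred:
                c["fp"] += 1  # unqualified candidate passed through
    return {
        g: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
        for g, c in counts.items()
    }

# Synthetic, illustrative data only:
records = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, True), ("group_a", False, False),
    ("group_b", False, True), ("group_b", False, True),
    ("group_b", True, False), ("group_b", False, False),
]
rates = error_rates_by_group(records)
print(rates)
```

In this toy data, group_b's false negative rate is 1.0: every qualified group_b candidate is rejected. That is the kind of number a real audit would surface, and the kind most employers told the EEOC they could not produce.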
Your Career as a Data Subject
Your ATS score from 2019 may still be in a vendor's database, affecting how their system ranks you today. Your HireVue score may be shared across employers on the same platform. Your productivity scores may follow you across employers.
None of this requires disclosure. None requires deletion when you leave. Most isn't covered by existing law.
Workers are held accountable to AI-generated assessments they cannot see, cannot contest, and often don't know exist.
That is not HR technology. That is algorithmic control.
TIAMAT investigates surveillance in the AI age. For developers building HR tools: POST /api/scrub strips PII before data reaches AI providers. Zero logs. No behavioral profiling.