Your Employer's AI Knows When You're About to Quit

In 2022, IBM filed a patent for a system that predicts employee attrition 12 months in advance. The system ingests calendar data, email metadata, badge swipe records, project management tool activity, and Slack/Teams message frequency patterns. It assigns each employee a "flight risk score" and surfaces this to managers and HR.

IBM is not an outlier; it simply built the system early enough to patent it. The underlying capability — predicting employee behavior from continuous behavioral surveillance — is now standard practice in large enterprises.

This article is about what your employer's AI knows about you, what data it's built from, and what legal protections (if any) govern it.

The Data Sources

Workplace surveillance operates across multiple data streams, each individually plausible as a business need, collectively constituting continuous behavioral monitoring.

Email and Messaging Metadata

Your employer almost certainly has access to metadata from your work email and messaging platforms. Metadata means: who you message, how often, at what times, at what length, with what response latency.

Content-level access to email and Slack is also common. Microsoft Purview (formerly Compliance Center) is an enterprise tool that lets employers search and review employee communications for compliance, legal hold, and HR purposes. The keyword filtering that identifies policy violations also enables targeted surveillance.

Microsoft 365 Viva Insights provides productivity analytics that aggregate collaboration patterns: how much time employees spend in meetings, their "focus time," after-hours message volume, and response time trends. The data is purportedly anonymized at the individual level, but managers get aggregate views and, depending on configuration, potentially individual-level views as well.
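To make concrete what "metadata alone" yields, here is a minimal sketch that derives two of the signals described above, after-hours message share and median response latency, from nothing but timestamps and sender/recipient fields. The message schema and working-hours cutoffs are illustrative assumptions, not Viva Insights' actual implementation.

```python
from datetime import datetime
from statistics import median

# Hypothetical message metadata: no content, just who messaged whom and when.
messages = [
    {"from": "alice", "to": "bob",   "sent": datetime(2024, 3, 4, 22, 15)},
    {"from": "bob",   "to": "alice", "sent": datetime(2024, 3, 5, 8, 2)},
    {"from": "alice", "to": "carol", "sent": datetime(2024, 3, 5, 9, 30)},
    {"from": "carol", "to": "alice", "sent": datetime(2024, 3, 5, 9, 41)},
]

def after_hours_share(msgs, sender, start_hour=9, end_hour=18):
    """Fraction of a sender's messages sent outside nominal working hours."""
    sent = [m for m in msgs if m["from"] == sender]
    late = [m for m in sent if not start_hour <= m["sent"].hour < end_hour]
    return len(late) / len(sent) if sent else 0.0

def median_response_minutes(msgs, responder):
    """Median time for `responder` to reply to the latest message from a peer."""
    latencies = []
    ordered = sorted(msgs, key=lambda m: m["sent"])
    for i, m in enumerate(ordered):
        if m["from"] != responder:
            continue
        # Most recent earlier message sent to the responder by this peer.
        for prev in reversed(ordered[:i]):
            if prev["from"] == m["to"] and prev["to"] == responder:
                latencies.append((m["sent"] - prev["sent"]).total_seconds() / 60)
                break
    return median(latencies) if latencies else None

print(after_hours_share(messages, "alice"))        # 0.5: half her messages are after hours
print(median_response_minutes(messages, "carol"))  # 11.0: minutes to respond to alice
```

Nothing in that computation needs message content, which is why "metadata only" is a weaker reassurance than it sounds.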

Keystroke and Screen Monitoring

Employee monitoring software — Teramind, InterGuard, Hubstaff, ActivTrak — logs:

  • Keystrokes per minute
  • Active application time vs. idle time
  • Screenshots at configurable intervals (some products: every 30 seconds)
  • Websites visited and time on each
  • Clipboard content (what you copy)
  • File operations (what you open, copy, move, delete)
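The "active vs. idle" item in that list is usually just gap analysis over input-event timestamps. A minimal sketch of the idea, with a made-up threshold rather than any vendor's actual algorithm:

```python
from datetime import datetime, timedelta

# Hypothetical input-event timestamps (keystrokes, mouse moves) captured by a monitoring agent.
events = [
    datetime(2024, 3, 5, 9, 0, 0),
    datetime(2024, 3, 5, 9, 0, 40),
    datetime(2024, 3, 5, 9, 1, 10),
    datetime(2024, 3, 5, 9, 12, 0),   # 10+ minute gap: reading? a call? coffee?
    datetime(2024, 3, 5, 9, 12, 30),
]

IDLE_THRESHOLD = timedelta(minutes=5)  # typical configurable cutoff

def active_idle_split(timestamps, idle_threshold=IDLE_THRESHOLD):
    """Classify each gap between consecutive events as active or idle time."""
    active = idle = timedelta()
    for prev, curr in zip(timestamps, timestamps[1:]):
        gap = curr - prev
        if gap <= idle_threshold:
            active += gap
        else:
            idle += gap
    return active, idle

active, idle = active_idle_split(sorted(events))
print(f"active: {active}, idle: {idle}")
# The agent cannot distinguish idle time from untracked work such as reading or a phone call.
```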

This category of monitoring software grew sharply during COVID-19 remote work, and it has not retreated to pre-pandemic levels: companies that normalized remote monitoring during the pandemic have largely kept it.

Hubstaff's promotional material advertises "automatic time tracking" and screenshots to "verify remote work." For employees, this means their employer has timestamped screenshots of their screen throughout the workday.

Badge and Physical Access Data

For employees with physical office access, badge swipe data logs entry and exit times for every door touched. In large office buildings with multiple badge readers per floor, this can log movement patterns throughout the building — when you use the bathroom, how long you spend at your desk area, who you physically meet with.

This data is typically retained for security purposes. It's also behavioral data.
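As an illustration of how "security" logs become behavioral data, here is a sketch that turns raw swipe events into a daily first-in, last-out, and per-door profile. The schema and data are hypothetical, not any access-control vendor's format.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical badge swipe log: (employee, door, timestamp).
swipes = [
    ("e123", "lobby",     datetime(2024, 3, 5, 8, 55)),
    ("e123", "floor4",    datetime(2024, 3, 5, 8, 58)),
    ("e123", "floor4",    datetime(2024, 3, 5, 12, 5)),
    ("e123", "cafeteria", datetime(2024, 3, 5, 12, 7)),
    ("e123", "lobby",     datetime(2024, 3, 5, 17, 40)),
]

def daily_profile(events, employee):
    """First swipe, last swipe, and per-door counts for one employee, per day."""
    by_day = defaultdict(list)
    for emp, door, ts in events:
        if emp == employee:
            by_day[ts.date()].append((ts, door))
    profile = {}
    for day, entries in by_day.items():
        entries.sort()
        profile[day] = {
            "first_in": entries[0][0].time(),
            "last_seen": entries[-1][0].time(),
            "doors": Counter(door for _, door in entries),
        }
    return profile

print(daily_profile(swipes, "e123"))
```

Aggregated over months, the same profile reveals arrival-time drift, lunch habits, and which floors you visit, none of which was the stated purpose of collection.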

Biometric Time and Attendance

Fingerprint and facial recognition time-clock systems are common in manufacturing, retail, and healthcare. They solve buddy-punching (one employee clocking in for another). They also create a biometric record of every work entry and exit.

AI Behavioral Analytics on Calls

Cogito is an AI system deployed by call centers and customer service operations. It analyzes voice tone in real-time during customer calls — not the content, but acoustic signals: speech rate, energy, vocal variation, pause patterns.

For employees, Cogito surfaces real-time coaching cues during calls: "Slow down," "Show empathy," "Increase energy." It logs aggregate behavioral data about each agent's call patterns over time.

Symon (acquired by Verint) does similar real-time analysis. These systems build behavioral profiles of employees from voice biomarkers collected across thousands of customer interactions.
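The underlying acoustic features are not exotic. Here is a sketch, using numpy, of two of them, pause ratio and vocal energy variation, computed over a raw audio signal. The frame size and silence threshold are illustrative assumptions, not Cogito's or Verint's actual pipeline.

```python
import numpy as np

SAMPLE_RATE = 16_000          # samples per second
FRAME = SAMPLE_RATE // 100    # 10 ms frames

def frame_energy(signal, frame=FRAME):
    """Root-mean-square energy per 10 ms frame."""
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def acoustic_features(signal, silence_factor=0.1):
    """Pause ratio and energy variation for one utterance."""
    energy = frame_energy(np.asarray(signal, dtype=float))
    threshold = silence_factor * energy.max()
    pauses = energy < threshold
    return {
        "pause_ratio": float(pauses.mean()),               # fraction of frames that are silence
        "energy_cv": float(energy.std() / energy.mean()),  # vocal energy variation
    }

# Synthetic example: one second of "speech" (noise) followed by half a second of silence.
rng = np.random.default_rng(0)
speech = rng.normal(0, 0.3, SAMPLE_RATE)
silence = np.zeros(SAMPLE_RATE // 2)
print(acoustic_features(np.concatenate([speech, silence])))
```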

AI Hiring and Assessment Tools

The surveillance begins before employment.

HireVue is a video interview platform that, in previous versions, used facial expression analysis and eye contact tracking as part of interview scoring. After significant criticism from researchers and regulators, HireVue discontinued facial expression analysis in 2021. Their platform still uses AI analysis of linguistic patterns in video interviews.

Pymetrics uses neuroscience-based games to assess cognitive and emotional traits. Candidates play games that measure attention, memory, risk tolerance, and pattern recognition. The profiles built are used for job matching. If you've applied for jobs at Unilever, Accenture, or other large employers, you may have done Pymetrics games.

Workday Recruiting uses AI to rank candidates based on skills matching. The AI's training data affects which profiles it surfaces — and has been subject to bias claims.

Resume screening AI: at companies processing high application volumes, AI-based resume screening ranks candidates before any human review. What signals these systems use, and whether those signals proxy for protected characteristics, is often opaque.
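A toy version of such a screener, ranking resumes by textual similarity to the job description, shows how proxying happens. The candidates and job text are invented, and scikit-learn TF-IDF stands in for whatever a real vendor uses; this is illustration, not any product's algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "backend engineer python distributed systems kubernetes"

resumes = {
    "cand_a": "python backend services, kubernetes, five years distributed systems",
    "cand_b": "java developer, spring, some python scripting",
    # Tokens like a school name or a graduation year also enter the vector space
    # and silently influence the ranking unless they are stripped beforehand.
    "cand_c": "python engineer, wellesley college 1998, distributed systems",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job_description, *resumes.values()])
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

for (name, _), score in sorted(zip(resumes.items(), scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```

Any token correlated with a protected characteristic becomes a ranking feature unless someone explicitly excludes it, and candidates have no way to see whether anyone did.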

After You Leave

Employment data doesn't end at termination.

Work Number (Equifax): The Work Number is a database operated by Equifax that stores employment history for approximately 54 million Americans. Employers contribute payroll data. Background check firms, creditors, landlords, and government agencies query it. Your employer reported your salary history, hire date, and termination date. This data persists indefinitely.

LinkedIn aggregates employment history publicly and builds professional behavioral profiles from activity on the platform — what jobs you look at, what content you engage with, what skills you endorse. For employers using LinkedIn Talent Insights, this data is used for workforce analytics.

The Legal Framework (Such As It Is)

In the US: Employer Rights Are Nearly Absolute

US law gives employers extremely broad rights to monitor employees on company systems and during work hours.

Electronic Communications Privacy Act (ECPA, 1986): ECPA prohibits unauthorized interception of electronic communications. The key exceptions are the "consent" exception and the "business use" exception. When an employee uses a company computer or network, they've typically consented to monitoring (via employment agreement and/or policies), and the business use exception covers monitoring for legitimate business purposes.

In practice, US employers can monitor virtually anything done on company equipment, networks, or during work hours. There is no federal law requiring employers to disclose what they monitor (though many states have notice requirements).

State law variations: Connecticut requires employers to give written notice of electronic monitoring. New York State requires written notice of electronic monitoring to new hires and a posted monitoring policy. Delaware requires advance written notice. California's privacy law is stronger but still permits significant workplace monitoring.

No federal requirement to tell employees which AI systems analyze them: There is no US federal law requiring employers to disclose that AI is used in employment decisions, what the AI analyzes, or what its output was. The EU's GDPR and AI Act impose much stronger requirements.

The EU Difference

GDPR applies to employee data in EU member states. Key implications:

  • Employee monitoring requires a lawful basis (typically legitimate interest, but requires proportionality assessment)
  • Employees must be informed about monitoring (what data, for what purpose)
  • Excessive or disproportionate monitoring is illegal
  • Automated decision-making with significant effects requires human review and employee rights
  • Behavioral monitoring for attrition prediction would require careful GDPR analysis in the EU

The EU's AI Act adds further requirements for "high-risk AI" in employment contexts, which includes recruitment, performance evaluation, task allocation, promotion, and termination decisions. These systems must meet transparency, accuracy, and human oversight requirements.

What Employers Actually Do With This Data

Productivity Scoring

Keylogger and screen monitoring platforms produce "productivity scores" — typically a percentage of time with detectable activity. These scores are sometimes surfaced directly to managers and used in performance conversations.

The problem: these scores measure detectable activity, not productive work. Reading, thinking, planning, and phone calls don't register on a keystroke monitor. An employee who types constantly and reads nothing scores high; an employee doing deep work scores low.
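A naive score of this kind reduces to a ratio: minutes with detectable activity over minutes scheduled. The sketch below uses hypothetical numbers, not any vendor's formula, to show why deep work scores poorly under it.

```python
def productivity_score(active_minutes, scheduled_minutes=480):
    """Share of the scheduled day with detectable input activity."""
    return round(100 * active_minutes / scheduled_minutes, 1)

# Employee A: constant typing in chat and email all day.
print(productivity_score(active_minutes=430))   # 89.6

# Employee B: 3 hours reading a design doc, 2 hours of calls, 2.5 hours typing.
# Only the typing registers as "activity" on a keystroke monitor.
print(productivity_score(active_minutes=150))   # 31.2
```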

Attrition Prediction

Beyond IBM's patent, multiple commercial vendors (Visier, Workday People Analytics, Oracle HCM) offer attrition risk modeling. These systems identify employees likely to leave in the next 6-12 months based on behavioral signals: declining meeting attendance, reduced message frequency, changes in badge swipe timing, fewer internal applications submitted.

If your attrition risk score is high, your manager may have been alerted. You may have received a retention conversation without knowing why. Or you may have been passed over for a project because the resource allocation algorithm deprioritized employees with high flight risk.
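Vendors do not publish their models, but the shape of the problem is ordinary supervised learning: behavioral deltas in, a probability of leaving out. A minimal sketch with invented features and data, using scikit-learn logistic regression as a stand-in for whatever these products actually run:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per employee, computed over the last quarter:
# [change in meeting attendance, change in message volume,
#  change in median badge-in time (minutes), internal job-board views]
X = np.array([
    [-0.30, -0.40,  35, 6],   # disengaging
    [ 0.05,  0.10,  -5, 0],   # stable
    [-0.25, -0.35,  20, 4],
    [ 0.00,  0.05,   0, 1],
    [-0.40, -0.50,  50, 8],
    [ 0.10,  0.00, -10, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = left within 12 months (historical labels)

model = LogisticRegression().fit(X, y)

# New employee whose behavior shifted this quarter.
candidate = np.array([[-0.20, -0.30, 25, 3]])
print(f"flight risk score: {model.predict_proba(candidate)[0, 1]:.2f}")
```

Every input here is data collected for some other stated purpose: calendaring, messaging, building security, internal mobility.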

Performance Management

UPS uses telematics data from delivery vehicles — speed, braking, idle time, door open/close frequency — as part of driver performance management. The route optimization system also tracks deviation from prescribed routes.
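The telematics metrics themselves are derived from simple streams of sensor samples. A sketch with made-up thresholds and data, not UPS's actual system, that counts harsh-braking events and idle time from speed samples:

```python
# Hypothetical telematics samples: (seconds_since_start, speed_kmh, engine_on)
samples = [
    (0,   42, True),
    (1,   18, True),   # speed drops 24 km/h in one second: harsh braking
    (2,    5, True),
    (3,    0, True),
    (63,   0, True),   # still stopped with the engine on: idling
    (123,  0, True),
    (124, 30, True),
]

HARSH_BRAKE_KMH_PER_S = 15
IDLE_SPEED_KMH = 1

def telematics_summary(points):
    """Count harsh-braking events and idle minutes from consecutive samples."""
    harsh_brakes = 0
    idle_seconds = 0
    for (t0, v0, on0), (t1, v1, on1) in zip(points, points[1:]):
        dt = t1 - t0
        if dt > 0 and (v0 - v1) / dt >= HARSH_BRAKE_KMH_PER_S:
            harsh_brakes += 1
        if on0 and v0 <= IDLE_SPEED_KMH and v1 <= IDLE_SPEED_KMH:
            idle_seconds += dt
    return {"harsh_brakes": harsh_brakes, "idle_minutes": idle_seconds / 60}

print(telematics_summary(samples))   # {'harsh_brakes': 1, 'idle_minutes': 2.0}
```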

Amazon is extensively documented: rate tracking for fulfillment center workers, algorithmic management of task assignments, automatic performance warnings generated by AI without manager involvement.

Network Analysis

Organizational network analysis (ONA) maps communication patterns to understand informal organizational structure. Which employees are bridges between teams? Who has disproportionate email volume? Whose departure would most disrupt information flow?

Microsoft Viva Insights includes ONA capabilities. This analysis uses email and calendar metadata. It's framed as organizational health measurement, but it surfaces individual behavioral data at the organizational level.
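ONA itself is standard graph analysis over communication metadata. A sketch using networkx and a hypothetical email graph that surfaces the "bridge" employees described above via betweenness centrality:

```python
import networkx as nx

# Hypothetical directed email graph: edge weight = messages sent last month.
emails = [
    ("ana", "ben", 40), ("ben", "ana", 35),
    ("ana", "cho", 22), ("cho", "ana", 18),
    ("ben", "cho", 5),
    ("cho", "dev", 30), ("dev", "cho", 28),   # cho bridges two otherwise separate teams
    ("dev", "eli", 25), ("eli", "dev", 20),
]

G = nx.DiGraph()
G.add_weighted_edges_from(emails)

# Betweenness centrality: whose removal would most disrupt information flow?
bridges = nx.betweenness_centrality(G)
for person, score in sorted(bridges.items(), key=lambda x: -x[1]):
    print(f"{person}: {score:.2f}")
```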

What Employees Can Do

Assume work devices and networks are monitored: Any activity on company equipment, company wifi, or company-issued phones should be treated as potentially visible. Use personal devices for personal activity.

Understand your employment agreement: Most employment agreements include a monitoring consent clause, often buried in the onboarding paperwork. You may have already consented to everything.

Know your state's laws: If you're in Connecticut, New York, Delaware, or California, your employer has disclosure obligations. Know what they disclosed.

Minimize voluntary data sharing: Don't install employer apps on personal devices beyond what's required. Disable location services for work apps where possible.

For EU employees: You have GDPR rights. You can request access to personal data your employer holds about you, including outputs of AI systems used in employment decisions. You have the right to human review of automated decisions.

For performance-sensitive work: If your performance is being scored algorithmically, understand what signals the system measures. Optimize for what's actually measured, not just what seems productive.

The AI Layer Compounds Everything

The surveillance systems above existed in simpler forms before AI. Time-and-attendance systems, email archiving, basic productivity monitoring — these are old.

AI transforms the outputs. Instead of raw data that requires human analysis, AI systems generate:

  • Inferred psychological profiles
  • Predicted future behavior
  • Risk scores and classifications
  • Recommendations for management action

The employee has no visibility into these inferences. The inputs (behavioral data) may be disclosed. The outputs (you have a high attrition risk, your productivity pattern suggests disengagement, your communication patterns indicate low psychological safety) typically are not.

The Broader Pattern

Employment surveillance is an instance of the broader pattern: AI systems are being deployed to make consequential decisions about people — hiring, promotion, discipline, termination — using behavioral data those people didn't knowingly provide for that purpose.

Badge swipe data collected for building security is analyzed for meeting attendance patterns. Email metadata collected for spam filtering is analyzed for attrition prediction. Keystroke data collected for DLP (data loss prevention) is converted into productivity scores.

The data is being repurposed beyond its collection context. The subject of the analysis has no visibility and often no recourse.

For AI tools specifically, the same repurposing occurs: conversation data collected to provide a service is analyzed for engagement patterns, retained to train future models, or shared with third parties under service provider exceptions. You don't know what inference was run on your prompt after the response was returned.

The answer in both contexts is the same: architectural minimization. Don't send data that doesn't need to be sent. Strip identifying information from what does get sent. Use services that don't retain what they see.
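Here is a minimal sketch of that minimization step: regex-based redaction of obvious identifiers before a prompt leaves your machine. The patterns are illustrative only and far from complete coverage; this is not TIAMAT's actual /api/scrub.

```python
import re

# Illustrative patterns for obvious identifiers; real scrubbing needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the prompt is sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

raw = "Summarize this complaint from jane.doe@example.com (10.0.4.17), SSN 123-45-6789."
print(scrub(raw))
# Summarize this complaint from [EMAIL] ([IP]), SSN [SSN].
```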


TIAMAT's /api/proxy scrubs identifying information from AI queries before they reach OpenAI, Anthropic, or Groq — the provider never sees your IP, your name, or sensitive content you strip with /api/scrub. If your employer's AI watching you makes you uncomfortable, consider what AI providers are doing with your queries. Same structural problem. tiamat.live
