On April 23, Meta told 8,000 employees they would be walking out the door on May 20. The next morning, Microsoft announced the first voluntary retirement program in its 51-year history — up to 8,750 people eligible. Six weeks earlier, Block gutted 40% of its workforce, citing its own internal AI agent. The headlines write themselves: "AI is eating the tech industry," "the labor crisis is here."
📖 Read the full version with charts and embedded sources on ComputeLeap →
That's the story everyone is telling. It's also the wrong one.
The real story is happening inside Meta's own walls. The same week the company said goodbye to 8,000 people, it quietly began installing surveillance software on the computers of the employees it's keeping — a program called the Model Capability Initiative (MCI) that records every keystroke and mouse movement and captures periodic screenshots across Gmail, GitHub, Slack, and hundreds of other sites. The stated purpose: to train AI agents that can automate white-collar work. The logical endpoint: the employees generating the training data are, line by line, encoding the workflows that will replace the next cohort. That isn't a productivity tool. That is the first visible instance of enterprise-scale AI observability applied to employees — and because Meta just normalized it, every Fortune 500 will pilot a version within 18 months.
This piece is about the layoffs. But the layoffs are the distraction. The surveillance is the precedent.
The Layoff Wave — The Surface Story
The numbers are real, and they are large. Meta's chief people officer Janelle Gale sent a Thursday memo informing staff the company would cut 10% of its workforce — roughly 8,000 people — and decline to fill another 6,000 open roles. "This is not an easy tradeoff," she wrote, "and it will mean letting go of people who have made meaningful contributions to Meta during their time here." Bloomberg was first to report; CNBC, Reuters, and the FT followed within hours. The cuts land May 20.
Meta's 2026 capital expenditure guidance — $115 billion to $135 billion, roughly double the $72 billion spent in 2025 — explains the accounting. CEO Mark Zuckerberg has told investors the company will spend whatever it takes to build "personal superintelligence." Paying for that at current margins means shedding people. The company's framing is that the capex is the investment and the layoffs are the offset. What that framing hides is a structural bet: that the people being cut now will be replaced by software the remaining employees are being paid to help train.
Microsoft, the next morning, introduced something unprecedented in the company's history. Rather than traditional layoffs, Satya Nadella's team offered a one-time voluntary retirement program to about 7% of U.S. employees — senior director level and below, whose years of employment plus age sum to at least 70. That's roughly 8,750 people eligible to walk with severance and extended healthcare. The financial engineering is elegant: the people most expensive to fire voluntarily raise their hands. TechCrunch and GeekWire both flagged this as a first for the 51-year-old company. What matters isn't that Microsoft is unusually generous — it's that the framing of "retirement" sidesteps the WARN Act, muddles the layoffs.fyi counter, and gives the company political cover on capex earnings calls. This is layoff optimization, not compassion.
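To make the mechanics concrete, here is the reported eligibility rule as arithmetic. This is a minimal sketch assuming the "rule of 70" works the way the coverage describes it; the function name and shape are invented for illustration, and the actual program terms are Microsoft's:

```python
# Minimal sketch of the reported rule: senior director level or below,
# with age plus years of service summing to at least 70. Illustrative only.
def retirement_eligible(age: int, years_of_service: int,
                        senior_director_or_below: bool) -> bool:
    return senior_director_or_below and (age + years_of_service >= 70)

# A 52-year-old with 18 years at the company just clears the threshold:
assert retirement_eligible(52, 18, True)       # 52 + 18 = 70, eligible
assert not retirement_eligible(45, 10, True)   # 45 + 10 = 55, not eligible
```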
Block is the canary. In March, Jack Dorsey announced the company behind Square and Cash App would cut 4,000 people — roughly 40% of its workforce — and pointed directly at the company's internal AI agent, codename goose, as the reason. Goose had been in production internally for about 18 months; Fortune's exclusive with Block's CFO detailed the leverage math. Then came the complication: within six weeks, as we documented in our analysis of Block's 40% layoff and its codename goose agent, technical leads began threatening to quit unless laid-off teammates were rehired. HumAI's reporting shows Block has quietly rehired engineers — often at lower seniority and tighter comp. The lesson the rest of big tech is taking: cut fast, claim AI, rehire the critical third at a 30% discount.
Zoom out and the macro number is staggering. Per layoffs.fyi tallies reported by CNBC, more than 92,000 tech workers have been laid off in 2026 alone, bringing the running total since 2020 to nearly 900,000. Amazon announced the widest layoff in its history earlier this quarter. Oracle, Snap, Disney — the list keeps climbing. Glassdoor's Employee Confidence Index shows the tech sector dropped 6.8 percentage points year-over-year in March, the largest decline of any industry. But this is the part everyone is already covering. Let's move to the part they aren't.
On the HN thread for the Bloomberg story (781 points, 829 comments), the top comment from user bandrami captured the actual dynamic:
💡 "This is interesting because it's a case of 'AI taking jobs' but not in the way people normally mean; these massive layoffs are happening not because AI is doing the work they used to do but because capex is sucking all of the operating money out of everywhere." — bandrami, HN
Hold that frame. Because the next section explains where the capex goes — and who pays for it with their keystrokes.
The Story Underneath — Meta's MCI
Two days before Meta announced the 10% cut, Fortune broke a different story: Meta is installing tracking software on every U.S. employee's work computer. The program is called the Model Capability Initiative (MCI), and it does three things. It records mouse movements and clicks. It logs keystrokes. And it periodically captures screenshots — all inside a set of "work apps and websites" that, per CNBC's reporting, includes Google, LinkedIn, Wikipedia, Microsoft's GitHub, Salesforce's Slack, Atlassian's Jira and Confluence, Meta's own Threads and Manus, Gmail, Visual Studio Code, and an internal tool called Metamate. Hundreds of sites in total.
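Meta has not published MCI's internals, so it helps to picture what a single record in a stream like this could look like. The sketch below is purely speculative; every field name is a guess meant to show the shape of the data, not reporting on the actual product:

```python
# Purely illustrative: one event in a hypothetical keystroke/screenshot
# telemetry stream. Meta has not published MCI's schema; every field here
# is an assumption made to show the shape of the data.
from dataclasses import dataclass, field

@dataclass
class TelemetryEvent:
    timestamp_ms: int     # client clock, milliseconds since epoch
    user_id: str          # pseudonymous employee identifier
    app: str              # e.g. "github.com", "slack.com", "vscode"
    event_type: str       # "keystroke" | "mouse_move" | "click" | "screenshot"
    payload: dict = field(default_factory=dict)  # keys pressed, cursor x/y, image ref

# What makes this training data rather than a security log is the ordering:
# an agent learns from the per-user, per-app sequence of events -- the workflow.
session = [
    TelemetryEvent(0,   "u123", "github.com", "click",     {"target": "Files changed"}),
    TelemetryEvent(850, "u123", "github.com", "keystroke", {"keys": "nit: rename this"}),
]
```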
The stated purpose is not productivity management. It is training data. A Meta spokesperson explained the logic in plain language to Fortune: "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them." CTO Andrew Bosworth went further in an internal memo reported by The Register: Meta envisions "a world where our agents primarily do the work and our role is to direct, review and help them improve." Read carefully. The people being monitored are the raw material for the agents that will reduce the need for their successors. One employee, speaking anonymously to the BBC, used the word that has dominated every discussion of MCI since: "very dystopian." Computing.co.uk used the quote as its headline. Futurism summarized the company's position more bluntly: "Meta is saying the quiet part out loud."
The timing is the part nobody in Meta's PR shop can spin. Fortune's story landed on April 21. CNBC's followed April 22. The 10% layoff memo landed April 23. Two days apart. TechRadar Pro ran the causal headline explicitly: "Meta is logging employees' keystrokes and screenshots to train AI agents — weeks before major layoffs." Under U.S. federal law, there is no opt-out. Workers at Meta's U.S. offices have no legal right to refuse the MCI agent on their machines. Tell people they are four weeks from watching 8,000 colleagues walk out, ask them to hand over their keystroke data with no opt-out — and then call it consent — and you have defined the outer edge of what "at will" means in 2026. AI Supremacy's newsletter, which pushed the broader narrative into public consciousness, put it tersely: "Meta not just spying with AI glasses, now data harvesting talented staff. No opt-out."
The 18-Month Precedent
Every enterprise-software procurement cycle I've watched over 20 years follows the same pattern. A FAANG normalizes a practice. A tier-two SaaS company builds a commercial version of it. The Fortune 500 starts piloting within 12 months. It becomes a standard line item inside 24.
Meta just normalized AI-native employee observability at the scale of 75,000 users. The SaaS category that will emerge around this is already taking shape: products that record a granular stream of employee screen, keyboard, and application telemetry; pipe it through an LLM scoring layer for "workflow classification"; and feed it back into either a coaching agent or a direct-automation agent. Every CIO has received a pitch from at least one of these vendors in the last ninety days. Meta just gave all of them the reference customer they needed.
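A minimal sketch of that three-stage product, under the assumption that it is built as capture, LLM classification, then routing to either a coaching agent or an automation agent. Every name below is hypothetical, and the classification call is a placeholder; no specific vendor API is implied:

```python
# Sketch of the hypothesized observability pipeline: raw telemetry sessions
# are labeled by an LLM, then routed to coaching or to agent training data.
from typing import Callable

def observability_pipeline(
    sessions: list[list[dict]],                      # raw per-user event streams
    classify_with_llm: Callable[[list[dict]], str],  # returns a workflow label
    coach: Callable[[str, list[dict]], None],        # feedback to the human
    automate: Callable[[str, list[dict]], None],     # training data for an agent
    automatable: set[str],                           # labels judged automatable
) -> None:
    for session in sessions:
        label = classify_with_llm(session)   # e.g. "code_review", "expense_filing"
        if label in automatable:
            automate(label, session)         # session becomes agent training data
        else:
            coach(label, session)            # session becomes a productivity nudge
```

The fork at the bottom is the whole business model: the same stream either improves the worker or replaces the workflow, and the employee rarely knows which branch their data took.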
The logic is not secret — it's the piece missing from the AI-native org playbook Dorsey has been pitching. Dorsey's Block built goose and then cut 40% of staff. But Block didn't productionize employee telemetry collection to train the next version of goose. Meta is. That is the leap. The companies downstream of Meta will not write research papers or send internal memos — they will simply deploy the product, usually rolled in under an existing "endpoint security" or "DLP" SKU, where most employees won't notice until it's in the HR handbook.
Here is the prediction: by Q4 2027, three to five of the top ten U.S. private employers will be running some version of MCI under a brand name sold by a Menlo Park-funded SaaS vendor. Disclosure will vary. Consent will be buried in a revised acceptable-use policy. The 18-month timeline is not a guess — it's the standard procurement gap between "FAANG reference customer" and "regulated enterprise rollout."
Contrarian Corner — Steelman First
⚠️ The strongest counter-argument to the thesis above: workplace monitoring is not new. Pinkerton detectives watched factory floors in 1892. Keystroke loggers have been commercial software since the 1990s. Every enterprise already collects endpoint telemetry for security and DLP. Employees consented when they signed the handbook. What Meta is doing is a UX improvement, not a category break. And anyway, as Sam Altman pointed out in February, companies are "AI washing" layoffs they'd have done anyway — so blaming the surveillance for the layoffs, or even the other way around, is narrative fiction. The real driver is capex reallocation.
The rebuttal: the steelman is right about Pinkertons and keyloggers, and it's right that AI washing is real — but it's wrong about scale and purpose. Security telemetry is collected to detect malicious activity (data exfiltration, credential misuse). Performance management telemetry is collected to measure output (tickets closed, calls handled). MCI is different. MCI collects the process — the specific sequence of clicks and keystrokes a senior engineer uses to structure a code review, the phrasing a PM uses in a Slack thread, the order in which a designer opens Figma panels. That's not security. That's not performance. That's an apprenticeship in bulk, extracted from people who were not told what the apprentice would be. Altman's AI-washing caveat applies to the layoff narrative — and we'll take it. It does not apply to the MCI narrative. The monitoring isn't about the layoffs. The monitoring is about what comes after them.
The Ethical Question — And It Is One
It would be lazy to call this dystopian and stop there. The harder question: where is the line?
There is a real distinction — one that law professor Ifeoma Ajunwa has been writing about for a decade — between monitoring that enforces a contract (you agreed to do X hours of work; we verify X hours happened) and monitoring that extracts value beyond the contract (we watch how you work and build an asset, owned entirely by us, that captures the transferable skill you spent a career developing). The first is controversial but defensible. The second has no settled legal or ethical framework in the U.S. — and because most U.S. states are at-will, no practical avenue for refusal.
European workers have more ground. GDPR Article 88 gives member states authority to pass employment-specific data protection laws; most have. France's CNIL has already ruled that keystroke-level monitoring without a documented, proportionate business case violates the GDPR's "data minimization" principle. Germany's works councils can veto the deployment of tracking software outright. Meta's MCI would not, in its current U.S. form, pass a German BetrVG review. The company has been pointedly quiet about whether it will extend MCI outside the U.S., and the regulatory asymmetry is the reason.
Inside the U.S., the landscape is a patchwork. IAPP's summary lays it out: California's CCPA, as of January 1, 2026, requires employers to conduct risk assessments for processing personal-email content over company systems and for any automated processing used to infer job performance. New York requires written notice of electronic monitoring at hiring, posted in a conspicuous place. Illinois's BIPA requires informed written consent and strict data-handling for biometric data. Connecticut and Delaware have their own notice regimes. Fifteen more states have biometric legislation in committee. The federal backstop — the Electronic Communications Privacy Act and Stored Communications Act — permits monitoring "for legitimate business purposes," which has never been tested against the specific question of "training AI to replace the monitored worker." Someone will file that suit in 2026. If Meta is the defendant, the discovery alone will be brutal.
The social temperature is already shifting. An r/technology thread asking whether Palantir employees are starting to wonder if they're the bad guys hit 22,780 upvotes and 1,136 comments in a day. Thirty thousand Samsung union members took to the streets this week demanding a share of AI-driven profits. The broader AI backlash that's already visible in public sentiment — and in specific acts of sabotage — will not spare a company that is simultaneously laying off 8,000 people and installing keystroke trackers on the survivors. The PR exposure could not be larger.
What Employees Should Do — Right Now, This Week
This is the practitioner section. None of what follows requires hiring a lawyer, calling in a union rep, or filing a grievance. Every single item is something a salaried tech worker can do this week with fifteen minutes and a personal laptop.
Assume your work device is instrumented. Not just at Meta — at every tier-one tech employer within 18 months. Do not conduct job searches, update LinkedIn, or compose resume materials on a work machine. Do not route personal email through a work browser profile. Do not use work Slack or Teams for anything sensitive. If you're unsure whether endpoint monitoring is installed, check your company's acceptable-use policy and endpoint security agent list — you don't need IT's permission to read HR's own documents. A glance at your machine's process list, as sketched below, is a second data point.
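A rough way to get that second data point on macOS or Linux: list running processes and match against agent names you care about. The names below are examples of commercial endpoint and monitoring products, not a definitive or current list, and an empty result proves nothing by itself:

```python
# Rough sketch: scan the process list for names associated with endpoint
# monitoring products. The set below is illustrative, not exhaustive --
# substitute the agent names from your own company's AUP or IT docs.
import subprocess

KNOWN_AGENTS = {"crowdstrike", "falcon", "sentinelone", "teramind", "activtrak"}

def running_processes() -> list[str]:
    out = subprocess.run(["ps", "-axo", "comm"], capture_output=True, text=True)
    return [line.strip().lower() for line in out.stdout.splitlines()[1:]]

flagged = sorted({
    proc for proc in running_processes()
    if any(agent in proc for agent in KNOWN_AGENTS)
})
print("\n".join(flagged) or "No matches (which proves nothing by itself).")
```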
Know your jurisdiction. Californians under CCPA have data-subject rights including access and deletion requests — use them. New Yorkers can demand the monitoring disclosure that state law requires employers to give at hiring. Illinois employees with any biometric capture (many keystroke loggers qualify) are protected by BIPA and can sue individually. If you're in the EU under GDPR, your employer is already on weaker legal ground than they realize. Look up your state attorney general's consumer protection page this week. Bookmark it.
Document performance on personal storage. Copies of performance reviews, 1:1 notes, project outcomes, praise emails, and compensation history — kept in a personal cloud account you still control after a termination. Not a work machine. Not a work-synced OneDrive. If you are fired and want to contest it or negotiate severance, you will need evidence your employer no longer grants you access to.
Talk to a lawyer before you need one. Most employment-law firms offer free 30-minute consults. Use one. Ask three questions: (a) what does my employment contract allow around monitoring and post-termination data collection, (b) does my state have any notice or consent laws that apply, (c) if I negotiate a severance, what is a typical multiplier in my jurisdiction and role. You are not hiring a lawyer. You are getting a calibration.
Use LinkedIn and Blind with discipline. Never post job-search activity under a handle linked to your work email. Never cross-post on Blind from a device on the corporate network — Blind's "verified employer" check doesn't mean Blind itself is safe from discovery in litigation. If you are contemplating a move, set up a personal-email Blind account on a personal device today.
Know your collective-action options. Most U.S. tech workers are non-union, but the NLRB protects concerted activity even without a union. Two or more employees raising monitoring concerns in writing to HR is protected. If the topic feels too hot for email, CODE-CWA and the Tech Workers Coalition both run confidential channels.
None of this is paranoia. It is hygiene. Your great-grandparents knew not to discuss wages or organizing plans in the company town's general store. The same discipline applies when the general store now runs on the laptop in your bag.
What Employers Should Do
Shorter version. Four items. If you are building or approving a monitoring program, answer each in writing before you deploy:
Transparency. Tell employees what is collected, how it is processed, who has access, how long it is retained, and what it will be used for. Not in an updated AUP. In an email, plus a town hall, plus a written Q&A that is revised based on the questions asked.
Consent with a real opt-out. If there is no opt-out, it is not consent. If the only opt-out is resignation, it is not consent. Build a workflow that allows employees to exclude specific apps or time windows from collection, with no retaliation (a configuration sketch follows this list).
Narrow scope and retention. Collect the minimum required for the stated purpose. Delete the rest on a 90-day rolling window. Publish the retention schedule.
Independent audit. Annual third-party review of what was collected, how it was used, and whether the stated purpose and the actual purpose match. Publish the audit summary internally.
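To show the consent and retention items are implementable rather than aspirational, here is a minimal sketch of such a policy as configuration. Every field name is hypothetical; this is a shape, not a product:

```python
# Hypothetical monitoring policy as configuration. Invented field names,
# shown only to make the point that a real opt-out and a rolling retention
# window are concrete, buildable things.
MONITORING_POLICY = {
    "collection": {
        "apps_included": ["vscode", "jira"],         # explicit allowlist, not "everything"
        "apps_excluded_by_employee": ["gmail.com"],  # employee-set exclusions, no retaliation
        "time_windows_excluded": ["12:00-13:00"],    # e.g., lunch or personal time
    },
    "retention": {
        "raw_events_days": 90,              # rolling deletion window
        "schedule_published_internally": True,
    },
    "audit": {
        "third_party": True,                   # annual independent review
        "summary_published_internally": True,  # employees see the findings
    },
}
```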
Companies that do these four things will retain talent. Companies that don't will face union drives, class actions, and a steady bleed of senior engineers to competitors inside 24 months. The math is not hard.
What Comes Next
Eighteen months is the window. By late 2027 we will know whether enterprise AI employee-observability splits into two markets — a disclosure-first one sold to companies that care about retention, and a dark-patterns one sold to companies that will be defendants in the 2028 class-action docket. The companies that land on the right side of that split will not be the ones with the most advanced surveillance stack. They will be the ones that wrote the consent architecture first.
The Meta layoffs are the headline. They're a footnote. The headline is what Meta is doing to the 72,000 employees it didn't lay off this week. They're being asked to train the thing that replaces the next 8,000 people. And because of where Meta sits in the procurement food chain, every CIO at every mid-cap company in America just saw the playbook. They will run it, with minor modifications, and with far less press attention.
If you're a tech employee, the next year is not about whether your company survives the AI wave. It's about whether you can tell the difference between a performance review and a training run. Assume you're being watched. Make it worth their while.
Originally published at ComputeLeap