I read a paper last week that made me put my laptop down and stare at the wall for a bit.
Not because it said AI will take jobs. Everyone says that now. Most of it is either doomscrolling dressed up as analysis or breathless optimism about upskilling your way out of a structural problem.
This paper was different.
Two researchers at the University of Pennsylvania and Boston University — Brett Hemenway Falk and Gerry Tsoukalas — built a formal game-theoretic model, ran the mathematics, and proved something genuinely unsettling: even when every firm in a market knows that mass automation will destroy the consumer demand they all depend on, they automate anyway.
Rationality doesn't save you. Perfect information doesn't save you. The structure of competition itself is the trap.
The paper is "The AI Layoff Trap," posted to arXiv on March 21, 2026. It has drawn 1,500+ reactions on LinkedIn, has been cited by JPMorgan's CEO, and is now circulating at every level of the technology industry. Here's what it actually says — and more importantly, what it means for where you want to position yourself in the labor market right now.
The Prisoner's Dilemma hiding inside every AI layoff announcement
The logic of the trap is worth understanding clearly because it changes how you interpret every headline about AI-driven job cuts.
Start with ten competing firms. AI arrives and offers each a choice: replace some human workers, cut your cost structure, gain a competitive edge. Each firm that automates gets cheaper to run. Each firm that doesn't gets undercut by the ones that did.
So far, this is the story everyone already knows. Here's the part that makes it a trap.
The workers being replaced are also consumers. When they lose their income, they stop spending. Every round of layoffs erodes the purchasing power that all ten firms depend on for revenue. Push this logic to its limit and you reach the cliff: firms automate their way to boundless productivity and zero demand. A market full of AI doing work for customers who can no longer afford to buy anything.
Every firm running this analysis can see the cliff. They automate anyway.
Because if your competitors automate and you don't, your cost structure is worse, your margins compress, you get undercut, and you eventually exit the market. The individually rational move — automate — is the collectively catastrophic one. That is the Prisoner's Dilemma. And unlike a coordination failure, which can in principle be solved by agreement, a dominant strategy leaves nothing to coordinate on: rational players defect regardless of what they know. There is no stable voluntary agreement not to automate when the incentive to defect is this strong.
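The structure is easiest to see as a toy payoff matrix. The numbers below are illustrative, not taken from the paper; only their ordering matters.

```python
# Toy two-firm payoff matrix (illustrative numbers, not from the paper).
# Keys: (firm A's choice, firm B's choice) -> (A's payoff, B's payoff).
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),  # healthy demand, shared market
    ("restrain", "automate"): (1, 4),  # A is undercut, B captures share
    ("automate", "restrain"): (4, 1),  # A captures share, B is undercut
    ("automate", "automate"): (2, 2),  # cost savings, but demand erodes
}

def best_response(opponent_choice):
    """Firm A's best reply to a fixed choice by firm B."""
    return max(["restrain", "automate"],
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Automating is a dominant strategy: it is the best reply to BOTH choices.
assert best_response("restrain") == "automate"   # 4 > 3
assert best_response("automate") == "automate"   # 2 > 1

# Yet mutual automation (2, 2) leaves both firms worse off than
# mutual restraint (3, 3). That asymmetry is the trap.
print(PAYOFFS[("automate", "automate")], "<", PAYOFFS[("restrain", "restrain")])
```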
The paper proves this rigorously. The formal result: competitive firms automate past the socially optimal level even with perfect foresight. And two factors make the trap worse, not better:
More competition — as the number of firms increases, each firm's share of the collective demand loss from automation gets smaller. Smaller share means weaker incentive to restrain. A monopolist fully internalises the externality and restrains voluntarily. As you approach a perfectly competitive market, the wedge between private incentives and collective wellbeing approaches its maximum.
Better AI — as AI capability improves and its cost falls relative to human labour, the individual cost savings from automation increase. The trap bites harder. More displacement. Less consumer demand. Faster toward the cliff.
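A back-of-the-envelope version of the competition effect, in my own toy formulation rather than the paper's actual model: say automating one unit of work saves a firm s in costs but destroys d in aggregate consumer demand, and each of N firms feels only its 1/N share of that loss.

```python
# Toy illustration of the competition effect (my formulation, not the
# paper's model). Automating one unit of work saves a firm `s` in costs
# but destroys `d` of aggregate demand, shared across N firms.
s, d = 1.0, 1.5  # illustrative: the social cost exceeds the private saving

for n in [1, 2, 5, 10, 100]:
    internalised_loss = d / n        # the only part of d this firm feels
    private_net_gain = s - internalised_loss
    wedge = d - internalised_loss    # social cost the firm ignores
    print(f"N={n:>3}  private net gain={private_net_gain:+.3f}  wedge={wedge:.3f}")

# N=1: the monopolist's net gain is -0.500, so it restrains voluntarily.
# N>=2: the private gain turns positive even though every automated unit
# destroys 0.5 more value than it saves. The ignored cost, d*(1 - 1/N),
# grows toward d as N rises: more competition, wider wedge.
```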
The sectors that are most competitive and have the best AI tools are headed toward the edge the fastest. This is not a bug. It is the mechanism.
The numbers are not hypothetical
Over 100,000 tech workers were laid off in 2025 alone, with AI cited as the primary driver in more than half the cases. The cuts were concentrated in customer support, operations, and middle management.
In February 2026, Block cut nearly half its 10,000-person workforce. CEO Jack Dorsey stated that AI had made those roles unnecessary and predicted that within a year, most companies would reach the same conclusion.
Salesforce replaced 4,000 customer support agents with agentic AI. Cognition's Devin, deployed at Goldman Sachs and Infosys, enables one senior engineer to do the work of a five-person team.
The exposure extends beyond tech. Roughly 80% of US workers hold jobs with tasks susceptible to automation by large language models. And the cost differential that drives the calculation: human knowledge work runs $50 to $200 per hour fully loaded, while AI knowledge work runs $0.10 to $1.00 per hour. That is a gap of roughly two to three orders of magnitude. When the cost difference is that extreme, the trap activates regardless of foresight.
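That gap is easy to sanity-check from the article's own figures:

```python
import math

human_cost = (50.0, 200.0)  # $/hour, fully loaded (article's figures)
ai_cost = (0.10, 1.00)      # $/hour (article's figures)

# Narrowest and widest ratios across the two ranges
lo_ratio = human_cost[0] / ai_cost[1]   # 50x
hi_ratio = human_cost[1] / ai_cost[0]   # 2000x
print(f"{lo_ratio:.0f}x to {hi_ratio:.0f}x, i.e. "
      f"{math.log10(lo_ratio):.1f} to {math.log10(hi_ratio):.1f} orders of magnitude")
# -> 50x to 2000x, i.e. 1.7 to 3.3 orders of magnitude
```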
None of this is hidden from the CEOs making these decisions. They can read the same data. They automate anyway because the Prisoner's Dilemma doesn't care about awareness. It only cares about incentives.
Why the obvious solutions don't work
The paper is unusually thorough about apparent fixes. Understanding why they fail is as important as understanding the trap itself.
Upskilling and retraining: Partially reduces the gap. Cannot close it. The problem is not that workers lack skills — it is that firms have a structural incentive to automate past the optimal level regardless of worker capability. Upskilling helps individuals. It doesn't change the game-theoretic structure.
Universal Basic Income: Raises living standards for displaced workers. Doesn't change the per-task automation incentive for firms. They still race. UBI addresses the aftermath, not the mechanism.
Worker equity participation: Helpful at the margin. If workers own shares, they partially internalise the demand loss from their own displacement. The externality persists — just reduced.
Voluntary industry agreements: Fail completely. Automation is a dominant strategy. Any voluntary restraint agreement is unstable. The firm that defects captures the cost advantage. No agreement is self-enforcing when defection is individually rational.
Capital income taxes: Zero effect on the automation rate. A multiplicative tax on profits doesn't alter the first-order condition for the automation decision.
One instrument corrects the distortion: a Pigouvian automation tax — charging firms the uninternalised social cost when they replace workers with AI. This forces the individual calculation to align with the collective one. The paper also notes this tax does double duty: its revenue can fund retraining and demand support, compounding the correction over time.
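To see why this works where a profit tax does not, extend the toy sketch above: with a quadratic cost of automating, a per-unit charge equal to the uninternalised demand loss makes the firm's first-order condition coincide with the planner's. Again, this is my illustrative formulation, not the paper's model.

```python
# Continuation of the toy model above (again my formulation). A firm picks
# automation level `a`, with quadratic cost 0.5*c*a**2, to maximise:
#   private:  s*a - (d/n)*a - 0.5*c*a**2   (feels only 1/N of demand loss)
#   social:   s*a -  d*a    - 0.5*c*a**2   (full demand loss counted)
s, d, c, n = 1.0, 1.5, 2.0, 10

private_optimum = max((s - d / n) / c, 0.0)  # FOC: s - d/n = c*a
social_optimum = max((s - d) / c, 0.0)       # planner's FOC: s - d = c*a

# Pigouvian tax per unit automated: exactly the demand loss the firm ignores.
tax = d * (1 - 1 / n)
taxed_optimum = max((s - d / n - tax) / c, 0.0)

print(f"private: {private_optimum:.3f}  social: {social_optimum:.3f}  "
      f"taxed: {taxed_optimum:.3f}")
# -> private: 0.425  social: 0.000  taxed: 0.000
# A multiplicative profit tax would scale the whole objective and leave
# the FOC, and hence the chosen `a`, unchanged. Only the per-unit charge
# moves the decision itself.
```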
Whether you find this policy politically viable or not, the structural argument about why everything else fails stands independently. The trap is real. The mechanisms that seem like they should stop it don't.
Which side of the automation layer do you want to be on?
Here is where this conversation becomes directly practical.
The roles displaced first are not the ones building and operating AI systems. They are the roles applying known processes to routine tasks — customer support, operations, data processing, middle management. The paper notes that the current displacement wave is disproportionately hitting entry-level workers in these categories.
The roles on the other side of the boundary — the ones building, deploying, securing, and operating the automation infrastructure — are growing. Someone has to build the agentic AI system that replaced those 4,000 Salesforce support agents. Someone has to write the Bedrock workflows, configure the IAM policies, manage the API costs, monitor the CloudWatch metrics, debug the Lambda function when it breaks at 3am. Someone has to architect the multi-agent orchestration layer that coordinates specialised AI models across an enterprise.
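To make "write the Bedrock workflows" concrete, here is a minimal sketch of that glue code: a Lambda handler that calls a model through the Bedrock Converse API and publishes token spend as a CloudWatch metric. The model ID, metric namespace, and event shape are illustrative assumptions, not anyone's reference implementation.

```python
# Minimal sketch of agent glue code: invoke a Bedrock model from Lambda
# and track token spend in CloudWatch. Model ID, namespace, and the
# `event["query"]` shape are assumptions for illustration.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Send the user's message to the model via the Converse API
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
        messages=[{"role": "user", "content": [{"text": event["query"]}]}],
        inferenceConfig={"maxTokens": 512},
    )
    answer = response["output"]["message"]["content"][0]["text"]

    # Keep API costs visible: publish output-token usage as a custom metric
    usage = response["usage"]
    cloudwatch.put_metric_data(
        Namespace="SupportBot",  # assumed namespace
        MetricData=[{
            "MetricName": "OutputTokens",
            "Value": usage["outputTokens"],
            "Unit": "Count",
        }],
    )
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```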
That person is a cloud engineer or AI architect. And the trap the paper describes is, for now, actively working in their favour.
As automation deepens, four specific skill areas become more valuable, not less:
AWS and cloud infrastructure for AI workloads — Lambda, Bedrock, SageMaker, and ECS need engineers who understand them at genuine depth. Not surface familiarity from documentation. The kind of understanding that only comes from deploying real systems, watching them break, and debugging them under pressure.
Security of agentic systems — as AI agents handle more sensitive operations — accessing databases, reading customer records, making financial decisions — IAM policy engineering, Bedrock Guardrails, and data governance become critical architectural concerns. The cost savings from automation evaporate the moment a poorly governed agent causes a breach or regulatory violation (a least-privilege policy sketch follows this list).
Multi-agent architecture — the Salesforce case is not one model responding to queries. It is an orchestrated system of specialised agents, each calling tools, reading data, writing records. Building these systems requires understanding agentic loops, tool use, coordinator-subagent patterns, MCP server integration, and the failure modes that emerge when agents interact at scale (a coordinator sketch also follows this list).
Machine learning operations — as AI inference becomes a core production workload, engineers who understand SageMaker, Bedrock model deployment, MLflow pipeline management, and real-time inference optimisation hold skills that simply didn't exist as a profession five years ago.
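On the security item: least privilege for an agent usually starts with an execution role that can invoke exactly one model and nothing else. A minimal sketch, with placeholder region, model ARN, and policy name:

```python
# Minimal sketch of a least-privilege policy for an agent's execution role:
# it may invoke exactly one foundation model and nothing else. The region,
# model ARN, and policy name are placeholders.
import json
import boto3

AGENT_INVOKE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeOneModelOnly",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
            ],
        }
        # Deliberately no s3:*, dynamodb:*, or iam:* statements; every extra
        # permission widens what a misbehaving agent can do.
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="support-agent-invoke-only",  # placeholder name
    PolicyDocument=json.dumps(AGENT_INVOKE_POLICY),
)
```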
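And on multi-agent architecture: stripped of any particular framework, the coordinator-subagent pattern is a routing layer in front of specialised workers. The sketch below is deliberately framework-free; in a real system each stub would wrap a model call with its own tools, retries, and guardrail checks.

```python
# Framework-free sketch of the coordinator-subagent pattern. The routing
# rules and subagents are stand-ins; in a real system each subagent would
# wrap a model call with its own tools, retries, and guardrail checks.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Subagent:
    name: str
    can_handle: Callable[[str], bool]   # routing predicate
    run: Callable[[str], str]           # does the actual work

def billing_agent(task: str) -> str:
    return f"[billing] resolved: {task}"   # stub for a model + tools call

def refunds_agent(task: str) -> str:
    return f"[refunds] resolved: {task}"   # stub for a model + tools call

SUBAGENTS = [
    Subagent("billing", lambda t: "invoice" in t.lower(), billing_agent),
    Subagent("refunds", lambda t: "refund" in t.lower(), refunds_agent),
]

def coordinator(task: str) -> str:
    """Route a task to the first subagent that claims it; fail loudly
    otherwise. Silent fallthrough is a classic multi-agent failure mode."""
    for agent in SUBAGENTS:
        if agent.can_handle(task):
            return agent.run(task)
    raise ValueError(f"no subagent can handle: {task!r}")

print(coordinator("Customer disputes invoice #1142"))  # routed to billing
```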
The honest version of "you need to upskill"
The paper explicitly shows that individual upskilling is insufficient as macro policy. It doesn't change the structural incentive that drives collective over-automation. Knowing this is clarifying.
What it does not mean is that individual skill development is irrelevant. It means the direction matters enormously.
There is a clear dividing line. Below it: routine software tasks, basic configuration, scripted testing, repetitive data processing. These are the tasks AI handles at $0.10 per hour. Being in this layer is structurally precarious regardless of proficiency.
Above it: systems design for AI workloads, security architecture for agentic systems, infrastructure engineering for real-time ML inference pipelines, multi-agent coordination, debugging complex agent failures at depth. These require judgment and pattern recognition from real-world failures that current AI cannot yet replicate.
The gap between someone with genuine hands-on experience — who has deployed and debugged real IAM policies, watched real CloudWatch alarms fire, recovered from real Terraform state corruption, built and tested real Bedrock agents — and someone who has consumed tutorials about these topics is exactly the gap that automation closes slowly and reluctantly.
The window where these skills are scarce and highly compensated is real. It is not permanent. Building depth now, while the scarcity premium exists, is the rational individual response to a structural dynamic you can see but cannot individually stop.
The certifications that signal you're on the right side
Two certifications matter specifically in this context.
AWS ML Engineer Associate (MLA-C01) — the certification for engineers building and operating machine learning systems on AWS. Covers SageMaker, Bedrock, data pipelines with Glue and Athena, Kinesis for real-time ingestion, and MLOps practices. As more organisations move AI workloads to production, the engineers who understand this stack are the ones on the growing side of the automation boundary.
Claude Certified Architect (CCA-001) — Anthropic's first official technical certification. Launched March 2026, backed by a $100M Claude Partner Network. Covers agentic loops, MCP server architecture, multi-agent coordination, Bedrock Guardrails, and CI/CD for Claude-powered systems. As the agentic AI stack on AWS matures — and the Mythos and AgentCore launches this month confirm it is maturing fast — the engineers who understand how to architect, constrain, and audit these systems will be the ones organisations trust to deploy them.
These are not certifications that signal you studied documentation. They require demonstrating hands-on competency with real systems under real conditions.
One more thing the paper says that most summaries skip
The paper's formal model shows that the over-automation wedge is strictly increasing in N — where N is the number of firms in the market.
More competitive markets exhibit wider automation gaps. This runs directly counter to the standard economic intuition that competition disciplines firms to act in consumers' interests. Here, more competition dilutes each firm's share of the demand loss, weakening the private incentive to restrain.
The implication: the sectors where you are most likely to see aggressive AI-driven displacement are not the monopolised ones. They are the highly competitive ones — exactly the tech industry, the SaaS market, the enterprise software space where most engineers work.
If you are in a competitive tech sector, the automation pressure on the roles around you is higher than average. The acceleration is not going to stop because the competitive structure that drives it is not going to change.
The question that actually matters is not whether automation is happening. It is whether the specific skills you are building put you on the operating side of AI systems or the replaced side.
The skills above the automation boundary — real AWS infrastructure, Bedrock agent architecture, SageMaker and MLOps, multi-agent system design — are what the Cloud Edventures platform is built around. Three tracks of hands-on labs in isolated real AWS sandboxes: Core AWS Foundations, AWS ML Engineer MLA-C01, and Claude Certified Architect CCA-001.
Not simulations. Not click-through walkthroughs. Real Lambda functions, real IAM policies, real Bedrock agents — with automated validation that tells you whether your configuration is actually correct. No AWS account needed.
The paper is worth reading in full: arxiv.org/abs/2603.20617. And the skills worth building are the ones the trap cannot reach.
Where do you think the automation boundary sits in your own role right now? This is the conversation worth having in the comments.