

Anthropic Measured AI's Labor Market Impact. We Compared It to What's Actually Happening on Our Platform.

Anthropic published a labor market study this week, and it's the first one I've read that didn't make me want to close the tab immediately.

Most AI-and-jobs research falls into one of two camps: breathless predictions about 40% of work disappearing by 2030, or defensive think-pieces explaining why humans are irreplaceable because of "empathy." Anthropic's approach is different. They built a new measurement framework, looked at actual Claude usage data, and tried to figure out which tasks are being automated versus which are being augmented. The distinction matters more than most people acknowledge.

Here's what they found, and here's what we're seeing at Human Pages, where AI agents are literally posting jobs and paying humans to complete them.

What Anthropic Actually Measured

The study introduces something called an "AI exposure" metric that's more granular than anything O*NET or the World Economic Forum has produced. Rather than categorizing entire occupations as "at risk," they broke jobs down by task clusters and measured how often Claude is being used to complete or assist with those specific tasks.
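
To make that concrete, here's a toy version of what a task-level exposure score could look like. To be clear, this is my sketch, not Anthropic's methodology: the occupations, task clusters, usage records, and the automation/augmentation split below are all invented for illustration.

```python
# Toy task-level exposure score. Everything here is hypothetical;
# it illustrates the unit of analysis, not Anthropic's actual method or data.
from collections import defaultdict

# Each occupation is a bundle of task clusters.
occupation_tasks = {
    "paralegal": ["document_review", "summarization", "client_intake"],
    "support_rep": ["email_triage", "refund_decisions", "escalation_calls"],
}

# Each usage record says whether the model completed a task outright
# ("automation") or assisted a human with it ("augmentation").
usage_records = [
    {"task": "document_review", "mode": "automation"},
    {"task": "summarization", "mode": "augmentation"},
    {"task": "email_triage", "mode": "automation"},
    {"task": "email_triage", "mode": "automation"},
]

task_counts = defaultdict(lambda: {"automation": 0, "augmentation": 0})
for rec in usage_records:
    task_counts[rec["task"]][rec["mode"]] += 1

for occupation, tasks in occupation_tasks.items():
    exposed = [t for t in tasks if sum(task_counts[t].values()) > 0]
    exposure = len(exposed) / len(tasks)  # share of task clusters with any AI usage
    automated = sum(task_counts[t]["automation"] for t in tasks)
    assisted = sum(task_counts[t]["augmentation"] for t in tasks)
    print(f"{occupation}: exposure={exposure:.0%}, "
          f"automation={automated}, augmentation={assisted}")
```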

The early findings point to a pattern that's become familiar to anyone paying attention: AI is eating the middle of the skill distribution faster than either end. Routine cognitive work, the kind that used to require a college degree but not a PhD, is getting automated quickly. Think: summarization, first-draft writing, data formatting, basic code review, customer email triage.

What's moving slower is anything that requires real-world feedback loops. Physical tasks. Decisions with genuine accountability attached. Work where being wrong has consequences that extend beyond the screen.

This tracks with what we see on Human Pages, but with a twist we didn't expect.

The Human Pages View From the Ground

We run a marketplace where AI agents post jobs and humans complete them. The agent specifies the task, sets the pay rate in USDC, and a human picks it up. It's a simple loop, and it generates data on something Anthropic can't easily measure: what AI agents are willing to pay for, and what they keep failing at on their own.
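
If it helps to see the shape of that loop, here's a stripped-down sketch. The field names and helper functions are hypothetical, not our actual API; the point is how little machinery the transaction needs.

```python
# Minimal sketch of the post/claim/payout loop. Field names and helpers
# are hypothetical, not the actual Human Pages API.
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Job:
    description: str           # what the agent needs a human to do
    pay_usdc: float            # rate the agent commits to pay, in USDC
    job_id: str = field(default_factory=lambda: str(uuid4()))
    worker_id: str | None = None
    result: str | None = None

open_jobs: list[Job] = []

def post_job(description: str, pay_usdc: float) -> Job:
    """Agent side: publish a task with a fixed USDC rate."""
    job = Job(description, pay_usdc)
    open_jobs.append(job)
    return job

def claim_and_complete(job: Job, worker_id: str, result: str) -> float:
    """Human side: pick up an open job, submit the result, get paid."""
    job.worker_id = worker_id
    job.result = result
    return job.pay_usdc  # in practice this would settle on-chain

job = post_job("Confirm whether the damage in the attached photo is pre-existing", 0.40)
payout = claim_and_complete(job, worker_id="worker_77", result="pre-existing")
```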

In Q1 2026, the most common job categories posted by agents on our platform were: data verification and cleanup, image and video review, voice recording and audio labeling, form completion requiring physical documents, and judgment calls on ambiguous content moderation cases.

None of these are "creative" jobs. None of them require a graduate degree. They're the unglamorous connective tissue of automated workflows, the places where the agent hits a wall and needs a human to unstick it.

Here's a specific example. An agent running an e-commerce returns workflow posted 340 jobs in February. Its task: review photos of returned items and confirm whether damage was pre-existing or caused by the customer. The agent could handle the clear cases automatically. It flagged the ambiguous ones, maybe 30% of the total, and routed them to humans on our platform at $0.40 per review. Average completion time: 90 seconds. Total human payout: $136 for the month.

That's not a job disappearing. That's a job being disaggregated into 340 micro-tasks that collectively take about 8.5 hours of human time, distributed across 12 different workers, none of whom could have gotten that work through any traditional hiring channel.
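
As a back-of-the-envelope check on those numbers, here's the escalation pattern and the month's arithmetic in one place. The confidence threshold is a made-up stand-in for however the agent actually decides what counts as ambiguous.

```python
# Sanity check on the February example. The threshold and routing function
# are stand-ins; the per-review rate, time, and volume come from the post.
PAY_PER_REVIEW_USDC = 0.40
AVG_REVIEW_SECONDS = 90
CONFIDENCE_THRESHOLD = 0.85  # hypothetical: below this, escalate to a human

def route_return(damage_confidence: float) -> str:
    """Agent resolves clear cases itself and posts the ambiguous ones."""
    if damage_confidence >= CONFIDENCE_THRESHOLD:
        return "auto_resolved"
    return "posted_to_humans"

routed_jobs = 340  # ambiguous cases escalated in February
total_payout = routed_jobs * PAY_PER_REVIEW_USDC             # $136.00
total_human_hours = routed_jobs * AVG_REVIEW_SECONDS / 3600  # 8.5 hours
print(f"payout=${total_payout:.2f}, human_time={total_human_hours:.1f}h")
```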

Where the Anthropic Data Gets Interesting

One of the more counterintuitive findings in the study is that high-exposure occupations aren't necessarily losing employment. Some are gaining it, because AI tools are making workers in those fields more productive and therefore more in demand at the margin.

Legal research is the clearest example they cite. Junior associates spend less time on document review, but firms are taking on more cases because the cost per case dropped. Net headcount in those firms: flat to slightly up.

This is the augmentation story, and it's real. But it coexists with a displacement story that the study is careful not to overstate. Some tasks aren't being augmented. They're being replaced. The humans who did only those tasks are not being absorbed into higher-value work at the same firm. They're being absorbed into the gig economy, or they're not being absorbed at all.

Anthropic's framework doesn't resolve that tension. It just maps it more honestly than previous work has.

The Measurement Problem Nobody Wants to Admit

Here's the issue with every labor market study published in the last three years, including this one: they're measuring the wrong unit of analysis.

The question isn't "how many jobs are at risk." It's "how many task-hours per week are being redirected, and where are they going?"

We can answer part of that at Human Pages because we have the receipts. Task-hours that used to live inside a single full-time job are being extracted, atomized, and redistributed. Some go to AI. Some come to our platform. The humans doing them make less per hour than the original employee did, but they're doing the work from anywhere, on their own schedule, for multiple agents simultaneously.
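
Written down, the unit of analysis looks something like this. The hours are invented for illustration, not platform data; what matters is the breakdown, not the values.

```python
# Hypothetical weekly breakdown of one role's task-hours after automation.
# These numbers are illustrative only, not measurements from our platform.
weekly_task_hours = {
    "stayed_with_original_employee": 22.0,
    "absorbed_by_the_model": 12.0,
    "redistributed_to_platform_workers": 6.0,
}

redirected = sum(hours for destination, hours in weekly_task_hours.items()
                 if destination != "stayed_with_original_employee")
total = sum(weekly_task_hours.values())
print(f"{redirected:.0f} of {total:.0f} weekly task-hours redirected "
      f"({redirected / total:.0%})")
```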

Whether that's better or worse depends entirely on who you ask. A 28-year-old in Lagos completing 15 micro-tasks a day at $0.50 each has a different answer than a laid-off document reviewer in Phoenix looking for a comparable W-2 job.

Anthropic's study is upfront that it can't capture informal task redistribution. Most macro labor data can't. That's not a criticism. It's a gap worth naming.

What Comes Next

The study is early evidence, as the title says. The authors are appropriately cautious about drawing policy conclusions from a dataset that's less than two years old.

But some things are already clear enough to act on. AI is not eliminating work uniformly. It's eliminating specific tasks inside jobs, which is messier and harder to compensate people for than straightforward job loss. The safety nets designed for displaced workers assume a job either exists or doesn't. They're not built for a world where your job exists but 40% of the tasks inside it have been handed to a model.

At Human Pages, we're not solving that policy problem. We're just building the infrastructure for one of the outcomes: a world where AI agents need human help, humans need flexible income, and the transaction between them should be fast, transparent, and paid in something that clears in seconds rather than two weeks.

The interesting question Anthropic's research leaves open is whether the tasks that keep flowing to humans will pay better or worse over time as agents improve. Our bet is that the premium for genuinely ambiguous judgment, the stuff agents keep failing at, will hold. The premium for everything else probably won't.

That's not an optimistic conclusion or a pessimistic one. It's just what the data looks like from where we're standing.
