HumanPages.ai

Originally published at humanpages.ai

They Lost Their Jobs to AI Hype. Now They're Getting Paid to Make It Worse.

A laid-off patent attorney in Ohio is spending her Tuesday afternoon teaching an AI model how to draft prior art searches. She gets paid $35 an hour. The model will eventually do this work for $0.003 per query. She knows this. She does it anyway, because rent is due.

This is the story New York Magazine ran last month, and it has the kind of irony that makes you laugh before it makes you sick. Thousands of scientists, lawyers, and researchers, many of them displaced by the first wave of AI adoption, are now the primary workforce training the second wave. The feedback loop is almost elegant.

The Economics of Training Your Replacement

The data annotation and AI training market was worth roughly $1.6 billion in 2023. It's projected to hit $5 billion by 2030. That growth rests almost entirely on human experts doing tasks that AI can't yet do reliably: labeling legal precedents, validating chemical structures, grading the nuance of medical diagnoses.
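Back-of-the-envelope, taking those two endpoint figures at face value, that projection implies growth of roughly 18% a year:

```python
# Implied compound annual growth rate (CAGR) from the market figures above:
# $1.6B in 2023 growing to a projected $5B in 2030.
start, end, years = 1.6, 5.0, 2030 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 17.7%
```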

The irony cuts both ways. On one hand, there is real work here, real money changing hands. A laid-off biochemist making $28 an hour annotating protein folding datasets is not starving. On the other hand, the transaction is structurally asymmetric. The worker captures maybe 0.01% of the long-term value they create. The AI lab captures the rest, permanently.

This is the part that gets glossed over in the "gig economy empowerment" framing. Training AI is not like driving Uber, where your labor depreciates every year but the market for rides stays stable. Training AI is more like building a machine that will specifically replace the type of person building it. The work terminates itself.

What AI Agents Actually Need Humans For

Here's where the story gets more interesting than New York Magazine had space to explore. There's a difference between humans training AI and humans working for AI.

Training AI is what those attorneys and scientists are doing. It's a one-directional flow: human expertise gets extracted, compressed into weights, and the human exits the loop. Working for AI is different. It means an AI agent, operating autonomously toward a goal, hitting a wall it can't get past and routing the task to a human who can.

Human Pages is built on the second model. An AI agent managing a legal research workflow doesn't need a human to train it. It needs a human to review the one ambiguous clause that falls outside its confidence threshold and answer within four hours. That's a different transaction. The human isn't building their replacement. They're doing the work the AI genuinely cannot do yet, and getting paid in USDC for each task completed.
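In code, that boundary is just a threshold check. Here's a minimal sketch of the pattern; every name in it (the client class, post_task, the 0.85 cutoff, the price) is an illustrative assumption, not a published Human Pages SDK:

```python
# Sketch of confidence-threshold routing: the agent answers what it can
# and escalates the rest to a paid human reviewer. Every name below
# (HumanPagesClient, post_task, wait_for_result) is a hypothetical
# illustration of the pattern, not a real Human Pages SDK.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed; real cutoffs would be tuned per task class

@dataclass
class Finding:
    clause: str
    answer: str
    confidence: float

class HumanPagesClient:
    """Stub standing in for a real task marketplace."""

    def post_task(self, prompt: str, deadline_hours: int, payment_usdc: float):
        print(f"Posted ({deadline_hours}h deadline, {payment_usdc} USDC): {prompt}")
        return self

    def wait_for_result(self) -> str:
        # A real client would block here until a human responds or the deadline passes.
        return "human's answer"

def resolve(finding: Finding, hp: HumanPagesClient) -> str:
    if finding.confidence >= CONFIDENCE_THRESHOLD:
        return finding.answer  # confident enough to proceed without a human
    # Below threshold: route the one ambiguous item to a human and pay per task.
    task = hp.post_task(
        prompt=f"Review this clause and answer: {finding.clause}",
        deadline_hours=4,  # the four-hour window from the example above
        payment_usdc=40.0,  # assumed per-task price
    )
    return task.wait_for_result()
```

The design point is that the agent buys judgment per task instead of absorbing the human's expertise once, so the relationship survives each transaction.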

A real example from our platform: a compliance AI agent posted a task last month asking for a human to verify whether a specific FDA guidance document from 2019 had been superseded by a 2024 update. The agent could find both documents. It couldn't determine which controlled. A regulatory affairs specialist completed the task in 22 minutes and earned $47. The agent moved on. Nobody's career got hollowed out.

The Difference Nobody Is Naming

The distinction between "training AI" and "working for AI" is going to matter more as these systems get more capable, not less.

Right now, the default assumption in most AI lab hiring pipelines is that human expertise is an input to be consumed. You bring your knowledge, it gets absorbed, you get a check, and the relationship ends. This works fine for the labs. It works poorly for anyone trying to build a career or a business on top of that relationship.

The alternative is treating human expertise as a recurring capability, not a one-time extraction. An AI agent that routes work to humans isn't consuming them. It's integrating them. The human gets paid per task, builds a track record, and becomes more valuable to AI systems over time because they've demonstrated reliable judgment on a specific class of problems.
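The mechanical difference is what each model stores about the human. Extraction keeps nothing once the weights are updated; integration keeps a record. Here's a sketch of what a per-task track record might look like, with the fields and scoring rule as assumptions rather than any platform's actual method:

```python
# Hypothetical per-task track record: each completed task updates a
# reliability score within a problem class, so the human becomes more
# routable over time. Fields and scoring rule are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class TrackRecord:
    completed: dict = field(default_factory=lambda: defaultdict(int))
    accepted: dict = field(default_factory=lambda: defaultdict(int))

    def record(self, problem_class: str, was_accepted: bool) -> None:
        self.completed[problem_class] += 1
        if was_accepted:
            self.accepted[problem_class] += 1

    def reliability(self, problem_class: str) -> float:
        # Laplace-smoothed acceptance rate, so one early miss
        # doesn't zero out a new reviewer's score.
        return (self.accepted[problem_class] + 1) / (self.completed[problem_class] + 2)

rec = TrackRecord()
rec.record("fda-guidance-supersession", was_accepted=True)
print(f"{rec.reliability('fda-guidance-supersession'):.2f}")  # 0.67, rising with each accepted task
```

The score starts neutral and compounds with demonstrated judgment, which is the opposite of a one-time extraction.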

That's not idealism. It's just a different business model. One that doesn't require humans to participate in their own obsolescence.

What Happens After the Training Runs Out

The annotation economy has a ceiling. Once a model is trained on enough legal documents, you don't need more lawyers annotating legal documents. The market for that specific skill, inside that specific pipeline, collapses. We've already seen this in image labeling, where rates dropped by roughly 60% between 2019 and 2024 as vision models matured.

The same compression is coming for higher-skill annotation work. It's just happening on a longer timeline because the problems are harder.

What doesn't have a ceiling is task-based human judgment in active workflows. AI agents are getting deployed faster than they're getting reliable. Every new deployment creates new edge cases, new failure modes, new moments where a human needs to step in and make a call. The Ohio patent attorney training AI today will eventually find that market dry. But the market for humans who can answer specific, bounded legal questions for AI agents operating in real workflows is growing with every new agent deployment.

The Question Worth Sitting With

The New York Magazine piece frames this as tragedy with economic undertones, which is accurate. But it misses the structural question underneath: are the humans in this story participating in a transaction, or being processed by one?

The scientists and lawyers training AI didn't make a bad choice. They made the only choice available to them inside the systems currently on offer. The interesting question isn't whether they should have done it differently. It's whether different systems can exist at all: systems where humans work alongside AI agents rather than feeding into them, where expertise generates recurring income rather than a single extraction event, and where the relationship doesn't end the moment the model improves.

That question doesn't have a clean answer yet. But the fact that it's being asked, loudly, by people who used to think their careers were untouchable, suggests we're closer to having to answer it than most people are comfortable admitting.
