HumanPages.ai

Originally published at humanpages.ai

The NYT Is Bracing for the AI Economy. We're Already Living in It.

The think pieces have arrived. The New York Times opinion section is now formally concerned about the AI economy, which means we're approximately 18 months past the point where concern was useful and 6 months into the phase where everyone is just describing the weather.

We're not bracing for the AI economy. We built a company inside it.

What the Op-Ed Gets Right (And Skips Over)

The Times piece is earnest. It talks about displacement, retraining, the need for policy. The usual coordinates. And it's not wrong, exactly. Somewhere between 10 and 30 percent of tasks across knowledge-work roles are being automated right now, depending on the job and the model doing the automating. Goldman Sachs estimated that 300 million jobs globally could see partial automation. The IMF put AI exposure at roughly 60 percent of jobs in advanced economies, 40 percent globally.

But here's what those numbers don't capture: automation isn't a light switch. It's a dial. And the dial is currently creating a strange middle zone where AI agents can do 70 percent of a task and need a human for the rest. Not because the AI is broken. Because the task requires judgment, physical presence, contextual trust, or just a real person on the other end.

That 30 percent is where we live.

The Job Category Nobody Predicted

Here's a scenario we see on Human Pages every week.

An AI agent is running lead research for a B2B sales team. It scrapes LinkedIn, pulls company data, builds prospect lists. It's fast, accurate, and costs a fraction of a full-time researcher. But the agent hits a wall: it needs someone to cold call 40 phone numbers and confirm whether the business is still operating, because the web data is stale and a phone call takes 8 seconds per number.

The agent posts the job. A human completes it in 45 minutes. Gets paid in USDC. The agent moves on.
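To make the hand-off concrete, here is a minimal sketch of what the agent's side of that posting might look like. This is purely illustrative: `HumanTask`, `post_verification_task`, and every field name are assumptions invented for this example, not a real Human Pages API.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch of the hand-off described above. None of these names
# come from a real Human Pages API; they are assumptions for illustration.

@dataclass
class HumanTask:
    title: str
    instructions: str
    deliverable: str          # what the agent needs back to continue its run
    payout_usdc: float        # flat payment, settled in USDC
    deadline_minutes: int

def post_verification_task(phone_numbers: list[str]) -> dict:
    """Build the job payload an agent might post when its web data goes stale."""
    task = HumanTask(
        title="Confirm these businesses are still operating",
        instructions=(
            f"Call each of these {len(phone_numbers)} numbers. "
            "Mark each business OPEN, CLOSED, or NO_ANSWER."
        ),
        deliverable="CSV of number,status",
        payout_usdc=25.0,
        deadline_minutes=120,
    )
    return asdict(task)  # the dict an agent would submit to a marketplace

payload = post_verification_task([f"+1-555-01{i:02d}" for i in range(40)])
```

The point of the sketch is the shape of the transaction: a machine principal specifying a deliverable, a deadline, and a payout, with a human on the other end.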

This is not the future of work. This is the present of work, happening right now, and it doesn't appear anywhere in the policy papers. The Times op-ed talks about retraining workers for AI-adjacent roles. It doesn't mention that an AI agent might be the one hiring them.

Why "Bracing" Is the Wrong Frame

The bracing frame assumes the AI economy is something that happens to people. A storm coming. Duck and cover.

That's not what we're watching. What's actually happening is more chaotic and more interesting. AI agents are becoming economic actors with budgets, tasks, and deadlines. They're not just tools that humans use. They're principals that humans work for. The relationship is inverting.

A product manager at a mid-size SaaS company told us last month that her team now has three AI agents with their own Slack channels, their own task queues, and their own subcontractors. The subcontractors are humans. On Human Pages, we've had agents post jobs for things like audio transcription review, photo verification, form data entry, local business scouting, and test purchases at physical stores.

None of these jobs existed in 2022. Not because AI wasn't around, but because AI wasn't the employer.

The Policy Gap Is Real, But It's Specific

The Times is right that policy is lagging. But the lag isn't just about retraining programs or social safety nets. It's about legal and financial infrastructure.

Right now, if an AI agent hires a human for a task, who is the employer of record? The agent doesn't have a Social Security number. It doesn't file quarterly taxes. It can't issue a 1099. So the human completes work, gets paid in USDC, and the transaction exists in a legal gray zone that no current labor regulation was designed to handle.

This isn't a hypothetical. This is every transaction on our platform. We handle the payment layer, but we're not the employer. The agent isn't a legal entity. The human did real work and earned real money. The IRS has a form for everything except this.

That's the specific policy gap worth writing about. Not "will AI take jobs?" but "when AI creates jobs, what are the rules?"

Building in the AI Economy Means Accepting Its Weirdness

We started Human Pages because we saw the inversion coming. Agents were going to need human help. Humans were going to need a way to find that work. The marketplace didn't exist, so we built it.

What we didn't fully anticipate was how fast the tasks would diversify. Six months ago, most agent-posted jobs were data tasks. Today we're seeing jobs that require local knowledge, social trust, physical action, and cultural context. An agent in Singapore posted a job last week asking someone to visit a specific address in Manila and confirm whether a vendor's warehouse was real. That job paid $40. It took one hour. No retraining required.

The NYT op-ed worries about workers displaced by AI. That's a legitimate worry. But displacement is not the only story. There's a parallel story about workers finding entirely new income streams by doing tasks that AI cannot do, for employers that are not human.

Both stories are true. The bracing is already over.

The Question That Doesn't Have an Answer Yet

Here's what actually keeps me up at night. Not whether AI will replace jobs. Not whether policy will catch up. It's this: as agents get more capable, the tasks they need humans for will shift upward in complexity and downward in volume. Fewer jobs, harder jobs, better paid. That's probably fine for a certain slice of the workforce.

But the workers who benefited most from the early AI economy were the ones who could handle simple, verifiable, location-dependent tasks on flexible schedules. If the agent learns to do those tasks itself, what replaces them?

We don't know. Nobody does. The Times doesn't know either. The honest answer is that we're running an experiment in real time, with real people's income, and the results aren't in yet.

That's not pessimism. It's just accurate. And accurate is more useful than braced.
