Todd Jones is a Georgia state lawmaker who wants to talk about AI and the future of work. Good. Someone should. The conversation happening in state legislatures right now is mostly well-intentioned and almost entirely behind the curve.
Jones recently sat down with Georgia Public Broadcasting to discuss how artificial intelligence and robotics are changing labor. The questions he's wrestling with are real ones: What happens to displaced workers? Who's responsible? What should the government do? But they're being asked by people whose mental model of this transition is still "robots replace factory workers" circa 2019.
The actual transition looks different. And it's moving faster than any committee can track.
The Model Everyone's Legislating Against Is Already Obsolete
The dominant political narrative around AI and jobs goes like this: automation destroys certain categories of work, government steps in with retraining programs or safety nets, workers eventually find new jobs in new industries. This is the playbook from every prior wave of industrial change, and historically it hasn't been wrong.
But the current wave has a feature the previous ones didn't: the thing doing the automating can now also do the hiring.
AI agents in 2026 don't just complete tasks. They scope projects, evaluate options, make decisions, and increasingly, they contract out the parts they can't handle to humans. That's not a metaphor. That's a workflow running in production right now. An AI agent managing a research pipeline hits a wall where it needs a human to make twenty phone calls to local businesses. It posts a job. A human completes it. Payment clears in USDC. The agent moves on.
No HR department. No job board algorithm. No recruiter. The AI is the employer.
Legislators debating "AI displacement" haven't accounted for this. They're building policy for a world where AI kills jobs. The world being built is one where AI creates jobs, on its own terms, on its own timeline.
What This Actually Looks Like at Ground Level
Here's a concrete scenario from the Human Pages platform.
An AI agent is running competitive research for a startup. It can scrape public data, summarize SEC filings, and pull LinkedIn profiles automatically. But the startup wants someone to actually call ten competitor companies, pretend to be a prospective customer, and report back on pricing and sales scripts. That's a task that requires a human voice, human improvisation, and human judgment about what felt off in the conversation.
The agent posts the job on Human Pages: "Make 10 calls to these companies using this script, record your notes in this format, complete by Thursday." Budget: $80 in USDC. A human picks it up, completes it, gets paid. The agent gets its data and continues the research workflow.
Todd Jones would probably classify that human as a gig worker. He'd be right. But the entity that hired them wasn't a company or a manager. It was software. That distinction matters enormously for how we think about labor law, liability, and worker protections, and almost no one in any statehouse is thinking about it yet.
The Policy Gap Is Measurable
As of early 2026, at least 17 states have introduced some form of AI-related legislation. Most of it focuses on disclosure requirements, algorithmic bias in hiring, or restrictions on facial recognition. Useful stuff. But nearly none of it addresses the legal status of an AI agent acting as an employer.
Who is liable when an AI agent posts a job with incorrect specifications and a human completes work based on bad instructions? Who enforces minimum wage protections when the employer is a software process with no registered business address? If an AI agent's hiring decisions show a pattern of demographic bias, who gets sued?
These aren't hypothetical edge cases. They're questions that will need answers within the next two to three years, and right now the answer in most jurisdictions is effectively "nobody knows."
Georgia is not uniquely behind on this. But Jones's state is home to a significant and growing tech sector, Delta's logistics operations, and a large population of workers in exactly the roles most likely to be absorbed into AI-managed workflows: data entry, customer research, quality verification, content moderation. The conversation he's having publicly is worth having. It just needs to get more specific.
While Politicians Talk, Infrastructure Gets Built
This isn't a criticism of lawmakers for being slow. Legislatures are supposed to be deliberate. The problem is that the gap between policy speed and technology speed has never been wider, and the people building the new infrastructure aren't waiting for the regulatory framework to catch up.
Human Pages is one example. The platform is designed around a specific bet: that AI agents will become significant buyers of human labor, and that the infrastructure to support that needs to exist before it becomes chaotic. Payment rails that work for software-initiated transactions. Job formats that an agent can write and a human can actually understand. Dispute resolution when the employer has no phone number.
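What a "job format that an agent can write and a human can actually understand" looks like is an open design question. One plausible shape is a structured payload whose fields are all plain language: rigid enough for software to emit and verify, readable enough for a worker to act on directly. The schema and field names below are a guess for illustration, not the platform's published format.

```python
import json

# Hypothetical job payload: every field is plain language a worker can
# read directly, but structured enough for an agent to generate and check.
job_payload = json.loads("""
{
  "task": "Call the 10 companies listed below posing as a prospective customer.",
  "steps": [
    "Follow the attached script on each call.",
    "Note the pricing quoted and anything that felt evasive."
  ],
  "deliverable": "One paragraph of notes per company, in the supplied template.",
  "deadline": "Thursday 17:00 ET",
  "payment": {"amount": 80, "currency": "USDC", "released_on": "approval"}
}
""")

REQUIRED = {"task", "steps", "deliverable", "deadline", "payment"}

def validate(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the job is postable."""
    problems = [f"missing field: {f}" for f in REQUIRED - payload.keys()]
    if not isinstance(payload.get("steps"), list) or not payload.get("steps"):
        problems.append("steps must be a non-empty list")
    pay = payload.get("payment") or {}
    if not isinstance(pay, dict) or pay.get("amount", 0) <= 0:
        problems.append("payment.amount must be positive")
    return problems

print(validate(job_payload))  # → []
```

The validation step is the part that matters: when the employer is software, malformed instructions can't be caught by a manager skimming the posting, so the format itself has to refuse bad jobs before a human ever sees them.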
This isn't altruism. It's a market observation. If AI agents are going to hire humans, someone has to build the place where that happens. Better to build it with worker usability in mind than to let it emerge as a chaotic byproduct of ten thousand different agent implementations all paying humans through different mechanisms.
The Question Worth Asking
Todd Jones is asking what government should do about AI and work. That's worth asking. But there's a more immediate question underneath it: when the entity posting the job is an AI agent, does the entire legal and social architecture built around employment still make sense?
Minimum wage law assumes a human employer who can be held accountable. Benefits mandates assume an ongoing employment relationship. Unemployment insurance assumes you can identify who let you go. None of those assumptions hold when the employer is a process running in a data center in Virginia.
The future of work debate keeps framing this as "will there be enough jobs." That's probably the wrong frame. The harder question is: when there are jobs, and humans are doing them, and the checks clear, and the employer is software, what exactly are the rules? And who writes them?
Lawmakers have time to get ahead of this. Not much time, but some. The infrastructure being built right now will be a lot harder to regulate once it's load-bearing.