Earlier this year I spent weeks building an AI tool to help myself and others apply for jobs faster and more successfully. I failed and succeeded at the same time.
Failed — I didn't land a job myself. Succeeded — I learned to use Claude, shipped a production-ready SaaS app with AI features, and understood something fundamental about why this space is broken.
The tools one person can build in a few sessions can read and parse thousands of job listings, match requirements, tailor resumes, and draft cover letters. But half the job boards blocked them outright. The other half demanded CAPTCHAs, SMS codes, and email verification loops.
That's why workarounds like browser plugins exist: to circumvent the bot-protection layers meant to make sure you're human.
The tools I built couldn't apply to a single job.
That's not a bug. That's the architecture. And then it clicked.
The web assumes you have a body
The entire internet authentication stack — from email verification to CAPTCHA to SMS codes — was designed around one assumption: users are physical beings. It was so obvious nobody questioned it. Every platform converged on the same pattern independently, pushed by bot spam, and accidentally created a hard boundary between entities that have bodies and entities that don't.
The result is a web split into two layers: an open read layer where agents roam freely, and a gated write layer that demands proof of physical existence to participate.
This matters more than it sounds.
If you're building anything that involves agents doing real work — not just answering questions, but negotiating, transacting, committing — you hit this wall fast. Your agent can analyze every job posting on the internet but can't apply to a single one. It can evaluate every talent profile but can't schedule an interview.
Read access without write access isn't intelligence. It's surveillance.
The "fix" that makes everything worse
You'd think the market would respond by solving this infrastructure gap. Instead, it responded with workarounds.
Tools like AIApply, EnhanceCV, and a dozen others now let job seekers auto-generate "tailored" resumes and blast hundreds of applications per week. AIApply claims 1.1 million users and 150+ applications per person per week. LinkedIn application volume spiked 45% in the past year alone.
I understand why these tools exist. The application process is broken, and people are frustrated. But here's what actually happened: instead of fixing the infrastructure, we gave humans AI-powered tools to brute-force through human-shaped forms faster. The agent still pretends to be a person. The form still pretends it's talking to one.
The result on the demand side is brutal. Recruiters are drowning in AI-generated boilerplate. Every resume looks the same. Every cover letter hits the same keywords. Cost-per-hire is going up. Time-to-hire is going up. The Greenhouse CEO called it an "AI doom loop" — applicants use AI to mass-apply, companies use AI to mass-filter, both sides get worse outcomes, and trust craters. Only 8% of job seekers believe AI screening is fair. 42% who lost trust in hiring blame AI directly.
These tools aren't solving the problem. They're making "AI in hiring" taste bad for the entire demand side. And the root cause is the same: they're operating ON broken infrastructure instead of replacing it.
The same structural problem breaks pricing
Here's where it gets interesting. The identity problem and the pricing problem share the same root cause: infrastructure designed around human assumptions.
Take flat-rate AI subscriptions. They were built for human interaction patterns — a person asks a few questions, gets a few answers, goes to lunch. A heavy user might make 20-50 API calls a day.
Then agents showed up. Background agents routinely hit 500-2,000+ calls overnight. They don't take lunch breaks. They don't get tired. They don't have the natural throttle that a human body provides.
The subscription model worked when usage correlated with having a body. One body ≈ one predictable usage pattern. Remove the body, the economics collapse. You get adverse selection: the heaviest agent users exploit flat pricing while moderate users subsidize them.
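The adverse-selection math is easy to sketch. Here's a toy model with illustrative numbers (a $20/month flat plan, $0.01 marginal cost per call; both figures are assumptions, not any real provider's costs):

```python
# Toy model of flat-rate pricing under agent workloads.
# All numbers are illustrative assumptions, not real provider economics.

FLAT_PRICE = 20.00        # $/month subscription
COST_PER_CALL = 0.01      # $ marginal cost per API call
DAYS = 30

def monthly_margin(calls_per_day: float) -> float:
    """Provider margin for one subscriber at a given usage level."""
    return FLAT_PRICE - calls_per_day * DAYS * COST_PER_CALL

human = monthly_margin(35)      # a heavy human user: ~20-50 calls/day
agent = monthly_margin(1000)    # a background agent running overnight

print(f"human user margin: ${human:+.2f}")   # +$9.50
print(f"agent user margin: ${agent:+.2f}")   # -$280.00
```

The flat price only clears when the average subscriber looks like the human. One agent-driven account wipes out the margin of roughly thirty human ones, so the rational move for agent operators is to pile into flat plans while everyone else subsidizes them.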
This is the same fault line. Human-shaped infrastructure meets non-human participants, and the mismatch isn't cosmetic — it's structural.
What this means if you're building in hiring
I work on agent infrastructure for hiring, so let me make this concrete.
The hiring industry runs on systems designed for humans clicking through interfaces. Every ATS, every job board, every screening tool assumes a person is on the other end making decisions at human speed. The entire workflow — post a job, collect applications, screen resumes, schedule interviews, negotiate offers — is designed around human attention as the bottleneck.
The auto-apply tools tried to speed up the human side. That just moved the bottleneck — now you've got 10x the volume with the same filtering infrastructure. When agents enter this workflow properly, two things need to work from the ground up:
Identity. A recruiting agent evaluating talent needs to be verifiable. Not "verified by a human who owns it" — verifiable on its own terms. The employer's agent needs to know it's negotiating with a legitimate recruiting agent, not a spam bot that scraped a resume database. Today, there's no standard way to do this. Every integration is bespoke, every trust relationship is ad hoc.
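No standard exists yet, but the shape of the missing piece is roughly this: the agent presents a signed claim about what it's authorized to do, and the other side can check that claim for tampering without a bespoke integration. A minimal sketch, using stdlib HMAC as a stand-in for real public-key verifiable credentials (every field name here is hypothetical):

```python
import hmac
import hashlib
import json

# Stand-in for a registry's signing key. A real scheme would use
# public-key signatures (e.g. Ed25519) so verifiers never hold a secret.
REGISTRY_KEY = b"demo-registry-secret"

def issue_credential(agent_id: str, scopes: list[str]) -> dict:
    """Registry signs a claim about what the agent is authorized to do."""
    claim = {"agent_id": agent_id, "scopes": scopes}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_credential(cred: dict) -> bool:
    """Check that the claim wasn't forged or altered after issuance."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("recruiter-agent-7",
                        ["evaluate_talent", "schedule_interview"])
assert verify_credential(cred)

cred["claim"]["scopes"].append("negotiate_offer")  # tampering...
assert not verify_credential(cred)                 # ...is detected
```

The point isn't the crypto; it's that authorization travels with the agent as structured, checkable data instead of being re-established ad hoc in every integration.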
Economics. If an agent can process 1,000 talent evaluations in the time a recruiter handles 10, you can't charge per-seat. The unit of value shifts from "access" to "outcome." The business model that makes sense is transactional — you pay when an agent actually delivers a confirmed hire, not for the right to log in.
These aren't separate problems. They're the same problem: your infrastructure was built for bodies.
The uncomfortable next question
Most of the AI-in-hiring discourse right now focuses on the application layer. Better auto-apply tools. Smarter resume scanners. Faster interview scheduling bots.
The harder question is: should agents be filling out human forms at all?
Agent identity isn't a feature request. It's a prerequisite. Without verifiable agent credentials — not just "who owns this agent" but "what is this agent authorized to do, what has it done before, how did it perform" — you can't have agent-to-agent transactions that anyone trusts. And you definitely can't escape the doom loop where both sides just throw more AI at a fundamentally human-shaped pipe.
97% of enterprises expect a major AI agent security incident this year. Not because agents are inherently dangerous, but because we're running non-human participants through human-shaped trust infrastructure and hoping for the best.
Where this is heading
The fix isn't smarter workarounds. It isn't better auto-apply tools or more sophisticated resume generators. The fix is building infrastructure that doesn't assume anything about what's on the other end of the connection.
Machine-readable schemas instead of human-readable forms. Verifiable credentials instead of SMS codes. Transaction-based pricing instead of seat licenses. Immutable audit trails instead of trust-on-first-use.
Some of this already exists in pieces. MCP gives agents a standard way to connect to tools. A2A gives agents a way to talk to each other. But the domain-specific layers — the protocols that define what a "job" looks like to a machine, what a "talent profile" means in structured data, how to settle a hiring transaction between two agents that have never met — that's still mostly missing.
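To make "what a job looks like to a machine" concrete, here's a hypothetical structured posting. None of these field names come from an existing standard; they only illustrate the kind of schema a hiring protocol would have to pin down:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical machine-readable job posting. Field names are
# assumptions for illustration, not part of any published spec.

@dataclass
class JobPosting:
    job_id: str
    title: str
    required_skills: list[str]
    salary_range: tuple[int, int]   # annual min/max
    currency: str = "USD"
    remote: bool = False

posting = JobPosting(
    job_id="job-001",
    title="Backend Engineer",
    required_skills=["python", "postgres"],
    salary_range=(120_000, 150_000),
    remote=True,
)

# An agent consumes this as structured data, not as an HTML form:
print(json.dumps(asdict(posting)))
```

Once both sides agree on a schema like this, "applying" stops being form-filling and becomes an exchange of typed data that either side can validate, filter, and settle on programmatically.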
That's what I'm working on. Not the application layer. Not another tool that helps agents pretend to be humans faster. The foundational layer that makes pretending unnecessary.
The internet isn't ready for agents. The business models aren't ready for agents. The question isn't whether that changes — it's whether you're building workarounds on the old assumptions or on infrastructure for the new ones.
--
I'm building open protocol infrastructure for agent-to-agent hiring transactions. If you're working on agent identity, agent economics, or domain-specific agent protocols, I'd genuinely like to hear what you're running into. Find me on LinkedIn or check out adnx.ai.