DEV Community

Muhammad Luqman Qamar
The Internet Has a "Body" Problem: Why AI Agents Are Breaking the Web

Earlier this year, I set out to build an AI tool designed to help people apply for jobs faster and more effectively. In a way, I succeeded and failed at the exact same time.

I succeeded because I learned how to build a production-ready app using Claude. I created a tool that could read thousands of job listings, match them to a user’s skills, and write perfect cover letters in seconds. But I failed because, despite all that "intelligence," my tool couldn't actually apply to a single job.

Why? Because the internet wasn't built for software; it was built for people with bodies.

1. The Web Assumes You Are Human

Every time you encounter a CAPTCHA (those "click all the traffic lights" boxes), an SMS verification code, or an email login link, you are hitting a wall designed to stop bots.

For decades, the internet’s security stack has been built on one unspoken rule: Users must be physical beings. This creates a massive divide in how the web works today:

The Read Layer: AI agents can roam here freely. They can scrape data, read articles, and analyze job boards.

The Write Layer: This is gated. To actually do something—like submit an application, buy a ticket, or sign a contract—you have to prove you have a thumb to click a button or a phone to receive a text.

If an AI can analyze every job on Earth but can’t click "Submit," it isn't "intelligent" yet. It’s just a very advanced surveillance tool.

2. The "AI Doom Loop" in Hiring

Instead of fixing this fundamental gap, the market has created workarounds. Companies like AIApply or EnhanceCV allow job seekers to blast out hundreds of tailored applications every week.

On the surface, this sounds great for the applicant. But it has created a disaster for the hiring world, often called the AI Doom Loop:

Applicants use AI to mass-apply to 150 jobs a week.

Recruiters get drowned in thousands of AI-generated resumes that all look perfect and sound the same.

Companies use their own AI to mass-filter those resumes.

Trust collapses. Recruiters are overwhelmed, and job seekers feel like their applications are disappearing into a black hole.

We are currently using AI to "brute-force" human forms. The agent pretends to be a person, and the form pretends to talk to a person. It’s a game of shadows where nobody wins, and the cost of hiring keeps going up.

3. Why AI Agents Break Business Models

It’s not just security that’s broken; it’s the way we pay for things. Most software-as-a-service (SaaS) companies use flat-rate subscriptions (e.g., $20/month).

This model assumes "one body = one predictable amount of work." A human user asks a few questions, gets tired, and goes to lunch. But an AI agent doesn't sleep. A single agent can make 2,000 requests overnight while its owner is snoring.

When you remove the physical body from the equation, the economics of "per-user" pricing collapse. Heavy agent users exploit the flat rate, while light human users end up subsidizing them. The "seat license" model is a relic of a human-only world.

4. The Solution: Building for Machines, Not Bodies

If we want AI agents to actually be useful, we have to stop forcing them to act like humans. We need a new kind of infrastructure built on three pillars:

A. Verifiable Identity
We don’t need more "bot detectors." We need a way for an agent to say: "I am a legitimate recruiting agent authorized by Jane Doe, and here is my cryptographic proof." We need a standard for agents to talk to other agents (A2A) without pretending to be a person clicking a mouse.
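As a minimal sketch of the idea (all names are illustrative, and an HMAC with a shared secret stands in for the public-key signatures a real deployment would use, e.g. Ed25519 keys registered with the relying service), an agent could attach a signed, verifiable claim to each request:

```python
import hashlib
import hmac
import json
import time

# Placeholder credential; in practice the verifier would hold the agent's
# public key, not a shared secret.
SECRET = b"agent-registry-shared-secret"

def sign_claim(agent_id: str, principal: str, action: str) -> dict:
    # Build the claim: who the agent is, who authorized it, what it wants to do.
    claim = {"agent": agent_id, "on_behalf_of": principal,
             "action": action, "issued_at": int(time.time())}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(claim: dict) -> bool:
    # Recompute the signature over the claim body and compare in constant time.
    sig = claim.pop("signature")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    claim["signature"] = sig
    return hmac.compare_digest(sig, expected)

claim = sign_claim("recruiting-agent-01", "Jane Doe", "submit_application")
print(verify_claim(claim))  # True
```

The point is not the specific crypto: it is that the receiving service checks a proof instead of a CAPTCHA, so a legitimate agent never has to impersonate a human.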

B. Outcome-Based Economics
In an agent-led world, "charging per seat" makes no sense. The value moves from access to outcomes. Instead of paying for a monthly login, companies should pay when an agent actually delivers a result—like a confirmed hire or a completed transaction.
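A toy comparison makes the mismatch concrete (the function names, the $20 seat price, and the per-hire fee are all illustrative, not drawn from any real product):

```python
# Hypothetical billing sketch: flat per-seat pricing vs. outcome-based pricing.
SEAT_PRICE = 20.0      # flat monthly fee per "user"
FEE_PER_HIRE = 500.0   # charged only when the agent delivers a confirmed hire

def seat_revenue(requests_made: int) -> float:
    # Revenue is flat regardless of how many requests the agent fires off.
    return SEAT_PRICE

def outcome_revenue(confirmed_hires: int) -> float:
    # Revenue tracks delivered value, not traffic.
    return confirmed_hires * FEE_PER_HIRE

# An agent making 2,000 overnight requests pays the same $20 under a seat
# model whether it delivers zero hires or two.
print(seat_revenue(2000))     # 20.0
print(outcome_revenue(0))     # 0.0
print(outcome_revenue(2))     # 1000.0
```

Under the seat model, the vendor's costs scale with requests while revenue stays flat; under the outcome model, both sides are paid and charged in the same unit the customer actually cares about.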

C. Machine-Readable Interfaces
Instead of making AI "read" a website designed for human eyes, we need machine-readable schemas. We need protocols where a job posting isn't a wall of text, but a structured data file that an agent can understand instantly and accurately.
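Here is one possible shape for such a schema (the field names and endpoint are illustrative; an existing vocabulary like schema.org's JobPosting already defines many of these fields, and a real protocol would standardize the application endpoint too):

```python
import json
from dataclasses import asdict, dataclass

# Hypothetical machine-readable job posting: structured fields an agent can
# parse directly, instead of a wall of HTML meant for human eyes.
@dataclass
class JobPosting:
    title: str
    company: str
    skills_required: list
    salary_min: int
    salary_max: int
    apply_endpoint: str  # where an agent submits a structured application

posting = JobPosting(
    title="Backend Engineer",
    company="ExampleCorp",
    skills_required=["python", "postgres"],
    salary_min=90000,
    salary_max=120000,
    apply_endpoint="https://example.com/api/apply",
)

# The agent consumes this as data -- no scraping, no guessing at form fields.
print(json.dumps(asdict(posting), indent=2))
```

With postings published this way, "matching a job to a user's skills" becomes a comparison over structured fields rather than a fragile text-extraction exercise.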

5. Looking Ahead

The uncomfortable truth is that 97% of enterprises expect a major AI agent security incident this year. This isn't because AI is "evil," but because we are trying to shove non-human participants through human-shaped pipes.

The future isn't about better "auto-apply" tools or smarter resume scanners. Those are just Band-Aids on a broken system. The real work is building the foundational layer where:

Forms are replaced by data protocols.

SMS codes are replaced by digital signatures.

Subscriptions are replaced by transaction fees.

The internet isn't ready for agents yet, and neither are our business models. The question for builders is simple: Are you building workarounds for the old world, or are you building the infrastructure for the new one?

About the Author: I’m building open-protocol infrastructure for agent-to-agent transactions. If you're working on agent identity or new economic models for AI, let’s connect at adnx.ai.
