Anthropic launched "Managed Agents" yesterday, so I built one.
The agent scrapes career pages autonomously: point it at a URL (I tried Personio, Stripe, and Vercel) and it starts the job, parses the listings, and converts them into OJP (Open Job Protocol), a structured JSON schema designed for machines, not humans.
Stripe alone had 100+ listings across 9 pages. Vercel had ~75. I ran 40 through the full pipeline: access --> scrape --> parse --> convert --> validate --> expose on adnx.ai
Total cost: over $5 in LLM inference. That's roughly $0.13 per job just to turn unstructured HTML into a structured format (using Sonnet 4.6).
That's too expensive. I'll get into why that number matters. But first — the part that surprised me.
The whole thing took about 2 hours
Not because I'm fast. Because the tools are commoditized now.
Claude did the setup and debug work. I scoped what I wanted.
Claude Managed Agents are priced at $0.08 per session-hour. MCP is supported by Claude, ChatGPT, Copilot, Cursor, LangChain — basically everything. You can spin up an agent that connects to real tools, works with real data, and does real work in an afternoon. Not a demo. Not a mockup. A working system.
Two years ago, building something like this would've been a multi-week project. Custom integrations, bespoke tooling, glue code everywhere. Now it's an MCP server, a prompt, and a few hours of iteration.
Two months ago, this would still require a lot of back and forth, patching tooling and infrastructure.
The infrastructure to build agents isn't the bottleneck anymore. The question has shifted from "can you build an agent" to "what does your agent actually operate on and do?"
What this agent does
It hits career pages, whatever URL you point it at — extracts listings, and converts each one into an OJP object.
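The shape of that pipeline is simple enough to sketch. This is a hypothetical outline, not the actual agent's code: `scrape`, `convert`, and `validate` are stand-ins for the real MCP tools and LLM calls.

```python
# Hypothetical sketch of the pipeline: access -> scrape -> parse ->
# convert -> validate. All helpers here are placeholder stand-ins.
import hashlib
from dataclasses import dataclass


@dataclass
class OJPJob:
    ojp_id: str
    title: str
    org: str


def scrape(url: str) -> list[str]:
    # Stand-in for the real scraping/parsing step; returns raw listing text.
    return [f"Backend Engineer at ExampleCo ({url})"]


def convert(raw: str) -> OJPJob:
    # Stand-in for the LLM conversion step (the expensive part at ~$0.13/job).
    return OJPJob(
        ojp_id=hashlib.sha256(raw.encode()).hexdigest()[:8],
        title=raw.split(" at ")[0],
        org="ExampleCo",
    )


def validate(job: OJPJob) -> bool:
    # Stand-in for OJP schema validation.
    return bool(job.ojp_id and job.title and job.org)


def run_pipeline(url: str) -> list[OJPJob]:
    jobs = [convert(raw) for raw in scrape(url)]
    return [j for j in jobs if validate(j)]
```

The key property is that the LLM sits in exactly one step, `convert`; everything before and after it is deterministic code.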
Here's what a run with a batch of 10 looked like:
| # | Title | Org | OJP ID | Validation |
|---|---|---|---|---|
| 1 | Backend Engineer, Core Technology | Stripe | a3f7e1c2… | ✅ VALID |
| 2 | AI/ML Eng. Manager, Payment Intelligence | Stripe | b4e8f2d3… | ✅ VALID |
| 3 | Android Engineer, Terminal | Stripe | c5f9a3e4… | ✅ VALID |
| 4 | Backend / API Engineer, Billing | Stripe | d6a0b4f5… | ✅ VALID |
| 5 | Account Executive, AI Sales | Stripe | e7b1c5a6… | ✅ VALID |
| 6 | Software Engineer, AI SDK | Vercel | f8c2d6b7… | ✅ VALID |
| 7 | Software Engineer, CDN | Vercel | a9d3e7c8… | ✅ VALID |
| 8 | Engineering Manager, AI Gateway | Vercel | b0e4f8d9… | ✅ VALID |
| 9 | Senior Product Security Engineer | Vercel | c1f5a9e0… | ✅ VALID |
| 10 | Software Engineer, Agent | Vercel | d2a6b0f1… | ✅ VALID |
10 out of 10 valid. Each one is a structured OJP object — requirements, compensation, location, skills, work authorization, remote policy, about 30 fields that matter for matching.
Not a scraped blob of text. A machine-first representation of a job that any agent speaking OJP can evaluate against a talent profile, run constraint matching on, and make decisions with — without burning tokens on parsing and interpretation every time.
The difference between handing an agent a PDF and handing it a structured API response. One requires understanding. The other requires a schema lookup.
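To make that concrete, here is what one such object might look like. The field names below are illustrative guesses at the shape of an OJP object, not the actual published schema; check the OJP spec for the real field list.

```json
{
  "ojp_id": "a3f7e1c2…",
  "title": "Backend Engineer, Core Technology",
  "org": "Stripe",
  "location": "Dublin",
  "remote_policy": "hybrid",
  "work_authorization": ["EU"],
  "skills": ["Java", "distributed systems"],
  "compensation": { "min": 120000, "max": 160000, "currency": "EUR" }
}
```

An agent evaluating this against a talent profile does field lookups and constraint checks, not free-text interpretation.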
How to retrieve them?
```
POST https://sandbox.adnx.ai/api/v1/jobs/inquire
{ "ojp_id": "a3f7e1c2…" }
```
Try it on the sandbox via docs.adnx.ai
You have to seed some OJP listings yourself (WIP).
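A minimal client for that endpoint might look like this, using only the Python standard library. The endpoint path is from the docs above; the `send=False` switch and the response shape are my additions for offline testing, and error handling is deliberately omitted.

```python
# Minimal sketch of calling the sandbox inquire endpoint.
import json
from urllib import request


def inquire(ojp_id: str, send: bool = True) -> dict:
    payload = json.dumps({"ojp_id": ojp_id}).encode()
    req = request.Request(
        "https://sandbox.adnx.ai/api/v1/jobs/inquire",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    if not send:
        # Return the constructed request instead of sending it,
        # so the call can be inspected without network access.
        return {"url": req.full_url, "body": json.loads(payload)}
    with request.urlopen(req) as resp:
        return json.load(resp)
```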
The cost problem is real, and it's expensive at scale
$5 for 40 jobs. At that rate, processing 10,000 listings costs $1,250. That's just the conversion step — before any matching, ranking, or agent-to-agent negotiation happens.
Job posts are duplicated across existing job platforms, so the real volume, and the real cost, is easily 3-5x that.
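The back-of-envelope math from my run, written out:

```python
# Cost math from the run above: $5 for 40 conversions.
cost_per_job = 5.00 / 40            # ~$0.125 per conversion
at_10k = cost_per_job * 10_000      # conversion cost for 10,000 listings
with_duplicates = at_10k * 4        # cross-posting inflates volume 3-5x
print(cost_per_job, at_10k, with_duplicates)  # 0.125 1250.0 5000.0
```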
The irony: most of the cost comes from the LLM interpreting and parsing unstructured HTML. The agent spends tokens figuring out what's a salary range, what's a requirement, and what's marketing copy; it has to reason over every listing.
If those listings were already in OJP, the conversion step wouldn't exist.
For downstream agents, the cost drops to near zero, and it gets cheaper at scale with batch API calls.
This is the token-economics argument for domain-specific protocols in miniature: an exchange optimized for agents.
Every agent in the hiring space is independently burning tokens to parse the same unstructured data into roughly the same structured representation. Multiply that across 100+ AI recruiting startups, each processing millions of listings, and you're looking at an industry-wide waste of compute that structured schemas would eliminate.
The research says structured context reduces token usage by 60-90% vs. unstructured input. My $5 for 40 jobs is a data point in that range. The conversion is the tax. The protocol is what eliminates it.
Who wins? Big-AI. Who pays? Employers or your margin!
Why I'm sharing this
Two reasons.
One: this is a proof of concept anyone can replicate. OJP is MIT-licensed. The tools to build this kind of agent are free or nearly free. You don't need to integrate into anything. You don't need API partnerships or sandbox access or a contract with an ATS vendor. You spin up an MCP server, point an agent at job data, and start converting.
No technical debt. No vendor lock-in. No integration overhead. Just a protocol and an agent.
That's the point of building at the protocol layer. When the tools are commoditized, the protocol becomes the leverage. Anyone can build on it. The value isn't in controlling the agent — it's in what the agents agree to speak.
Two: I want to be honest about what's not working yet. $0.13 per job conversion is not viable at scale. The protocol itself is solid — structured, evaluable, machine-first. But the bridge from the current unstructured world to the protocol world is expensive. That bridge cost will come down as models get cheaper and as more data originates in structured formats. But right now, it's the real bottleneck.
If you're building in this space, that's the gap worth looking at. Not "how do I build an agent" — that's solved. But "how do I get structured domain data into my agent's hands without burning a dollar per transaction on interpretation."
The bigger picture
The agent infrastructure is ready. MCP is everywhere. A2A is gaining traction. Claude Managed Agents just made deployment a one-liner. The frameworks, the runtimes, the coordination protocols — they exist and they work.
What's missing is the domain layer. The structured schemas that define what a "job" or a "talent profile" looks like in machine-first format. The transaction protocols that let agents negotiate, settle, and audit hiring decisions. The compliance infrastructure that satisfies EU AI Act requirements (enforcement starts August 2, 2026 — hiring is explicitly classified as high-risk AI) without bolting on governance after the fact.
That's what I'm working on. Not another hiring tool. The protocol layer underneath all of them.
Yesterday's agent is a proof point: with commodity tools and an open protocol, you can go from zero to a working system with real data in an afternoon. The cost curve is the remaining problem. But the architecture is right.
I'm building open protocol infrastructure for agent-to-agent hiring transactions (OTP/OJP, MIT-licensed). The agent I built yesterday is rough and the economics don't work yet — but it's real data, real protocol output, and zero integration debt. If you're working on similar problems in any domain, I'd like to hear what you're running into. LinkedIn | adnx.ai