DEV Community

Petter_Strale

Posted on • Originally published at strale.dev

I Scanned 10 Developer Tools for AI Agent-Readiness. Only One Passed.

Everyone's building AI agents. Nobody's building for them.

I've been working on agent integrations and kept running into the same problem: we say "AI agents can use APIs," but how many developer tools can an agent actually discover and interact with autonomously?

So I ran an agent-readiness audit on 10 well-known developer tools. The scanner checks 32 signals across 6 categories:

  • Discoverability — can agents find you?
  • Comprehension — can agents understand your API?
  • Usability — can agents interact with you?
  • Stability — can agents depend on you?
  • Agent Experience — what happens when an agent shows up?
  • Transactability — can agents do business with you?

Each category gets a tier: Ready, Partial, or Not Ready. To "pass," a tool needs at least half its categories at Ready.

One tool passed. Nine didn't.


The Results

Resend — 4/6 Ready ✅

The clear winner, and it wasn't close. Resend has an MCP endpoint at /.well-known/mcp.json, a public OpenAPI spec with 39 fully documented endpoints, Schema.org structured data on the homepage, and returns proper JSON errors with consistent structure.

This is what agent-readiness actually looks like: an agent can discover the API through protocol-based discovery (MCP), understand the full API surface through machine-readable specs (OpenAPI), and verify what the product does through structured data. No human intervention required at any step.
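That first step — protocol-based discovery — boils down to probing a well-known path and insisting on JSON. Here's a minimal sketch; the `fetch` parameter is injectable for testing, and the exact manifest contents are whatever the host publishes (the scanner only cares that the path exists and parses):

```python
import json
from urllib.request import Request, urlopen

def discover_mcp(base_url, fetch=None):
    """Probe the well-known MCP manifest path for a host.

    `fetch(url)` returns (status, body); by default it does a real
    HTTP GET. Returns the parsed manifest dict, or None when the
    path is missing or doesn't return JSON.
    """
    url = base_url.rstrip("/") + "/.well-known/mcp.json"
    if fetch is None:
        def fetch(u):
            req = Request(u, headers={"User-Agent": "agent-scanner/0.1"})
            with urlopen(req, timeout=10) as resp:
                return resp.status, resp.read().decode("utf-8")
    status, body = fetch(url)
    if status != 200:
        return None
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        return None  # e.g. an HTML error page served at the JSON path
```

Note that both failure modes below — a 404 at the path, or HTML where JSON should be — resolve to the same answer for an agent: undiscoverable.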

Stripe — 1/6 Ready (Stability)

The most surprising result on the list. Stripe invented the Agentic Commerce Protocol. They have a proper llms.txt file. They are, arguably, the most developer-friendly API on the internet.

But: no OpenAPI spec at standard discoverable paths, no MCP endpoint, no /.well-known/agent.json. Agent Experience scored Red. An agent hitting Stripe's /api endpoint gets an HTML page, not a machine-readable spec.

The company building the future of agent payments isn't agent-ready itself.

Vercel — 1/6 Ready (Stability)

Good fundamentals — changelog, status page, proper security headers. But the OpenAPI spec is behind a login wall, and there's no MCP endpoint. An agent would find the documentation but couldn't machine-read the API surface without authenticating first.

Postmark — 1/6 Ready (Discoverability)

Has structured data and llms.txt, which is already more than most. But the scanner got rate-limited (429) on half its checks — the pricing page, signup page, and several API paths all returned "Too Many Requests."

This is actually an interesting finding in itself: aggressive rate limiting without rate-limit headers means agents get blocked with no way to self-throttle. If your rate limiter doesn't include Retry-After or X-RateLimit-* headers, you're not just blocking abuse — you're blocking legitimate agent discovery.
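Self-throttling only works when the server gives the agent something to act on. A sketch of the decision an agent makes after a 429 — the header names follow common convention (`Retry-After` in seconds, `X-RateLimit-Reset` as seconds-until-reset), but individual APIs vary, which is exactly the problem:

```python
def backoff_seconds(headers, default=1.0):
    """Decide how long an agent should wait after a 429.

    Prefers Retry-After (seconds form), then falls back to
    X-RateLimit-Reset. With neither present, the agent can only
    guess — hence the blind `default`.
    """
    h = {k.lower(): v for k, v in headers.items()}
    retry_after = h.get("retry-after")
    if retry_after is not None:
        try:
            return max(float(retry_after), 0.0)
        except ValueError:
            pass  # HTTP-date form; a fuller client would parse it
    reset = h.get("x-ratelimit-reset")
    if reset is not None:
        try:
            return max(float(reset), 0.0)
        except ValueError:
            pass
    return default  # no hints at all: blind backoff
```

When the fallback branch is all an agent ever hits, "rate limited" and "blocked" are indistinguishable from its side.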

Clerk — 0/6 Ready

Has an impressive 2,395-line llms.txt — more content than most tools' entire documentation. But no OpenAPI spec at discoverable paths, no MCP, and the Terms of Service appear to prohibit automated access.

All that content investment is invisible to protocol-based agent discovery.

Neon — 0/6 Ready

The llms-full.txt is 32,588 lines — by far the largest in the set. But Comprehension scored Red because there's no OpenAPI spec at discoverable paths, and no structured data on the homepage.

This is the clearest example of a pattern I kept seeing: heavy investment in LLM-readable content while missing the machine-readable infrastructure that agents actually use for discovery.

Supabase — 0/6 Ready

This one genuinely surprised me. Supabase has an MCP server — I use it daily. But /.well-known/mcp.json returns 404. No structured data, no OpenAPI spec at standard paths, llms.txt exists but was flagged as basic. Discoverability scored Red.

The tools are there, but the front door isn't wired up for autonomous discovery. An agent using MCP protocol-based discovery would never find Supabase's MCP server.

Plaid — 0/6 Ready

Documentation is good, sandbox is available, but no MCP, no OpenAPI at standard paths. Pricing is behind a table with no structured data. An agent comparing fintech APIs programmatically wouldn't be able to include Plaid in its evaluation.

Twilio — 0/6 Ready

Both Discoverability and Agent Experience scored Red. The llms.txt exists but was too large to parse (body truncated). No MCP, no OpenAPI at discoverable paths. For a company that's been API-first for 15+ years, the agent layer is essentially absent.

SendGrid — 0/6 Ready

SendGrid is a Twilio product now and inherits its infrastructure, so the same limitations apply. The openapi.json path returns an HTML page instead of JSON — a particularly frustrating failure mode for an agent expecting structured data.
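That failure mode is cheap to guard against on the agent side. A sketch of the sanity check — both the Content-Type and the payload itself have to agree before you treat a response as a spec (this is my own defensive check, not something the scanner necessarily does):

```python
import json

def looks_like_spec(content_type, body):
    """Cheap sanity check before treating a response as an OpenAPI spec.

    True only when the payload both claims to be JSON and actually
    parses as a JSON object — catches an HTML page served at a
    spec path, even under a misleading Content-Type.
    """
    if "json" not in (content_type or "").lower():
        return False
    if body.lstrip().startswith("<"):
        return False  # HTML masquerading under a JSON content type
    try:
        return isinstance(json.loads(body), dict)
    except json.JSONDecodeError:
        return False
```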


What the Data Actually Shows

The gap isn't API quality. Every tool on this list has APIs that developers love. The gap is in the discovery and machine-readability layer — the difference between "a developer can read our docs" and "an agent can programmatically find, evaluate, and start using our API."

Three specific findings stood out:

Almost nobody has /.well-known/mcp.json

Only Resend. This is the standard path for MCP protocol-based discovery — how an agent using MCP finds your server endpoint. Without it, your MCP server might exist, but agents following the protocol can't discover it.

llms.txt adoption is strong — but insufficient

8 out of 10 tools had an llms.txt file. That's encouraging adoption. But llms.txt solves a different problem than agent discovery. It helps LLMs understand what your product does. It doesn't help an agent using MCP or A2A protocol-based discovery find your API.

An agent browsing /.well-known/mcp.json endpoints won't read your llms.txt. Different protocols, different discovery mechanisms.

Size of llms.txt doesn't correlate with readiness

This was the most counterintuitive finding. Neon's 32,588-line file didn't help because the structured infrastructure around it was missing. Resend's much smaller file worked because it had OpenAPI, MCP, and structured data backing it up.

The lesson: llms.txt is valuable when it sits on top of solid machine-readable infrastructure. It's not valuable as a replacement for that infrastructure.


What This Means for Builders

If you're building agents that need to autonomously discover and evaluate APIs, you're going to hit walls everywhere. The infrastructure layer that MCP, A2A, and x402 assume — machine-readable discovery, structured pricing, programmatic auth documentation — barely exists yet.

Build your agents to handle graceful degradation, because most APIs are still built for human developers browsing documentation pages, not autonomous agents making programmatic decisions.
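In practice, graceful degradation means a fallback chain: try the protocol path first, then the common conventions, and only then give up to human-oriented docs. A minimal sketch — the path list reflects the conventions discussed above, and `fetch` is injected so the chain is testable:

```python
WELL_KNOWN_PATHS = [
    "/.well-known/mcp.json",  # MCP protocol-based discovery
    "/openapi.json",          # common OpenAPI spec location
    "/llms.txt",              # LLM-readable description (last resort)
]

def discover(base_url, fetch):
    """Walk a fallback chain of discovery paths.

    `fetch(url)` returns (status, body). Returns (path, body) for
    the first path answering 200 with content, or (None, None) when
    every rung fails, so the caller can degrade to scraping docs.
    """
    for path in WELL_KNOWN_PATHS:
        status, body = fetch(base_url.rstrip("/") + path)
        if status == 200 and body:
            return path, body
    return None, None
```

Against the tools in this scan, most hosts would fall straight through to `/llms.txt` — or out the bottom of the chain entirely.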

And if you're building an API: the bar for agent-readiness is lower than you think. Resend passed with straightforward infrastructure — a public OpenAPI spec, an MCP endpoint at the standard path, and structured data on the homepage. No exotic technology. Just the basics, done correctly.


The scanner is free at scan.strale.io if you want to check your own stack. Happy to share the full JSON scan reports for any of these tools — just ask in the comments.
