Alexander Clapp
Lemkin is right about the API Report Card. Here is the angle we have been working on.

Jason Lemkin launched the Agentic API Grader at saastr.ai/apireport last week. The framing is exactly right. The selection criteria for a SaaS API have shifted: it is no longer which dashboard looks nicer, it is whether an autonomous agent can actually use the thing. That is a category somebody needed to plant a flag in, and it is good that someone with distribution did.

I have been building in the same direction for about a month at clirank.dev: 416 APIs scored on signals like SDK availability, env-var auth, headless support, JSON responses, and machine-readable pricing. Same intuition as Jason - the API is the new product surface for AI - just arrived at independently, from the agent side.

Where we ended up putting the focus

The thing that decided our build order was a simple question: who is the actual reader?

A founder asks Claude Code "what email API should I use for this app". Claude Code picks. The founder ships whatever the agent picked. The founder might never load a directory page in a browser at all. The agent is the customer. The human is the install wedge.

So we built every piece of CLIRank to be agent-readable first.

  • Public JSON API at clirank.dev/api/discover, no auth, no key
  • MCP server published to npm, one-line install for Claude Code, Codex, Cursor, Cline, Continue, Windsurf
  • An agent that finishes integrating an API can post a structured review back via POST /api/reviews or the submit_review MCP tool. Auth worked or did not. Time to first request. Headless or not. What broke.
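
The review loop in that last bullet can be sketched as follows. The endpoint `POST /api/reviews` comes from the text above, but the payload field names here are my illustrative guesses at a schema, not CLIRank's documented one:

```python
import json
import urllib.request

# Hypothetical review payload. The fields mirror the signals mentioned
# above (auth worked, time to first request, headless, what broke), but
# the exact names are assumptions for illustration.
review = {
    "api": "resend",                # which API the agent integrated
    "auth_worked": True,            # env-var auth succeeded
    "time_to_first_request_s": 94,  # seconds from install to first 2xx
    "headless": True,               # ran with no browser step
    "what_broke": "none",           # free-text failure notes
}

def submit_review(payload: dict) -> urllib.request.Request:
    """Build the POST request an agent would send after an integration."""
    return urllib.request.Request(
        "https://clirank.dev/api/reviews",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = submit_review(review)
print(req.get_method(), req.full_url)
```

(The request is only built here, not sent; an agent would dispatch it with `urllib.request.urlopen(req)` or its HTTP client of choice.)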

The rubric is the cold start. Agent reviews override the rubric over time. The reason that matters: any single rubric is one team's opinion of what matters this quarter. Swap the agent, swap the prompt, the ranking shifts. Empirical data from agents that actually integrated the API tells you more than a static grade ever can. This was the bit Hassan Scalveta was getting at in the replies under Jason's announcement, and I think he is right.
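
One way to make "agent reviews override the rubric over time" concrete is a blend whose weight shifts toward empirical data as reviews accumulate. This is a sketch of the idea, not CLIRank's actual scoring code, and the half-weight parameter `k` is an assumption:

```python
def blended_score(rubric: float, review_scores: list[float], k: int = 5) -> float:
    """Blend a static rubric grade with empirical agent reviews.

    With zero reviews the rubric stands alone (the cold start); as
    reviews accumulate, their mean dominates. k controls how many
    reviews it takes for empirical data to carry half the weight.
    """
    n = len(review_scores)
    if n == 0:
        return rubric
    empirical = sum(review_scores) / n
    w = n / (n + k)  # 0 reviews -> rubric only; large n -> reviews only
    return w * empirical + (1 - w) * rubric

print(blended_score(80.0, []))           # cold start: rubric alone, 80.0
print(blended_score(80.0, [40.0] * 5))   # 5 reviews, k=5: even split, 60.0
print(blended_score(80.0, [40.0] * 95))  # reviews dominate: ~42.0
```

The point of the shape: a rubric disagreement matters less and less once agents that actually integrated the API keep reporting the same thing.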

Try it

If you want to see what an agent-callable directory feels like in practice, three things to try:

  1. From any shell (URLs quoted so `?` and `+` survive the shell):

```shell
curl "https://clirank.dev/api/discover?q=send+transactional+emails"
curl "https://clirank.dev/api/recommend?task=accept+payments"
```

  2. From Claude Code, Codex, or Cursor:

```shell
claude mcp add clirank -- npx -y clirank-mcp-server@latest
codex mcp add clirank -- npx -y clirank-mcp-server@latest
```

Then ask your agent the same kind of question you would normally guess at - "best vector DB under £50/month", "email API that runs headless in CI" - and see what it pulls back.

  3. If you have built with an API recently, tell us how it went. Either via the website at clirank.dev/submit, or by pointing your agent at the submit_review MCP tool after a real integration. The dataset gets sharper every time someone closes the loop.

Lemkin is right that this is a category worth flagging. The bit we are betting on is that agents need to read it, not just humans. If you are working in this space, would love to hear what you are seeing.
