DEV Community

CallmeMiho

Posted on • Originally published at fmtdev.dev

Stop Building REST APIs for AI Agents (Use JSON-RPC)

Hey DEV community, CallmeMiho here. I’ve been auditing backend architectures all week, and the amount of money being burned on "AI integration" is staggering. If you are still trying to force your AI agents to navigate hypermedia-driven REST APIs, you aren't an architect; you're a museum curator. Let's talk about why the entire industry is moving back to a 20-year-old protocol.


We spent the last decade arguing over REST maturity models and GraphQL schema stitching, only to find out that the autonomous agents of 2026 don't care about HTTP verbs. They care about executing functions.

This is why JSON-RPC 2.0—a protocol originally defined in the mid-2000s—has experienced a massive renaissance. It is the engine driving modern AI agent tool calling and the foundational layer of the Model Context Protocol (MCP). It isn’t glamorous. It’s just brutally effective.

Why do AI Agents use JSON-RPC instead of REST?

Traditional REST APIs are resource-oriented (Nouns). If you want to update a user, you send a PUT request to /users/123.

AI agents, however, are action-oriented (Verbs). When an LLM like Claude or ChatGPT needs to query a database or search the web, it doesn't want to navigate a complex URL structure. It just wants to call a method like read_file and pass arguments. JSON-RPC maps perfectly to this paradigm.

| Feature | JSON-RPC 2.0 | REST | GraphQL |
| --- | --- | --- | --- |
| Paradigm | Action-Oriented (Verbs) | Resource-Oriented (Nouns) | Query-Oriented (Graph) |
| Overhead | Extremely Lightweight | High (Headers/Paths) | Very High (AST Parsing) |
| Ideal for AI | 🟢 Perfect (Maps to Tool Calling) | 🟡 Usable but bloated | 🔴 Too complex for agents |
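To make the verb-oriented mapping concrete, here is a minimal TypeScript sketch of a JSON-RPC 2.0 dispatcher: one lookup table from method names to functions, no routes and no HTTP verbs. The `read_file` handler is a stub for illustration, not a real tool implementation.

```typescript
// Shape of a JSON-RPC 2.0 request and the two possible response forms.
type RpcRequest = {
  jsonrpc: "2.0";
  method: string;
  params?: Record<string, unknown>;
  id?: string | number | null;
};

type RpcResponse =
  | { jsonrpc: "2.0"; result: unknown; id: string | number | null }
  | { jsonrpc: "2.0"; error: { code: number; message: string }; id: string | number | null };

// Illustrative tool registry: each "tool" the agent can call is just a function.
const methods: Record<string, (params: Record<string, unknown>) => unknown> = {
  read_file: (params) => `contents of ${params.path}`, // stub for illustration
};

function dispatch(req: RpcRequest): RpcResponse {
  const id = req.id ?? null;
  const handler = methods[req.method];
  if (!handler) {
    // -32601 is the spec-defined "Method not found" error code.
    return { jsonrpc: "2.0", error: { code: -32601, message: "Method not found" }, id };
  }
  return { jsonrpc: "2.0", result: handler(req.params ?? {}), id };
}
```

Adding a new capability for the agent is one new entry in the `methods` table; there is no routing layer to touch.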

The Anatomy of a JSON-RPC 2.0 Request

To understand why this is the 2026 standard, you have to look at the wire. A JSON-RPC 2.0 request is beautifully simple:


```json
{
  "jsonrpc": "2.0",
  "method": "check_inventory_stability",
  "params": {"sku": "AEC-990-2026", "warehouse_id": "NYC-01"},
  "id": "agent-session-42"
}
```
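The reply is just as spartan. Per the JSON-RPC 2.0 spec, the server returns a single object carrying either a result or an error, echoing the request's id so the caller can correlate responses (the result payload here is illustrative):

```json
{
  "jsonrpc": "2.0",
  "result": {"sku": "AEC-990-2026", "in_stock": true},
  "id": "agent-session-42"
}
```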
(Pro-Tip: When debugging these requests locally, always run them through an offline JSON syntax checker to ensure your model hasn't dropped a comma.)

The "Parsing Nightmare" of Agentic Workflows

The simplicity of JSON-RPC comes with one massive vulnerability: syntactic fragility.

AI agents are probabilistic text generators. If an LLM hallucinates a trailing comma, forgets to escape a quote, or wraps the payload in Markdown backticks, the JSON-RPC payload arrives malformed.

When that malformed payload hits your backend, the damage isn't contained to a 400 response. The parse failure derails the agent's reasoning loop: the agent retries, attempts to correct its own syntax, and can spin through round trip after failed round trip, burning API tokens on every attempt.
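The first line of defense is to contain the failure at the boundary instead of letting the exception escape: catch the parse error and hand the agent a structured, spec-defined response it can react to (-32700 is JSON-RPC 2.0's "Parse error" code). A minimal sketch, including stripping the Markdown fences models love to add:

```typescript
// Guard the boundary: never let malformed agent output throw past this point.
function safeParse(raw: string):
  | { ok: true; value: unknown }
  | { ok: false; error: { jsonrpc: "2.0"; error: { code: number; message: string }; id: null } } {
  // Strip Markdown code fences the model may have wrapped the payload in.
  const cleaned = raw.replace(/^\s*```(?:json)?\s*/i, "").replace(/\s*```\s*$/, "");
  try {
    return { ok: true, value: JSON.parse(cleaned) };
  } catch {
    // -32700 is the spec's "Parse error": invalid JSON was received.
    return {
      ok: false,
      error: { jsonrpc: "2.0", error: { code: -32700, message: "Parse error" }, id: null },
    };
  }
}
```

The agent receives a well-formed error object instead of a dropped connection, which gives it something concrete to retry against.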

The Fix: Strict Validation

You cannot fix this with better prompt engineering. You must enforce structural integrity at the architecture level.

Never pass raw LLM output to an MCP server without validation. Use Zod schemas to guarantee the shape of the params object. If the JSON doesn't match the Zod schema, reject it before it hits your infrastructure.
If you don't want to write the boilerplate manually, you can instantly convert your JSON payloads into TypeScript Zod validators using automated client-side tools.
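The schema itself is only a few lines. Zod is the ergonomic way to express it; as a dependency-free sketch of the same idea, a hand-rolled type guard for the `check_inventory_stability` params from the earlier example looks like this:

```typescript
// Expected params shape for the check_inventory_stability example above.
interface InventoryParams {
  sku: string;
  warehouse_id: string;
}

// Structural guard: anything that doesn't match is rejected before it
// reaches the MCP server or any downstream infrastructure.
function isInventoryParams(value: unknown): value is InventoryParams {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.sku === "string" && typeof v.warehouse_id === "string";
}
```

With Zod this collapses to `z.object({ sku: z.string(), warehouse_id: z.string() })` plus a `safeParse` call, and you get the inferred TypeScript type for free.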
In 2026, the oldest protocols are running the newest intelligence. Respect the wire, validate your payloads, and stop building REST APIs for machines that just want to call functions.
P.S. If you want to audit your token payloads or generate your schemas without leaking production secrets to a cloud server, check out my suite of local-only developer utilities.
