A quiet shift is happening in developer experience:
Your API docs are not only for humans anymore.
They are for agents.
Agents read schemas, tool descriptions, OpenAPI specs, MCP manifests, examples, error messages, and logs. They decide which function to call, what arguments to pass, how to recover from failures, and whether to retry.
That means ambiguous API design is no longer just annoying. It directly reduces agent reliability.
Agents Are Silent API Users
Human developers complain in issues, Discord, Slack, Stack Overflow, or support tickets.
Agents usually do not.
They fail in quieter ways:
- choose the wrong tool
- pass subtly invalid arguments
- retry the same bad call
- hallucinate missing enum values
- ignore a useful endpoint
- give up and answer with incomplete context
If you only monitor HTTP status codes, you miss most of the story.
What Makes An API Agent-Ready?
1. Tool Names Should Be Boring
Agents do better with literal names.
Prefer:
- `search_customer_tickets`
- `get_invoice_by_id`
- `create_refund_request`
Avoid:
- `lookup`
- `process`
- `execute`
- `advanced_search_v2`
Human developers can click through docs. Agents need the affordance in the name.
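As a concrete sketch, here is what a boring, literal name looks like in an MCP-style tool definition (the field names are illustrative, not quoted from any spec):

```python
# A minimal MCP-style tool definition. The verb_noun name carries the
# affordance: an agent can often match intent to tool on the name alone.
tool = {
    "name": "search_customer_tickets",  # literal: what it does, over what data
    "description": "Search customer support tickets by keyword, customer id, or date range.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "customer_id": {"type": "string"},
        },
        "required": ["query"],
    },
}
```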
2. Descriptions Need Operational Boundaries
A good tool description says what the tool does and when not to use it.
Example:
Search customer support tickets by keyword, customer id, or date range. This returns ticket metadata and message excerpts only. Do not use for billing records or account permissions.
That last sentence is not fluff. It helps the agent avoid bad tool selection.
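A toy router (a stand-in for the model's tool choice, with made-up tool names) shows why the boundary sentence matters: it lets the selector veto a plausible-sounding but wrong tool.

```python
# Hypothetical sketch: two tool descriptions, one with an explicit boundary.
TOOLS = {
    "search_customer_tickets": (
        "Search customer support tickets by keyword, customer id, or date range. "
        "Returns ticket metadata and message excerpts only. "
        "Do not use for billing records or account permissions."
    ),
    "get_billing_records": "Fetch billing records for a customer account.",
}

def pick_tool(task: str) -> str:
    """Naive routing heuristic: prefer tools that mention the task topic,
    skipping any tool whose description explicitly excludes it."""
    for name, desc in TOOLS.items():
        main, _, excluded = desc.partition("Do not use for")
        if task in main.lower() and task not in excluded.lower():
            return name
    raise LookupError(f"no tool for {task!r}")

print(pick_tool("billing"))  # → get_billing_records, not the ticket search
```

Without the "Do not use for billing records" sentence, a keyword match on ticket excerpts could easily route a billing question to the wrong tool.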
3. Errors Should Teach Recovery
Bad error:

`invalid request`

Better error:

`customer_id` must be a UUID. Use `search_customers` first if you only have an email address.
Agents can recover from good errors. They spiral on vague ones.
4. Enum Values Need Examples
If a parameter accepts status, list valid values and show realistic examples.
Do not rely on the model guessing whether the value is:
- `in_progress`
- `in-progress`
- `IN_PROGRESS`
- `processing`
Small inconsistencies waste agent loops.
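A cheap server-side mitigation is to validate the enum and return the documented values plus a closest-match suggestion, so one bad guess costs one round trip instead of a loop. A sketch (the status values are hypothetical):

```python
import difflib

VALID_STATUSES = ["open", "in_progress", "resolved", "closed"]

def check_status(value: str) -> dict:
    """Accept a documented status, or explain exactly how to fix it."""
    if value in VALID_STATUSES:
        return {"ok": True}
    suggestion = difflib.get_close_matches(value, VALID_STATUSES, n=1)
    return {
        "ok": False,
        "message": f"Invalid status {value!r}.",
        "valid_values": VALID_STATUSES,
        "did_you_mean": suggestion[0] if suggestion else None,
    }

print(check_status("in-progress"))  # did_you_mean: in_progress
```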
5. Side Effects Need Explicit Marking
Every tool should be clearly labeled as:
- read-only
- write
- destructive
- external-facing
- money-moving
- permission-changing
This is not just documentation. It becomes the foundation for approval workflows.
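A sketch of how those labels feed an approval gate (tool names and policy are illustrative; note the fail-closed default for unlabeled tools):

```python
from enum import Enum

class Effect(Enum):
    READ_ONLY = "read-only"
    WRITE = "write"
    DESTRUCTIVE = "destructive"
    EXTERNAL = "external-facing"
    MONEY_MOVING = "money-moving"
    PERMISSION_CHANGING = "permission-changing"

TOOL_EFFECTS = {
    "search_customer_tickets": {Effect.READ_ONLY},
    "create_refund_request": {Effect.WRITE, Effect.MONEY_MOVING},
}

REQUIRES_APPROVAL = {Effect.DESTRUCTIVE, Effect.MONEY_MOVING, Effect.PERMISSION_CHANGING}

def needs_approval(tool: str) -> bool:
    """Gate risky tools behind human approval.
    Unlabeled tools are treated as destructive (fail closed)."""
    effects = TOOL_EFFECTS.get(tool, {Effect.DESTRUCTIVE})
    return bool(effects & REQUIRES_APPROVAL)

print(needs_approval("create_refund_request"))  # money-moving → True
```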
Observability Has To Change
If agents are API users, API analytics should include agent-specific signals:
- tool selected
- rejected arguments
- repeated invalid calls
- recovery success rate
- average retries per task
- common schema confusion
- abandoned tool sequences
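Several of the signals above fall out of a simple per-task call log. A sketch, with made-up records:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ToolCall:
    task_id: str
    tool: str
    ok: bool

def agent_metrics(calls: list[ToolCall]) -> dict:
    """Derive agent-specific signals: calls per task, recovery rate,
    and repeated invalid calls to the same tool within a task."""
    per_task = Counter(c.task_id for c in calls)
    recovered = {t for t in per_task if any(c.ok for c in calls if c.task_id == t)}
    failures = Counter((c.task_id, c.tool) for c in calls if not c.ok)
    return {
        "avg_calls_per_task": sum(per_task.values()) / len(per_task),
        "recovery_success_rate": len(recovered) / len(per_task),
        "repeated_invalid_calls": sum(1 for n in failures.values() if n > 1),
    }

calls = [
    ToolCall("t1", "search_customer_tickets", False),
    ToolCall("t1", "search_customer_tickets", False),  # retried the same bad call
    ToolCall("t1", "search_customers", True),           # then recovered
    ToolCall("t2", "create_refund_request", False),     # never recovered
]
print(agent_metrics(calls))
```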
The next generation of API monitoring will not only ask, "Was the endpoint healthy?"
It will ask:
Could the agent successfully use it?
Internal Data Is Part Of The Same Problem
A lot of companies are excited about the "agentic web," but the bigger immediate gap may be internal.
Your product docs, customer feedback, support tickets, runbooks, metrics, and codebase may be scattered across ten systems. If an agent cannot query that context cleanly, it will not matter how good the model is.
Agent-ready APIs are really agent-ready knowledge systems.
The Takeaway
APIs used to be designed for deterministic clients and human developers.
Now they also need to be designed for probabilistic tool users.
That means clearer names, tighter schemas, better errors, stronger examples, explicit permissions, and telemetry that shows where agents get confused.
The companies that do this well will make their products easier for both humans and agents.
Sources Worth Reading
- Hacker News: "AgentVoice - Get feedback from AI to improve your MCP" https://news.ycombinator.com/item?id=47148311
- Reddit r/mcp discussion on internal data readiness https://www.reddit.com/r/mcp/comments/1rv338m/im_convinced_the_agentic_web_is_coming_but_most/
- Forem/DEV API docs for programmatic publishing patterns https://developers.forem.com/api/v1