AI agents are becoming the primary interface between developers and APIs. Tools like Claude Code, OpenClaw, and MCP clients don't read your marketing site—they consume your API spec, documentation structure, and machine-readable metadata.
This guide covers the layered approach to making any SaaS API agent-consumable, from discovery to execution.
## Overview: All 7 Phases

| Phase | Focus | Impact |
|---|---|---|
| 1 | `llms.txt` — Discovery | High |
| 2 | OpenAPI spec enhancements | Very High |
| 3 | OpenClaw skill | High |
| 4 | MCP server (local) | High |
| 5 | TypeScript SDK | Medium |
| 6 | Remote MCP server | High |
| 7 | JSON-LD & polish | Medium |
Start with Phases 1–2 for the highest ROI. Add MCP server when you're ready for Claude Code and Cursor integration.
## The AI-Agent Stack
The ecosystem is converging on a standard layered architecture:
```
llms.txt      → Discovery ("what does this product do?")
OpenAPI spec  → Foundation (schema, types, descriptions)
  ├── MCP server      → Agent tool execution
  ├── TypeScript SDK  → Typed client for developers
  ├── OpenClaw skill  → Natural language API guide
  └── agents.json     → Multi-step flow orchestration
JSON-LD       → AI search visibility
```
The OpenAPI spec is the keystone. Everything else either generates from it or supplements it.
## Phase 1: Discovery Layer

### Create `llms.txt`

Serve `/llms.txt` from your public directory. This is a curated index of what your SaaS does, what the API offers, and links to key documentation.
```markdown
# YourSaaS
> One-line description of what your product does.

Longer description covering key value props and use cases.

## API Reference
- [Authentication](https://yoursaas.com/docs/auth): API keys, OAuth, rate limits
- [Resource A](https://yoursaas.com/docs/resource-a): What it does
- [Resource B](https://yoursaas.com/docs/resource-b): What it does

## Guides
- [Getting Started](https://yoursaas.com/docs/quickstart)
- [OpenAPI Spec](https://yoursaas.com/api/openapi.json)
```
**Why it matters:** AI agents and LLM-powered crawlers increasingly check for `/llms.txt` when they encounter a new API. It's becoming the robots.txt of AI agents.
### Create `llms-full.txt` (Optional)
An expanded version with full API reference content inlined—every endpoint, request/response schema, and example. Generate this from your OpenAPI spec at build time.
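One way to do the build-time generation, sketched in TypeScript (the type shapes below are a simplified slice of OpenAPI 3.x, not a full model; `renderLlmsFullTxt` is a hypothetical helper name):

```typescript
// Sketch: render an already-parsed OpenAPI document as Markdown for
// llms-full.txt. Run this at build time and write the output to
// public/llms-full.txt. Only summary/description are inlined here;
// a real generator would also emit schemas and examples.
type Operation = { summary?: string; description?: string };
type OpenApiDoc = {
  info: { title: string; description?: string };
  paths: Record<string, Record<string, Operation>>;
};

function renderLlmsFullTxt(doc: OpenApiDoc): string {
  const lines: string[] = [`# ${doc.info.title}`, ""];
  if (doc.info.description) lines.push(doc.info.description, "");
  for (const [path, methods] of Object.entries(doc.paths)) {
    for (const [method, op] of Object.entries(methods)) {
      // One section per endpoint, e.g. "## GET /users/{id}"
      lines.push(`## ${method.toUpperCase()} ${path}`);
      if (op.summary) lines.push(op.summary);
      if (op.description) lines.push(op.description);
      lines.push("");
    }
  }
  return lines.join("\n");
}
```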
## Phase 2: Enhance Your OpenAPI Spec
Your OpenAPI spec is the foundation. Optimize it for LLM consumption:
### Agent-Oriented Descriptions
Write descriptions that tell an agent when to use an endpoint:
```yaml
# Bad
summary: Get user

# Good
summary: >-
  Use to retrieve detailed information about a specific user by their ID.
  Returns profile data, permissions, and account status.
  Use when you need to verify user details before performing actions.
```
### Required Enhancements
| Element | Why It Matters |
|---|---|
| `operationId` | Clean, camelCase names become MCP tool names (`createUser`, not `postApiV1Users`) |
| Realistic examples | Agents generate better requests when they see real values |
| Documented enums | Prevents invalid values in generated requests |
| Side effects | Tell agents what changes ("Sends welcome email", "Charges credit card") |
| Rate limits | Per-endpoint documentation prevents hammering |
| Read-only vs write | Helps agents understand safe exploration vs mutations |
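Put together, an enhanced operation might look like this. It's expressed here as a TypeScript object for illustration (field names follow OpenAPI 3.x; the `x-side-effects` vendor extension and all values are hypothetical):

```typescript
// Illustrative only: one "agent-ready" operation with a clean operationId,
// a "Use to..." summary, a documented side effect, an enum, and a
// realistic example value.
const createUserOperation = {
  operationId: "createUser", // camelCase → becomes the MCP tool name
  summary:
    "Use to create a new user account. Sends a welcome email as a side effect.",
  "x-side-effects": ["Sends welcome email"], // vendor extension (assumption)
  requestBody: {
    content: {
      "application/json": {
        schema: {
          type: "object",
          properties: {
            // Documented enum prevents agents inventing invalid roles.
            role: { type: "string", enum: ["admin", "member", "viewer"] },
          },
          required: ["role"],
        },
        example: { role: "member" }, // realistic example value
      },
    },
  },
};
```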
### Document Rate Limits in Spec
Add response headers and info section documentation:
```yaml
headers:
  X-RateLimit-Limit:
    description: Request limit per minute for your tier
    schema:
      type: integer
      example: 100
  X-RateLimit-Remaining:
    description: Requests remaining in current window
    schema:
      type: integer
      example: 87
  Retry-After:
    description: Seconds to wait before retry (on 429)
    schema:
      type: integer
      example: 45
```
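On the client side, an agent or SDK can honor those headers. A minimal sketch, assuming the header names from the spec snippet above (`retryDelayMs` is a hypothetical helper):

```typescript
// Sketch: decide how long to wait before retrying, based on the
// rate-limit headers documented in the spec. Headers are passed as a
// lowercase-keyed Map to keep the example dependency-free.
function retryDelayMs(status: number, headers: Map<string, string>): number {
  if (status !== 429) return 0; // only back off on 429 Too Many Requests
  const retryAfter = headers.get("retry-after");
  const seconds = retryAfter ? parseInt(retryAfter, 10) : NaN;
  // Fall back to a fixed 1s delay if the header is missing or malformed.
  return Number.isNaN(seconds) ? 1000 : seconds * 1000;
}
```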
## Phase 3: OpenClaw Skill
OpenClaw agents can use your API directly, but a skill makes it natural-language accessible. Create a `SKILL.md` that teaches agents how to use your API.

**Structure:**
- Authentication setup
- Rate limits per tier
- All endpoints with request/response examples
- Common workflows ("list active users", "create and send invoice")
- Error handling guidance
**Distribution:** Publish to ClawdHub for discoverability.
## Phase 4: MCP Server
The Model Context Protocol (MCP) is becoming the standard for agent-tool integration. Claude Code, Claude Desktop, Cursor, and OpenClaw all support MCP servers.
### Build a Local MCP Server
Start with a stdio server published as an npm package:

```shell
npx @yoursaas/mcp-server --api-key=ys_xxxxx
```
**Tool inventory:** Map your API endpoints 1:1 to MCP tools:

| Resource | Example Tools |
|---|---|
| Users | `list_users`, `get_user`, `create_user`, `update_user` |
| Projects | `list_projects`, `create_project`, `delete_project` |
| Webhooks | `list_webhooks`, `create_webhook`, `test_webhook` |
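The 1:1 mapping can be largely mechanical: a common convention is to derive snake_case tool names from your camelCase `operationId`s. A sketch of that conversion (`toToolName` is a hypothetical helper, not any particular generator's API):

```typescript
// Sketch: derive an MCP tool name from an OpenAPI operationId by
// converting camelCase to snake_case, e.g. "createUser" -> "create_user".
function toToolName(operationId: string): string {
  return operationId
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2") // split at lower/upper boundaries
    .toLowerCase();
}
```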
**Resources:** Expose documentation as readable resources:

- `docs://api-reference` — Full API docs
- `docs://rate-limits` — Tier limits and usage
- `schema://enums` — Valid enum values
### Document MCP Setup

Add an example `.mcp.json` to your docs:
```json
{
  "mcpServers": {
    "yoursaas": {
      "command": "npx",
      "args": ["-y", "@yoursaas/mcp-server"],
      "env": {
        "YOURSAAS_API_KEY": "${YOURSAAS_API_KEY}"
      }
    }
  }
}
```
Register your MCP server in registries:
- Official MCP Registry
- mcpservers.org
- Smithery
## Phase 5: TypeScript SDK
Generate TypeScript types from your OpenAPI spec using `openapi-typescript`. Wrap with `openapi-fetch` for a typed client:
```typescript
import createClient from "openapi-fetch";
import type { paths } from "@yoursaas/sdk";

const client = createClient<paths>({
  baseUrl: "https://yoursaas.com/api/v1",
  headers: { Authorization: `Bearer ${apiKey}` },
});

const { data } = await client.GET("/users/{id}", {
  params: { path: { id: "abc123" } },
});
```
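Under the hood, the typed client substitutes `{id}` into the path template before issuing the request. Conceptually that substitution is just a template fill; a sketch of the idea (`fillPath` is a hypothetical helper, not the library's actual internals):

```typescript
// Sketch: substitute path parameters into an OpenAPI-style path template,
// e.g. "/users/{id}" with { id: "abc123" } -> "/users/abc123".
function fillPath(template: string, params: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, name: string) => {
    const value = params[name];
    if (value === undefined) throw new Error(`Missing path param: ${name}`);
    return encodeURIComponent(value); // keep URLs safe for arbitrary IDs
  });
}
```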
## Phase 6: Remote MCP Server (Advanced)
For zero-install experience, host a remote MCP server at https://mcp.yoursaas.com/mcp. Users add a URL and authenticate via OAuth.
**Hosting options:**
- Cloudflare Worker — Edge deployment, handles sessions, cheap
- Next.js API route — Simpler, but watch cold starts
- Separate Vercel project — Dedicated subdomain
Offer both:
- Remote (zero install, OAuth)
- Local (API key, air-gapped environments)
## Phase 7: Polish & Future-Proofing

### JSON-LD Structured Data
Add schema.org markup for AI search visibility:
```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "YourSaaS",
  "applicationCategory": "BusinessApplication",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "USD"
  }
}
</script>
```
### "Build with AI" Documentation Page
Create a dedicated page linking to:
- MCP server setup
- OpenClaw skill
- TypeScript SDK
- OpenAPI spec
- llms.txt
Follow Stripe's lead: docs.stripe.com/building-with-llms
## Key Takeaways

- **Start with `llms.txt` and OpenAPI** — highest ROI for effort
- **Write agent-oriented descriptions** — "Use to..." not just "Get..."
- **MCP server is table stakes** — Claude Code and Cursor users expect it
- **Document everything** — Agents can't guess what your API does
- **Stay current** — The agent ecosystem moves fast; watch for new standards
## Top comments (1)
Great breakdown of the layered approach. The llms.txt + OpenAPI combo is the right starting point — most SaaS products still don't even have a machine-readable entry point for agents to discover them.
One thing I'd add: discoverability is still a massive gap. You can have perfect llms.txt and OpenAPI specs, but if agents don't know your product exists, none of it matters. That's partly why directories that index SaaS tools with their agent-readiness metadata are becoming important. I've been tracking this space on saasrow.com — cataloging which tools have OpenAPI specs, MCP servers, etc. The ecosystem needs better discovery infrastructure alongside the integration layers you describe here.