Our docs are built for humans. Who reads them now?
We're in the Agent Era. AI-first development isn't coming; it's here.
LLM agents consume your API documentation too. Every time an AI assistant helps a developer integrate your API, it reads your docs. But Swagger UI, Redoc, and traditional OpenAPI docs weren't built for that.
The real cost of human-first docs
Traditional API docs are a disaster for AI agents:
- Thousands of wasted tokens on navigation chrome, sidebars, and UI elements the LLM has to skip
- Verbose JSON schemas with deeply nested $ref definitions that explode token counts
- Duplicated descriptions repeated across endpoints
- HTML noise that buries the actual API semantics
Your AI-powered integrations are paying the cost: in latency, in token spend, and in accuracy.
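To make the overhead concrete, here is a rough sketch comparing a verbose OpenAPI-style JSON schema against the compact notation swagent describes. It uses the common "one token is roughly four characters" heuristic, which is an approximation for illustration, not a real tokenizer:

```typescript
// A typical verbose OpenAPI fragment for a single endpoint,
// with nested parameter objects and a $ref in the response.
const verboseSchema = JSON.stringify({
  get: {
    summary: "List all pets",
    parameters: [
      { name: "limit", in: "query", schema: { type: "number" } },
      { name: "status", in: "query", schema: { type: "string", enum: ["available", "pending", "sold"] } },
    ],
    responses: {
      "200": { content: { "application/json": { schema: { $ref: "#/components/schemas/Pets" } } } },
    },
  },
}, null, 2)

// The same endpoint in the compact notation described below.
const compact = [
  "GET /pets",
  "Summary: List all pets",
  "Params: limit:number, status:available|pending|sold",
  "Response: [{id*, name*, tag}]",
].join("\n")

// Rough heuristic: ~4 characters per token.
const estimateTokens = (s: string) => Math.ceil(s.length / 4)

console.log("verbose:", estimateTokens(verboseSchema), "tokens (approx.)")
console.log("compact:", estimateTokens(compact), "tokens (approx.)")
```

The exact ratio depends on the spec, but the nested objects, quoting, and `$ref` indirection in the JSON version are pure overhead from an LLM's point of view.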
What if you could just say: "Learn my API"?
Tell your AI agent: "Learn https://api.alloverapps.com"
That's the vision. The agent fetches llms.txt, a compact, token-optimized representation of your entire API, and understands it immediately. No bloat. No parsing overhead. Just clean, structured API knowledge ready for the Agent Era.
That's what swagent enables.
How swagent works
swagent converts your OpenAPI spec into 3 outputs simultaneously:
- llms.txt: compact format for LLM consumption (~75% token reduction)
- Markdown: human-readable API reference
- HTML: shareable landing page for your API
A 65KB OpenAPI spec becomes ~16KB. That's the difference between an agent that nails your API in one shot vs. one that hallucinates endpoints halfway through.
The compact notation
Instead of verbose JSON schemas, swagent uses a notation designed for machines:
GET /pets
Summary: List all pets
Params: limit:number, status:available|pending|sold
Response: [{id*, name*, tag}]
POST /pets
Summary: Create a pet
Body: {name*, tag}
Response: {id*, name*, tag}
* = required · :type for non-strings · | for enums · [{...}] for arrays
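The notation above is regular enough to parse mechanically. The sketch below is a hypothetical helper, not swagent's actual parser, that turns one `Params:` line into a structured form, following the legend (`:type` for non-strings, `|` for enums):

```typescript
// Illustrative only: parse one "Params:" line of the compact notation.
type Param = { name: string; type: string; enum?: string[] }

function parseParams(line: string): Param[] {
  return line
    .replace(/^Params:\s*/, "")
    .split(",")
    .map((raw) => {
      // Split "name:spec"; a bare name defaults to string per the legend.
      const [name, spec = "string"] = raw.trim().split(":")
      // "|" marks an enum of string values; otherwise the spec is the type.
      return spec.includes("|")
        ? { name, type: "string", enum: spec.split("|") }
        : { name, type: spec }
    })
}

// Two structured params: limit (number) and status (string enum).
console.log(parseParams("Params: limit:number, status:available|pending|sold"))
```

A real implementation would also need the `*` required marker and nested `[{...}]` shapes, but the flat, line-oriented format keeps even that parsing trivial compared to walking a `$ref` graph.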
Quick start
npm install swagent
// Fastify
import Fastify from 'fastify'
import swagent from 'swagent'
const app = Fastify()
await app.register(swagent)
app.listen({ port: 3000 })
// Your API now exposes /llms.txt, /docs.md and a landing page
Also works with Express, Hono, Elysia, Koa, NestJS, Nitro/Nuxt, and as a CLI tool.
Built on the llms.txt standard
swagent aligns with the llms.txt standard, the emerging convention for machine-readable content optimized for AI consumption. The pattern is simple: alongside your human docs, expose a version built for machines.
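For reference, the llms.txt convention is itself plain Markdown: an H1 title, a blockquote summary, then sections. The skeleton below is illustrative; the endpoint lines reuse the compact notation shown earlier, but the exact layout of swagent's generated file is an assumption here, not a spec:

```markdown
# Pet Store API
> Compact, machine-readable reference for the Pet Store REST API.

## Endpoints
GET /pets
Summary: List all pets
Params: limit:number, status:available|pending|sold
Response: [{id*, name*, tag}]

POST /pets
Summary: Create a pet
Body: {name*, tag}
Response: {id*, name*, tag}
```

Because it is just text at a well-known path, any agent that can fetch a URL can consume it, with no SDK or scraping required.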
The Agent Era is already here. Your API docs should be ready for it.
- Live playground: swagent.dev
- npm: npmjs.com/package/swagent
Are you building AI agents that consume APIs? How are you feeding API specs to your LLMs today?