You have probably heard of llms.txt. A growing number of sites — Anthropic, Stripe, Cloudflare, Vercel — place a Markdown file at /llms.txt that lists their important pages so LLMs can navigate their documentation. It is a content map: here are my important pages.
That is useful. But it stops there.
When an AI agent visits a site on behalf of a user — not to read docs, but to actually use the site — it has to figure out everything on its own: whether an API exists, how to authenticate, what the parameters mean, how to format the output. There is no standard way for the site owner to communicate any of this.
robots.txt says what crawlers may not touch. llms.txt links to important pages. Neither describes how to use the site as a machine.
## The idea: context.txt
Place a context.txt file at your webroot:
```
https://yoursite.com/context.txt
```
An AI agent reading it immediately knows what your site is, where your APIs are, how to authenticate, and how to render your data — without the user having to explain any of it.
The format is plain Markdown. AI models parse it naturally, links are semantic and followable, and it renders correctly if a human navigates to the URL directly.
## A minimal example
```markdown
# Movie Database

A curated database of 80 well-known films spanning science fiction, action,
crime, comedy, horror, animation, and drama.

## APIs

- [Movie API](api/context.txt) — search and retrieve films by genre, decade, director, score, or tag
```
And api/context.txt:
```markdown
# Movie Database API

Read-only JSON API for film data.

[← Movie Database](../context.txt)

## Base path

- `/api`

## Authentication

- **Required:** no

## Endpoints

### `GET /api/movies`

Returns a list of films. Supports `?genre=`, `?decade=`, `?director=`, `?min_score=`, `?tag=`, and `?q=` filters.

### `GET /api/movies/{slug}`

Returns full detail for one film including synopsis, tags, runtime, and IMDb score.

## Rate limiting

- 60 requests per minute per IP
```
That is enough for an AI agent to call the API intelligently, without the user knowing anything about the API structure.
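As a sketch of how an agent might turn those documented filters into a request (the base URL is the placeholder from the example; `movies_url` is a hypothetical helper, not part of the spec):

```python
from urllib.parse import urlencode

BASE = "https://yoursite.com/api"  # base path declared in api/context.txt

def movies_url(**filters):
    """Build a query URL for GET /api/movies from the filters the context.txt documents."""
    allowed = {"genre", "decade", "director", "min_score", "tag", "q"}
    params = {k: v for k, v in filters.items() if k in allowed}  # ignore undocumented filters
    return f"{BASE}/movies?{urlencode(params)}" if params else f"{BASE}/movies"

print(movies_url(genre="sci-fi", decade=1980))
# → https://yoursite.com/api/movies?genre=sci-fi&decade=1980
```

The point is that everything the helper needs — base path, endpoint, parameter names — came from the context.txt, not from the user.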
## How it works
The root context.txt is a navigation hub. It links to sub-files the agent follows on demand:
```
/context.txt          ← start here: what is this site
/api/context.txt      ← how to query the data
/style/context.txt    ← how to render results as HTML
/skills/context.txt   ← reusable task patterns for common queries
/mcp/context.txt      ← MCP server endpoint and available tools
```
Every sub-file links back to the root. No central registry, no configuration — just files and links, the same way the web works.
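Because the files are plain Markdown, "following a link" needs nothing more than standard link syntax and URL resolution. A minimal sketch (the regex covers basic `[label](href)` links only, and the URLs are the placeholders from the examples):

```python
import re
from urllib.parse import urljoin

LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")  # basic [label](href) Markdown links

def sub_files(markdown_text, base_url):
    """Return (label, absolute URL) pairs for every link in a context.txt."""
    return [(label, urljoin(base_url, href))
            for label, href in LINK.findall(markdown_text)]

root = "# Movie Database\n- [Movie API](api/context.txt)\n"
print(sub_files(root, "https://yoursite.com/context.txt"))
# → [('Movie API', 'https://yoursite.com/api/context.txt')]
```

Relative links resolve against the file they appear in, so `../context.txt` in `api/context.txt` points back at the root — the back-link convention falls out of ordinary URL resolution.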
## Skills
A skills/context.txt describes reusable task patterns. Each skill defines a trigger — the kind of request it handles — and the steps to fulfil it: which API calls to make and how to present the results.
```markdown
## hidden-gems

**Trigger:** user asks for underrated, overlooked, or lesser-known films

**Steps:**

1. `GET /api/movies?min_score=7.4` — fetch films with a solid but not blockbuster score
2. Filter client-side to `imdb_score ≤ 8.4` — exclude the very well-known titles
3. Sort by `imdb_score` descending
4. Present as a table: Title, Year, Director, Genre, Score, one-line synopsis
```
A user can ask "show me hidden gems" without knowing anything about the API.
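The client-side half of the skill is trivially mechanical, which is the point — an agent can execute it from the description alone. A sketch of steps 2–4, assuming the field names from the skill (`imdb_score`, plus `title` and `year` as hypothetical response fields):

```python
def hidden_gems(movies):
    """Steps 2-4 of the hidden-gems skill; step 1 (the API call with min_score=7.4) is done server-side."""
    gems = [m for m in movies if 7.4 <= m["imdb_score"] <= 8.4]  # drop blockbusters above 8.4
    gems.sort(key=lambda m: m["imdb_score"], reverse=True)       # best gems first
    return [(m["title"], m["year"], m["imdb_score"]) for m in gems]

sample = [
    {"title": "A", "year": 1999, "imdb_score": 9.0},
    {"title": "B", "year": 1985, "imdb_score": 8.1},
    {"title": "C", "year": 2002, "imdb_score": 7.6},
]
print(hidden_gems(sample))
# → [('B', 1985, 8.1), ('C', 2002, 7.6)]
```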
## Private APIs
For sites where even the API structure is sensitive, context.txt can sit behind authentication. The public root signals that private access exists; the server returns 401 when an agent requests a protected file without credentials.
```
/context.txt              ← public: describes the site, hints at private access
/private/context.txt      ← 401 without credentials
/private/api/context.txt  ← full API description, also protected
```
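No special mechanism is needed on the server side — it is ordinary HTTP auth applied to a path prefix. A sketch of the decision an origin might make, using HTTP Basic auth and made-up credentials (`agent`/`s3cret` are illustrative only):

```python
import base64

def status_for(path, auth_header=None, valid=("agent", "s3cret")):
    """Return the HTTP status for a context.txt request: /private/ needs Basic credentials."""
    if not path.startswith("/private/"):
        return 200  # public files, including the root context.txt
    expected = "Basic " + base64.b64encode(":".join(valid).encode()).decode()
    return 200 if auth_header == expected else 401

print(status_for("/context.txt"))              # → 200, public root
print(status_for("/private/context.txt"))      # → 401, credentials missing
```

An agent that hits the 401 knows private access exists (the public root hinted at it) and can ask the user for credentials, then retry with an `Authorization` header.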
## Live demo
There is a working reference implementation at https://context-txt.onrender.com — a movie database with a read-only JSON API, a browser list view, and a full style guide, all described via context.txt.
Try this prompt in any AI tool with URL fetching enabled:
```
Read the context.txt at https://context-txt.onrender.com/context.txt — then show me all sci-fi films from the 1980s with an IMDb score above 8.0.
```
The agent reads the root context.txt, follows the link to api/context.txt to learn the filters, calls the API, then reads style/context.txt to render the results — all without the user explaining any of it.
For an HTML-rendered result with the site's actual styles:
```
Read https://context-txt.onrender.com/context.txt and https://context-txt.onrender.com/style/context.txt. Show me all sci-fi films from the 1980s with an IMDb score above 8.0 as a self-contained HTML page. Use the exact colours, badge design, and table layout from the style guide. Output only the HTML.
```
Note: URL fetching must be enabled in your AI tool. In chat interfaces this is often a setting or requires a paid plan — check your provider's documentation.
## How it relates to existing standards
| Standard | Purpose | How context.txt relates |
|---|---|---|
| `robots.txt` | Tell crawlers what not to index | context.txt tells AI what to do and how |
| `llms.txt` | Link map of important pages for LLMs | context.txt goes further: APIs, auth, domain vocab |
| `agent-manifest.txt` | Permissions and allowed agent actions | Complementary — context.txt is the usage guide |
| OpenAPI | Full REST API description | context.txt is a lightweight entry point; can reference OpenAPI |
context.txt is complementary to these standards. A site can have llms.txt for content navigation and context.txt for API-oriented interaction.
## Status and feedback
This is an early draft proposal. The spec, the reference implementation, and the example files are all on GitHub: https://github.com/aneck-lw/context-txt
I am looking for:
- Feedback on the format and the linking model
- Real-world use cases this would help with
- Anything that feels missing or overcomplicated
What do you think?