Jasper van Veen

agents.txt — a proposed web standard for AI agents

The web has robots.txt. It's been around since 1994, and it answers one question well: can you look at this?

AI agents don't just look. They book flights, submit forms, call APIs, authenticate as users, and transact on behalf of people. And there's no standard for any of it.

I've been thinking about this gap for a while, and last week I drafted a proposal: agents.txt.

The idea

Place a file at https://yourdomain.com/agents.txt. It tells agents what they can do, how to do it, and under what terms:

Site-Name: ExampleShop
Site-Description: Online marketplace for sustainable home goods.

Allow-Training: no
Allow-RAG: yes
Allow-Actions: no
Preferred-Interface: rest
API-Docs: https://api.exampleshop.com/openapi.json
MCP-Server: https://mcp.exampleshop.com

[Agent: *]
Allow: /products/*
Allow: /search
Disallow: /checkout

[Agent: verified-purchasing-agent]
Allow: /checkout
Auth-Required: yes
Auth-Method: oauth2
Allow-Actions: yes
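The draft doesn't ship a reference parser yet, so here's a rough sketch of how an agent might read the format above. The field names come from the example; the parsing rules (one `key: value` per line, `[Agent: name]` opening a section, `Allow`/`Disallow` being repeatable) are my assumptions, not part of the spec:

```python
def parse_agents_txt(text: str) -> dict:
    """Parse an agents.txt body into a global section plus per-agent sections.

    This grammar is inferred from the example in the draft, not normative.
    """
    policy = {"global": {}, "agents": {}}
    current = policy["global"]
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[Agent:") and line.endswith("]"):
            # Start a new per-agent section, e.g. [Agent: *]
            name = line[len("[Agent:"):-1].strip()
            current = policy["agents"].setdefault(name, {})
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key in ("Allow", "Disallow"):
            # Path rules are repeatable, so collect them in a list.
            current.setdefault(key.lower(), []).append(value)
        else:
            current[key] = value
    return policy
```

An agent would fetch `https://yourdomain.com/agents.txt`, run it through something like this, and consult `policy["global"]` before consulting the section matching its own identity.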

Why would agents comply?

Two reasons:

Self-interest. When a site advertises an MCP server or REST API, a well-built agent wants to use it — it's faster and more reliable than scraping HTML. Compliance isn't a favour; it's rational.

Legal posture. A published, machine-readable policy makes ignoring it actionable. "You had a clearly stated standard and ignored it" substantially strengthens arguments under the US Computer Fraud and Abuse Act (CFAA) and the UK Computer Misuse Act.

What it covers (that robots.txt does not)

Concern                   robots.txt   agents.txt
Crawl permissions         yes          yes
Action permissions        no           yes
API / MCP discovery       no           yes
Training / RAG consent    no           yes
Agent identity tiers      no           yes
Auth methods              no           yes

They're complementary — sites should have both.
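To make the "action permissions" row concrete, here's a sketch of how an agent might evaluate per-agent path rules before acting. It assumes the file has been parsed into a dict of sections, each with lower-cased `allow`/`disallow` lists; the precedence (agent-specific section checked before `[Agent: *]`, first matching rule wins, default deny) is my guess, since the draft doesn't define it yet:

```python
from fnmatch import fnmatch

def may_access(policy: dict, agent: str, path: str) -> bool:
    """Decide whether `agent` may act on `path` under a parsed policy.

    Precedence here is an assumption, not part of the draft:
    agent-specific section first, then the wildcard section;
    within a section, Disallow beats Allow; no match means deny.
    """
    for section in (agent, "*"):
        rules = policy["agents"].get(section, {})
        if any(fnmatch(path, p) for p in rules.get("disallow", [])):
            return False
        if any(fnmatch(path, p) for p in rules.get("allow", [])):
            return True
    return False  # default deny when no rule matches
```

Under these assumptions, a verified purchasing agent reaches `/checkout` via its own section, while an anonymous agent is stopped by the wildcard section's `Disallow: /checkout`.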

Status

Draft v0.1, published under CC BY 4.0. I'd love feedback, pushback, and contributions.

github.com/jaspervanveen/agents-txt

Three open questions I'm genuinely unsure about:

  1. Agent identity verification — how do you prove an agent is who it claims to be?
  2. Should capabilities use a controlled vocabulary, or free-form strings?
  3. As MCP matures, how tightly should this integrate with it?

What do you think?
