<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jasper van Veen</title>
    <description>The latest articles on DEV Community by Jasper van Veen (@jaspervanveen).</description>
    <link>https://dev.to/jaspervanveen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3823110%2F49552e82-1d8a-41c8-af43-8e940b9dc947.png</url>
      <title>DEV Community: Jasper van Veen</title>
      <link>https://dev.to/jaspervanveen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jaspervanveen"/>
    <language>en</language>
    <item>
      <title>agent-manifest.txt — a proposed web standard for AI agents (formerly agents.txt)</title>
      <dc:creator>Jasper van Veen</dc:creator>
      <pubDate>Fri, 13 Mar 2026 22:46:14 +0000</pubDate>
      <link>https://dev.to/jaspervanveen/agentstxt-a-proposed-web-standard-for-ai-agents-20lb</link>
      <guid>https://dev.to/jaspervanveen/agentstxt-a-proposed-web-standard-for-ai-agents-20lb</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Editor's note (March 2026):&lt;/strong&gt; This proposal has been renamed from &lt;code&gt;agents.txt&lt;/code&gt; to &lt;code&gt;agent-manifest.txt&lt;/code&gt;. The original name was chosen for its direct analogy to &lt;code&gt;robots.txt&lt;/code&gt;, but the &lt;code&gt;agents.txt&lt;/code&gt; namespace became crowded: an independent IETF Internet-Draft (&lt;code&gt;draft-srijal-agents-policy-00&lt;/code&gt;) had already claimed that filename, and multiple community projects were independently using it for different purposes. The new name more accurately reflects the document's purpose — a rich capability &lt;em&gt;manifest&lt;/em&gt; — while keeping a clean path toward formal standardisation. The GitHub repository, spec, and all references have been updated. The article text below preserves the original framing.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;The web has &lt;code&gt;robots.txt&lt;/code&gt;. It's been around since 1994, and it answers one question well: &lt;em&gt;can you look at this?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI agents don't just look. They book flights, submit forms, call APIs, authenticate as users, and transact on behalf of people. And there's no standard for any of it.&lt;/p&gt;

&lt;p&gt;I've been thinking about this gap for a while, and last week I drafted a proposal: &lt;strong&gt;&lt;code&gt;agent-manifest.txt&lt;/code&gt;&lt;/strong&gt; (originally &lt;code&gt;agents.txt&lt;/code&gt;).&lt;/p&gt;

&lt;h2&gt;The idea&lt;/h2&gt;

&lt;p&gt;Place a file at &lt;code&gt;https://yourdomain.com/agent-manifest.txt&lt;/code&gt;. It tells agents what they can do, how to do it, and under what terms:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="err"&gt;Site-Name:&lt;/span&gt; &lt;span class="err"&gt;ExampleShop&lt;/span&gt;
&lt;span class="err"&gt;Site-Description:&lt;/span&gt; &lt;span class="err"&gt;Online&lt;/span&gt; &lt;span class="err"&gt;marketplace&lt;/span&gt; &lt;span class="err"&gt;for&lt;/span&gt; &lt;span class="err"&gt;sustainable&lt;/span&gt; &lt;span class="err"&gt;home&lt;/span&gt; &lt;span class="err"&gt;goods.&lt;/span&gt;

&lt;span class="err"&gt;Allow-Training:&lt;/span&gt; &lt;span class="err"&gt;no&lt;/span&gt;
&lt;span class="err"&gt;Allow-RAG:&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;Allow-Actions:&lt;/span&gt; &lt;span class="err"&gt;no&lt;/span&gt;
&lt;span class="err"&gt;Preferred-Interface:&lt;/span&gt; &lt;span class="err"&gt;rest&lt;/span&gt;
&lt;span class="err"&gt;API-Docs:&lt;/span&gt; &lt;span class="err"&gt;https://api.exampleshop.com/openapi.json&lt;/span&gt;
&lt;span class="err"&gt;MCP-Server:&lt;/span&gt; &lt;span class="err"&gt;https://mcp.exampleshop.com&lt;/span&gt;

&lt;span class="nn"&gt;[Agent: *]&lt;/span&gt;
&lt;span class="err"&gt;Allow:&lt;/span&gt; &lt;span class="err"&gt;/products/*&lt;/span&gt;
&lt;span class="err"&gt;Allow:&lt;/span&gt; &lt;span class="err"&gt;/search&lt;/span&gt;
&lt;span class="err"&gt;Disallow:&lt;/span&gt; &lt;span class="err"&gt;/checkout&lt;/span&gt;

&lt;span class="nn"&gt;[Agent: verified-purchasing-agent]&lt;/span&gt;
&lt;span class="err"&gt;Allow:&lt;/span&gt; &lt;span class="err"&gt;/checkout&lt;/span&gt;
&lt;span class="err"&gt;Auth-Required:&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;span class="err"&gt;Auth-Method:&lt;/span&gt; &lt;span class="err"&gt;oauth2&lt;/span&gt;
&lt;span class="err"&gt;Allow-Actions:&lt;/span&gt; &lt;span class="err"&gt;yes&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
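&lt;p&gt;On the consuming side, a parser stays small. Here's a sketch (hypothetical: the draft ships no reference parser, and the key and section names below simply mirror the example above):&lt;/p&gt;

```python
# Minimal client-side parser for the draft agent-manifest.txt format.
# Hypothetical sketch: the keys and [Agent: ...] section names mirror
# the example in this post, not a frozen specification.

def parse_manifest(text):
    """Split a manifest into global directives plus per-agent sections.

    Returns {"*global*": {...}, "agent-name": {...}}; the repeatable
    Allow/Disallow keys accumulate into lists.
    """
    sections = {"*global*": {}}
    current = sections["*global*"]
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if line.startswith("[Agent:") and line.endswith("]"):
            # start a new per-agent section, e.g. [Agent: *]
            name = line[len("[Agent:"):-1].strip()
            current = sections.setdefault(name, {})
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key in ("Allow", "Disallow"):
            current.setdefault(key, []).append(value)
        else:
            current[key] = value  # last occurrence wins for scalar keys
    return sections
```

&lt;p&gt;An agent would fetch &lt;code&gt;/agent-manifest.txt&lt;/code&gt;, parse it, and consult its own named section before falling back to &lt;code&gt;[Agent: *]&lt;/code&gt;.&lt;/p&gt;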



&lt;h2&gt;Why would agents comply?&lt;/h2&gt;

&lt;p&gt;Two reasons:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-interest.&lt;/strong&gt; When a site advertises an MCP server or REST API, a well-built agent &lt;em&gt;wants&lt;/em&gt; to use it - it's faster and more reliable than scraping HTML. Compliance isn't a favour; it's rational.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legal posture.&lt;/strong&gt; A published machine-readable policy makes ignoring it actionable. "You had a standard and ignored it" substantially strengthens CFAA and Computer Misuse Act arguments.&lt;/p&gt;

&lt;h2&gt;What it covers (that &lt;code&gt;robots.txt&lt;/code&gt; does not)&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concern&lt;/th&gt;
&lt;th&gt;robots.txt&lt;/th&gt;
&lt;th&gt;agent-manifest.txt&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Crawl permissions&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Action permissions&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API / MCP discovery&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Training / RAG consent&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent identity tiers&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Auth methods&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;They're complementary - sites should have both.&lt;/p&gt;
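&lt;p&gt;In practice the two compose naturally: the agent keeps asking &lt;code&gt;robots.txt&lt;/code&gt; "may I look?" and asks the manifest "may I act?". A sketch of the first half using Python's standard &lt;code&gt;urllib.robotparser&lt;/code&gt; (the robots rules and agent name here are made up for illustration):&lt;/p&gt;

```python
# Sketch: robots.txt still governs crawling; agent-manifest.txt would
# then govern actions on the same paths. Rules below are illustrative.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /checkout
"""

def may_crawl(agent, url, robots_text=ROBOTS_TXT):
    """Answer the robots.txt half of the question: may this agent look?"""
    rp = RobotFileParser()
    rp.parse(robots_text.splitlines())
    return rp.can_fetch(agent, url)
```

&lt;p&gt;Whether the agent may then &lt;em&gt;act&lt;/em&gt; on &lt;code&gt;/checkout&lt;/code&gt; - submit the form, complete a purchase - is exactly the question &lt;code&gt;robots.txt&lt;/code&gt; can't express and the manifest can.&lt;/p&gt;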

&lt;h2&gt;Status&lt;/h2&gt;

&lt;p&gt;Draft v0.3.0, published under CC BY 4.0. I'd love feedback, pushback, and contributions.&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/jaspervanveen/agents-txt" rel="noopener noreferrer"&gt;https://github.com/jaspervanveen/agents-txt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Three open questions I'm genuinely unsure about:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Agent identity verification - how do you prove an agent is who it claims to be?&lt;/li&gt;
&lt;li&gt;Should capabilities use a controlled vocabulary, or free-form strings?&lt;/li&gt;
&lt;li&gt;As MCP matures, how tightly should this integrate with it?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What do you think?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>standard</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
