<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shubham Gupta</title>
    <description>The latest articles on DEV Community by Shubham Gupta (@unlikefraction).</description>
    <link>https://dev.to/unlikefraction</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F726689%2F3356ffaf-16b3-47a7-bca3-04af55c45278.jpg</url>
      <title>DEV Community: Shubham Gupta</title>
      <link>https://dev.to/unlikefraction</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/unlikefraction"/>
    <language>en</language>
    <item>
      <title>Why 96% of the Web Is Invisible to AI Agents (And What You Can Do About It)</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Thu, 26 Mar 2026 11:12:50 +0000</pubDate>
      <link>https://dev.to/unlikefraction/why-96-of-the-web-is-invisible-to-ai-agents-and-what-you-can-do-about-it-34bn</link>
      <guid>https://dev.to/unlikefraction/why-96-of-the-web-is-invisible-to-ai-agents-and-what-you-can-do-about-it-34bn</guid>
      <description>&lt;p&gt;Your website looks great. Clean design, fast load times, maybe even a perfect Lighthouse score.&lt;/p&gt;

&lt;p&gt;But to an AI agent? It might as well not exist.&lt;/p&gt;

&lt;h2&gt;AI agents don't have eyes&lt;/h2&gt;

&lt;p&gt;Here's something most developers haven't internalized yet: AI agents don't open a browser. They don't render your beautiful CSS. They don't execute your JavaScript.&lt;/p&gt;

&lt;p&gt;They send an HTTP request. They get back HTML (hopefully). They try to parse it into something meaningful.&lt;/p&gt;

&lt;p&gt;That's it. No clicking cookie banners. No waiting for React to hydrate. No solving CAPTCHAs.&lt;/p&gt;
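
&lt;p&gt;That request-and-parse loop is small enough to sketch. The snippet below is illustrative (the user agent string and the heuristic are hypothetical, not any specific agent's implementation), but it's essentially the entire "browser" an agent brings to your site:&lt;/p&gt;

```python
# Illustrative sketch: one HTTP request, no JavaScript execution, no rendering.
# The User-Agent value is a hypothetical placeholder.
from urllib.request import Request, urlopen

def fetch_raw_html(url):
    req = Request(url, headers={"User-Agent": "example-agent/0.1"})
    with urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def looks_client_rendered(html):
    # Crude heuristic: the response is just an empty mount point for a JS
    # framework, so the real content would only appear after JavaScript runs.
    lowered = html.lower()
    return 'id="root"' in lowered or 'id="app"' in lowered
```

&lt;p&gt;If &lt;code&gt;looks_client_rendered&lt;/code&gt; comes back true for your homepage, an agent sees the empty shell, not your content.&lt;/p&gt;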

&lt;p&gt;And this is where things fall apart for most of the web.&lt;/p&gt;

&lt;h2&gt;We tested 834 websites. The results were brutal.&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;SiliconFriendly&lt;/a&gt;, we built a scoring system (L0 to L5) that rates how well a website works with AI agents. Then we pointed it at 834 real websites.&lt;/p&gt;

&lt;p&gt;The breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;50.4%&lt;/strong&gt; scored L0-L1 — essentially hostile to agents. Blocked crawlers, no structured data, JavaScript-only rendering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;35.2%&lt;/strong&gt; scored L2-L3 — partially accessible, but missing key pieces.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;13.6%&lt;/strong&gt; scored L4 — good, with structured APIs or solid metadata.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;0.8%&lt;/strong&gt; scored L5 — fully agent-friendly. That's 7 websites out of 834.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let that sink in. &lt;strong&gt;96% of the websites we tested are partially or fully invisible to AI agents.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;What actually blocks AI agents?&lt;/h2&gt;

&lt;p&gt;It's not one thing. It's a stack of problems:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. JavaScript-rendered content.&lt;/strong&gt; If your content only exists after JS executes, agents see an empty page. SPAs are the worst offenders. Your React app with client-side rendering? To an AI agent, it's a blank &lt;code&gt;&amp;lt;div id="root"&amp;gt;&amp;lt;/div&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Aggressive bot blocking.&lt;/strong&gt; CAPTCHAs, Cloudflare challenges, rate limiting — all designed to stop bots. They work. They stop AI agents too. The irony: you're blocking the same AI systems that could be recommending your product to millions of users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. No structured data.&lt;/strong&gt; Schema.org markup, OpenGraph tags, clean semantic HTML — these are how agents understand what your page is &lt;em&gt;about&lt;/em&gt;. Without them, agents are guessing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. No &lt;code&gt;llms.txt&lt;/code&gt;.&lt;/strong&gt; This is the new &lt;code&gt;robots.txt&lt;/code&gt; for the AI era. A simple file at your domain root that tells AI agents what your site does, what content matters, and how to interact with it. Almost nobody has one yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. No machine-readable APIs.&lt;/strong&gt; If the only way to get data from your site is to scrape HTML, you're making agents work ten times harder than they need to.&lt;/p&gt;
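
&lt;p&gt;Of these, the &lt;code&gt;llms.txt&lt;/code&gt; from point 4 is the cheapest fix. A minimal sketch (the site and links here are made up for illustration):&lt;/p&gt;

```text
# Example Co

> Example Co sells illustrative widgets. One line on what the site is,
> then links to the content that matters.

## Key pages

- [Pricing](/pricing): Plans and limits
- [Docs](/docs): Integration guide
```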

&lt;h2&gt;What makes a website agent-friendly?&lt;/h2&gt;

&lt;p&gt;The websites that scored L4-L5 had a few things in common:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They serve real HTML.&lt;/strong&gt; Server-side rendering or static generation. Content is in the initial response, not loaded by JavaScript after the fact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They have an &lt;code&gt;llms.txt&lt;/code&gt; file.&lt;/strong&gt; A simple, markdown-formatted file that tells AI systems: here's who we are, here's what we offer, here's our most important content. Think of it as a cover letter for AI agents visiting your site.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They expose structured data.&lt;/strong&gt; JSON-LD schema markup on key pages. Not just basic &lt;code&gt;Organization&lt;/code&gt; schema — product info, FAQs, how-to guides, all marked up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They have documented APIs.&lt;/strong&gt; Even a simple REST API with clear docs gives agents a clean way to interact with your service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They don't block AI crawlers by default.&lt;/strong&gt; Check your &lt;code&gt;robots.txt&lt;/code&gt;. If you're blocking GPTBot, ClaudeBot, or similar user agents, you're opting out of the AI-powered web.&lt;/p&gt;
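
&lt;p&gt;The opt-in is explicit. An illustrative &lt;code&gt;robots.txt&lt;/code&gt; fragment that allows the two crawlers named above (adjust to your own policy):&lt;/p&gt;

```text
# Allow OpenAI's and Anthropic's crawlers (illustrative; extend as needed)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /
```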

&lt;h2&gt;Why this matters now&lt;/h2&gt;

&lt;p&gt;AI agents are going to be a massive traffic and conversion channel. Not next year — &lt;em&gt;now&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When someone asks ChatGPT "what's the best project management tool for small teams?", the answer comes from somewhere. If your website is invisible to the systems generating that answer, you don't exist in that conversation.&lt;/p&gt;

&lt;p&gt;This isn't theoretical. Companies are already seeing measurable traffic from AI referrals. The ones that show up are the ones whose sites are actually parseable by these systems.&lt;/p&gt;

&lt;h2&gt;Check your own site&lt;/h2&gt;

&lt;p&gt;We built &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;SiliconFriendly&lt;/a&gt; specifically for this. Punch in your URL and get an instant score (L0-L5) with specific recommendations.&lt;/p&gt;

&lt;p&gt;It checks your &lt;code&gt;llms.txt&lt;/code&gt;, structured data, JavaScript dependency, crawler accessibility, and more. Takes about 30 seconds.&lt;/p&gt;

&lt;p&gt;The fix isn't hard for most sites. Add an &lt;code&gt;llms.txt&lt;/code&gt;. Add some Schema.org markup. Make sure your critical content is in the HTML response. Unblock AI crawlers in &lt;code&gt;robots.txt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The web is splitting into two: sites that AI agents can work with, and sites that are invisible to them. Right now, 96% of sites are on the wrong side.&lt;/p&gt;

&lt;p&gt;Don't be part of the 96%.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Check your site's AI-agent readiness at &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;siliconfriendly.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>agents</category>
      <category>llm</category>
    </item>
    <item>
      <title>How to make your website AI-agent friendly in 30 minutes</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Fri, 20 Mar 2026 12:34:55 +0000</pubDate>
      <link>https://dev.to/unlikefraction/how-to-make-your-website-ai-agent-friendly-in-30-minutes-3982</link>
      <guid>https://dev.to/unlikefraction/how-to-make-your-website-ai-agent-friendly-in-30-minutes-3982</guid>
      <description>&lt;p&gt;AI agents are the new browsers. They're crawling, reading, and trying to interact with your site right now — and &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;over 50% of websites are basically unusable for them&lt;/a&gt;. If you want agents to actually work with your site, here's how to fix that in 30 minutes.&lt;/p&gt;

&lt;h2&gt;Step 1: Add llms.txt (5 min)&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;/llms.txt&lt;/code&gt; file at your site root. This is a markdown-formatted text file that tells AI agents what your site does and how to use it. Think of it as &lt;code&gt;robots.txt&lt;/code&gt;, but instead of saying "don't crawl this," you're saying "here's how to understand me."&lt;/p&gt;

&lt;p&gt;The spec lives at &lt;a href="https://llmstxt.org" rel="noopener noreferrer"&gt;llmstxt.org&lt;/a&gt;. Here's what a good one looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Acme API

&amp;gt; Acme provides a REST API for managing invoices and payments.

## Docs

- [API Reference](/docs/api): Full endpoint documentation
- [Authentication](/docs/auth): How to authenticate requests
- [Webhooks](/docs/webhooks): Event notification setup

## Key Endpoints

- POST /api/invoices - Create a new invoice
- GET /api/invoices/{id} - Retrieve invoice details
- GET /api/customers - List customers

## Notes

- All endpoints return JSON
- Rate limit: 100 requests/minute
- Auth: Bearer token in Authorization header
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Drop this at &lt;code&gt;yourdomain.com/llms.txt&lt;/code&gt;. That's it. Agents that support the spec will read it before trying to parse your HTML.&lt;/p&gt;

&lt;p&gt;You can also add a &lt;code&gt;/llms-full.txt&lt;/code&gt; with your complete documentation in markdown — useful for agents that want to ingest everything at once.&lt;/p&gt;

&lt;h2&gt;Step 2: Add structured data (10 min)&lt;/h2&gt;

&lt;p&gt;Agents parse your HTML, but they understand structured data &lt;em&gt;much&lt;/em&gt; better. The fastest win is adding JSON-LD markup to your pages using &lt;a href="https://schema.org" rel="noopener noreferrer"&gt;schema.org&lt;/a&gt; vocabulary.&lt;/p&gt;

&lt;p&gt;For a SaaS product page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"application/ld+json"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@context&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://schema.org&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SoftwareApplication&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Acme Invoicing&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;description&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Automated invoice management for small businesses&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;applicationCategory&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;BusinessApplication&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;operatingSystem&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Web&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;offers&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Offer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;price&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;29.00&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;priceCurrency&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;USD&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;url&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://acme.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;documentation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://acme.com/docs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a blog post, use &lt;code&gt;Article&lt;/code&gt;. For an API, use &lt;code&gt;WebAPI&lt;/code&gt;. For a business, use &lt;code&gt;Organization&lt;/code&gt;.&lt;/p&gt;
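
&lt;p&gt;For example, an &lt;code&gt;Article&lt;/code&gt; payload for a blog post might look like this (all values are placeholders; it goes inside the same &lt;code&gt;&amp;lt;script type="application/ld+json"&amp;gt;&lt;/code&gt; wrapper as the example above):&lt;/p&gt;

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How we cut invoice processing time in half",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2026-03-20",
  "description": "Placeholder summary of the post for agents and search engines."
}
```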

&lt;p&gt;Other quick wins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use semantic HTML&lt;/strong&gt;: &lt;code&gt;&amp;lt;nav&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;main&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;article&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;header&amp;gt;&lt;/code&gt; — not &lt;code&gt;&amp;lt;div class="nav-wrapper-v2"&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add descriptive meta tags&lt;/strong&gt;: &lt;code&gt;&amp;lt;meta name="description"&amp;gt;&lt;/code&gt; actually matters again.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use clear heading hierarchy&lt;/strong&gt;: Agents use &lt;code&gt;h1&lt;/code&gt;→&lt;code&gt;h6&lt;/code&gt; to build a content tree, just like screen readers do.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Step 3: Make your API agent-accessible (10 min)&lt;/h2&gt;

&lt;p&gt;If you have an API, three things make it dramatically easier for agents to use:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add an OpenAPI spec.&lt;/strong&gt; Host it at a known location like &lt;code&gt;/openapi.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"openapi"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"info"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Acme API"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"paths"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"/api/invoices"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"post"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"summary"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Create an invoice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"requestBody"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"required"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"application/json"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"schema"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"object"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"required"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"customer_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="nl"&gt;"customer_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="nl"&gt;"amount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"number"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Return structured JSON errors&lt;/strong&gt;, not HTML error pages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"invalid_request"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"customer_id is required"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Agents can't parse your pretty 404 page. They need machine-readable errors to self-correct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document rate limits&lt;/strong&gt; in response headers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1710892800
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
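
&lt;p&gt;On the other side of the wire, an agent can use those headers to back off instead of hammering your API. A small illustrative helper (the header names match the example above; nothing here is a specific library's API):&lt;/p&gt;

```python
# Illustrative sketch: how long an agent should wait before its next request,
# based on the rate-limit headers shown above.
import time

def seconds_until_allowed(headers, now=None):
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > 0:
        return 0.0  # Budget left: no need to wait.
    reset_at = int(headers.get("X-RateLimit-Reset", "0"))  # Unix timestamp
    current = time.time() if now is None else now
    return max(0.0, reset_at - current)
```

&lt;p&gt;Call it between requests and &lt;code&gt;time.sleep&lt;/code&gt; for the returned duration.&lt;/p&gt;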



&lt;p&gt;If you don't have an API, consider adding a single read-only endpoint — even a &lt;code&gt;/api/info.json&lt;/code&gt; that returns basic metadata about your site is useful for agent discovery.&lt;/p&gt;
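
&lt;p&gt;That discovery endpoint can be tiny. An illustrative &lt;code&gt;/api/info.json&lt;/code&gt; payload, reusing the hypothetical Acme example (fields and URLs are placeholders):&lt;/p&gt;

```json
{
  "name": "Acme Invoicing",
  "description": "Automated invoice management for small businesses",
  "docs": "https://acme.com/docs",
  "openapi": "https://acme.com/openapi.json",
  "llms_txt": "https://acme.com/llms.txt"
}
```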

&lt;h2&gt;Step 4: Check your score (5 min)&lt;/h2&gt;

&lt;p&gt;Go to &lt;a href="https://siliconfriendly.com/check" rel="noopener noreferrer"&gt;siliconfriendly.com/check&lt;/a&gt; and enter your domain. It evaluates your site across 30 criteria on an L0–L5 scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;L0&lt;/strong&gt; – No agent support (most sites today)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L1&lt;/strong&gt; – Basic: has llms.txt or structured metadata&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L2&lt;/strong&gt; – Navigable: semantic HTML, clear content hierarchy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L3&lt;/strong&gt; – Interactable: API with OpenAPI spec, structured errors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L4&lt;/strong&gt; – Integrated: agent.json, A2A support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;L5&lt;/strong&gt; – Fully agent-native&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most sites start at L0. Following steps 1–3 above should get you to L2 or L3.&lt;/p&gt;

&lt;h2&gt;Bonus: Add agent.json&lt;/h2&gt;

&lt;p&gt;For agent-to-agent (A2A) discovery, you can add a &lt;code&gt;/.well-known/agent.json&lt;/code&gt; file. This tells other AI agents what your agent (or service) can do and how to communicate with it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Acme Invoicing Agent"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Manages invoices and payments"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://acme.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"capabilities"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"create_invoice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lookup_customer"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"protocol"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"auth"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"bearer"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is newer and less standardized, but the &lt;code&gt;/.well-known/&lt;/code&gt; convention is gaining traction. If you're building anything with agent interoperability in mind, it's worth adding now.&lt;/p&gt;




&lt;h2&gt;TL;DR&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;What&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;llms.txt&lt;/td&gt;
&lt;td&gt;5 min&lt;/td&gt;
&lt;td&gt;Tell agents what your site does&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structured data&lt;/td&gt;
&lt;td&gt;10 min&lt;/td&gt;
&lt;td&gt;JSON-LD + semantic HTML&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API access&lt;/td&gt;
&lt;td&gt;10 min&lt;/td&gt;
&lt;td&gt;OpenAPI spec + JSON errors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Check score&lt;/td&gt;
&lt;td&gt;5 min&lt;/td&gt;
&lt;td&gt;siliconfriendly.com&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The bar is low right now — most sites do none of this. Spending 30 minutes puts you ahead of half the web.&lt;/p&gt;

&lt;p&gt;Check where you stand: &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;siliconfriendly.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Your AI Agents Are Flying Blind. Here's a Pre-Flight Check.</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Sat, 28 Feb 2026 14:14:15 +0000</pubDate>
      <link>https://dev.to/unlikefraction/your-ai-agents-are-flying-blind-heres-a-pre-flight-check-206e</link>
      <guid>https://dev.to/unlikefraction/your-ai-agents-are-flying-blind-heres-a-pre-flight-check-206e</guid>
      <description>&lt;p&gt;Your agent hits a 403. It retries. Gets a CAPTCHA. Retries again. Finally gets some HTML back — 200KB of JavaScript-rendered noise it burns 4,000 tokens trying to parse before concluding it can't extract anything useful.&lt;/p&gt;

&lt;p&gt;You've been there. I've been there. Every agent builder has been there.&lt;/p&gt;

&lt;p&gt;The problem isn't your agent. It's that &lt;strong&gt;there's no way to know if a website will cooperate before your agent wastes time and tokens on it.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The Spectrum of Agent-Friendliness&lt;/h2&gt;

&lt;p&gt;Not all websites treat agents the same. Some have structured APIs, machine-readable docs, and &lt;code&gt;llms.txt&lt;/code&gt; files. Others serve you a Cloudflare challenge page and call it a day.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt; rates 834+ websites on a scale from L0 to L5:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;What it means&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Hostile — blocks agents outright&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Passive — no agent support, you're scraping&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Basic — has an API, but docs are human-only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Structured — good APIs, machine-readable content&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Verified — strong agent support across 30 criteria&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agent-native — built for programmatic access first&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each rating is based on 30 concrete criteria: structured error responses, rate limit headers, auth documentation, machine-readable pricing, &lt;code&gt;llms.txt&lt;/code&gt; support, and more.&lt;/p&gt;

&lt;h2&gt;Wire It Into Your Agent in 5 Minutes&lt;/h2&gt;

&lt;p&gt;Silicon Friendly exposes an &lt;a href="https://siliconfriendly.com/mcp" rel="noopener noreferrer"&gt;MCP server&lt;/a&gt; your agents can query at runtime. If you're using &lt;strong&gt;CrewAI&lt;/strong&gt; (which supports MCP natively), here's the setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;crewai crewai-tools[mcp]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;crewai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Crew&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;crewai_tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MCPServerAdapter&lt;/span&gt;

&lt;span class="n"&gt;sf_server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://siliconfriendly.com/mcp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;transport&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;streamable_http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;MCPServerAdapter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sf_server&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;connect_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;scout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Integration Scout&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;goal&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Evaluate services for agent compatibility before integration&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;backstory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You check whether websites and APIs are built for AI agents. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You use Silicon Friendly to look up agent-friendliness ratings &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;and search for alternatives when a service scores poorly.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;We need to integrate a payment processor. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Check if stripe.com is agent-friendly. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Then search for other payment processors and compare.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;expected_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Stripe&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s agent-friendliness level, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;alternatives with their levels, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;and a recommendation.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;scout&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;crew&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Crew&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;scout&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;tasks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;crew&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;kickoff&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent calls &lt;code&gt;check_agent_friendliness&lt;/code&gt; on stripe.com (L4), then &lt;code&gt;search_websites&lt;/code&gt; for "payment processing" to find alternatives like Razorpay (L5) and Square Developer (L5). It makes an informed recommendation instead of guessing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Tools Available via MCP
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;check_agent_friendliness&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Quick L0-L5 check for any domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;search_websites&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Semantic search by use case ("email API", "cloud storage")&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;get_website&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Full 30-criteria breakdown for a domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;get_levels_info&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Explains the rating system&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;No auth required for reads. The MCP endpoint is &lt;code&gt;https://siliconfriendly.com/mcp&lt;/code&gt;, served over the Streamable HTTP transport.&lt;/p&gt;

&lt;h2&gt;
  
  
  Works with smolagents Too
&lt;/h2&gt;

&lt;p&gt;If you're using HuggingFace's &lt;strong&gt;smolagents&lt;/strong&gt; instead of CrewAI, the same MCP server works — smolagents has native MCP support. Any framework that speaks MCP can connect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Advice
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;L3+ is the sweet spot.&lt;/strong&gt; Anything rated L3 or above has structured data, clear APIs, and machine-readable content. Below that, your agent is scraping HTML and guessing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use search instead of hardcoding.&lt;/strong&gt; Instead of assuming which service to use, let your agent search for "email API" or "cloud storage" and pick the highest-rated option. The directory has 834+ websites indexed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check the details when it matters.&lt;/strong&gt; &lt;code&gt;get_website&lt;/code&gt; returns the full 30-criteria breakdown — you'll see exactly what works (structured error responses, rate limit headers) and what doesn't (no machine-readable pricing, no agent auth flow).&lt;/p&gt;

&lt;h2&gt;
  
  
  Check Your Own Stack
&lt;/h2&gt;

&lt;p&gt;Before your next agent build, check whether the services you depend on will actually cooperate:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://siliconfriendly.com/llms.txt" rel="noopener noreferrer"&gt;siliconfriendly.com/llms.txt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If a site you use isn't rated yet, you can submit it through the MCP server or the website. You get bonus API queries for each site you verify.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Silicon Friendly is an open directory. The MCP server is free to use. Registry entry: &lt;code&gt;com.siliconfriendly/directory&lt;/code&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>mcp</category>
      <category>agents</category>
    </item>
    <item>
      <title>How AI agents browse the web (and why your site might be invisible to them)</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Fri, 27 Feb 2026 07:53:49 +0000</pubDate>
      <link>https://dev.to/unlikefraction/how-ai-agents-browse-the-web-and-why-your-site-might-be-invisible-to-them-3k82</link>
      <guid>https://dev.to/unlikefraction/how-ai-agents-browse-the-web-and-why-your-site-might-be-invisible-to-them-3k82</guid>
      <description>&lt;h1&gt;
  
  
  How AI agents browse the web (and why your site might be invisible to them)
&lt;/h1&gt;

&lt;p&gt;Most sites are built for humans sitting in front of a browser. That's fine, except AI agents aren't humans, and they don't use browsers.&lt;/p&gt;

&lt;p&gt;When an agent needs to interact with your product, it sends an HTTP request, reads the response, and tries to extract something useful. No JavaScript engine, no mouse clicks, no scroll events. If your content only exists after a React component mounts, the agent sees an empty shell. If your site is behind Cloudflare's bot challenge, the agent never gets past the waiting room.&lt;/p&gt;

&lt;h2&gt;
  
  
  How an agent actually 'sees' a website
&lt;/h2&gt;

&lt;p&gt;The sequence is roughly this: HTTP GET to a URL, parse the HTML or JSON response, check for an &lt;code&gt;llms.txt&lt;/code&gt; file at the root, and look for any documented API it can call. That's the whole pipeline.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;llms.txt&lt;/code&gt; is a simple convention, similar in spirit to &lt;code&gt;robots.txt&lt;/code&gt;, that gives agents a curated map of what your site contains and how to use it. If you haven't heard of it, that's part of the problem. Most sites don't have one. Agents fall back to guessing, and guessing usually fails.&lt;/p&gt;
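
&lt;p&gt;To make that pipeline concrete, here's a minimal sketch of how an agent might turn a fetched &lt;code&gt;llms.txt&lt;/code&gt; file into a navigable site map. Plain stdlib Python; the file content and URLs are hypothetical.&lt;/p&gt;

```python
import re

# Hypothetical llms.txt content an agent might fetch from a site root.
LLMS_TXT = """\
# Acme Analytics

> Acme Analytics helps SaaS teams track product usage.

## Docs
- [Getting started](https://acme.com/docs/start): Install and configure
- [API reference](https://acme.com/docs/api): Full API documentation

## About
- [Pricing](https://acme.com/pricing): Plans and pricing
"""

def parse_llms_txt(text):
    """Parse llms.txt into {section: [(title, url, note), ...]}."""
    sections, current = {}, None
    link = re.compile(r"^- \[(.+?)\]\((\S+?)\)(?::\s*(.*))?$")
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current and (m := link.match(line.strip())):
            sections[current].append((m.group(1), m.group(2), m.group(3) or ""))
    return sections

site_map = parse_llms_txt(LLMS_TXT)
print(sorted(site_map))        # ['About', 'Docs']
print(site_map["Docs"][0][1])  # https://acme.com/docs/start
```

&lt;p&gt;A dozen lines of parsing is the entire "discovery" step when the file exists. Without it, the agent is left guessing URL structures.&lt;/p&gt;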

&lt;h2&gt;
  
  
  The three ways it breaks
&lt;/h2&gt;

&lt;p&gt;In our data, agent failures almost always fall into one of three categories.&lt;/p&gt;

&lt;p&gt;Access problems come first. Your &lt;code&gt;robots.txt&lt;/code&gt; blocks known AI crawlers. Cloudflare fingerprints the client and serves a challenge page instead of content. Some sites block &lt;code&gt;ClaudeBot&lt;/code&gt; in &lt;code&gt;robots.txt&lt;/code&gt; while simultaneously using the Claude API in their own backend. That's incoherent, and it's more common than you'd expect.&lt;/p&gt;

&lt;p&gt;Parsing problems come second. The page loads fine, technically, but all the meaningful content is rendered client-side. A simple HTTP client gets back a &lt;code&gt;&amp;lt;div id="root"&amp;gt;&amp;lt;/div&amp;gt;&lt;/code&gt; and nothing else. Navigation, pricing, documentation, none of it visible. 50.4% of the sites we've analyzed are in this state: discoverable, but useless to an agent.&lt;/p&gt;

&lt;p&gt;Action problems come third. Even if an agent can read your site, there's nothing it can actually do. No API. No structured way to initiate a trial, submit a form, or retrieve account data. The agent can look, but not touch.&lt;/p&gt;

&lt;h2&gt;
  
  
  The spectrum from broken to useful
&lt;/h2&gt;

&lt;p&gt;Think of it as levels. At L0, your site is actively blocking agents. At L1, it's accessible but the content is unreadable. L2 is readable but static, no way to act. L3 means you have an API, but it's not documented in a way agents can discover. L4 is a documented, agent-accessible API with an &lt;code&gt;llms.txt&lt;/code&gt; pointing to it. L5 adds things like OAuth flows and structured error messages that make autonomous operation reliable.&lt;/p&gt;

&lt;p&gt;Only 4.2% of sites we've seen are at L4 or above. Most are stuck at L2.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you can add in a week
&lt;/h2&gt;

&lt;p&gt;None of this requires a rewrite. A few specific additions get you most of the way.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add or update &lt;code&gt;robots.txt&lt;/code&gt; to allow the major AI crawlers: &lt;code&gt;GPTBot&lt;/code&gt;, &lt;code&gt;ClaudeBot&lt;/code&gt;, &lt;code&gt;PerplexityBot&lt;/code&gt;, &lt;code&gt;Google-Extended&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create &lt;code&gt;llms.txt&lt;/code&gt; at your domain root with a plain-language description of what your product does and links to your API docs&lt;/li&gt;
&lt;li&gt;Add JSON-LD structured data to your homepage, pricing page, and any key product pages; the &lt;code&gt;Product&lt;/code&gt;, &lt;code&gt;SoftwareApplication&lt;/code&gt;, and &lt;code&gt;WebPage&lt;/code&gt; schemas all work&lt;/li&gt;
&lt;li&gt;Make sure your core documentation is server-rendered or available as static HTML, not just client-side rendered&lt;/li&gt;
&lt;li&gt;If you have a public API, document it in OpenAPI format and link to it from &lt;code&gt;llms.txt&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
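
&lt;p&gt;The JSON-LD item is the one people most often skip, so here's a sketch of what it involves. The product details below are placeholders; the output is the JSON you'd embed in a &lt;code&gt;&amp;lt;script type="application/ld+json"&amp;gt;&amp;lt;/script&amp;gt;&lt;/code&gt; tag in your page head.&lt;/p&gt;

```python
import json

# Hypothetical product details; swap in your own.
jsonld = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "Acme Analytics",
    "applicationCategory": "BusinessApplication",
    "description": "Product usage analytics for SaaS teams.",
    "offers": {
        "@type": "Offer",
        "price": "49.00",
        "priceCurrency": "USD",
    },
    "url": "https://acme.com",
}

# Serialize for embedding in the page head.
print(json.dumps(jsonld, indent=2))
```

&lt;p&gt;If your pages are templated, generating this server-side per page takes minutes and gives agents machine-readable pricing and product facts for free.&lt;/p&gt;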

&lt;p&gt;These are not difficult changes. Most can be done in an afternoon. The payoff is that agents can actually use your product when they're trying to help someone who needs what you offer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Check where you stand
&lt;/h2&gt;

&lt;p&gt;If you want to see how your site scores across these levels, run it through &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;siliconfriendly.com&lt;/a&gt;. It'll tell you exactly which layer you're blocked at and what to fix.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://siliconfriendly.com/blog/how-ai-agents-browse-the-web/" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>webdev</category>
      <category>api</category>
    </item>
    <item>
      <title>Is GPTBot blocked on your site? Here's what that means for AI agents</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Fri, 27 Feb 2026 07:53:14 +0000</pubDate>
      <link>https://dev.to/unlikefraction/is-gptbot-blocked-on-your-site-heres-what-that-means-for-ai-agents-56l</link>
      <guid>https://dev.to/unlikefraction/is-gptbot-blocked-on-your-site-heres-what-that-means-for-ai-agents-56l</guid>
      <description>&lt;h1&gt;
  
  
  Is GPTBot blocked on your site? How to check and fix it in 5 minutes
&lt;/h1&gt;

&lt;p&gt;Right now, at least a dozen AI crawlers are probably hitting your site. Some are scraping your content to train models. Others are fetching pages in real time so ChatGPT or Perplexity can answer questions about you. Most site owners have no idea which ones they're allowing and which ones they've accidentally blocked.&lt;/p&gt;

&lt;p&gt;Your &lt;code&gt;robots.txt&lt;/code&gt; file controls all of this. And there's a good chance yours is either wide open or blocking the wrong things.&lt;/p&gt;

&lt;p&gt;Here's how to check and fix it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The bots you should know about
&lt;/h2&gt;

&lt;p&gt;Not all AI crawlers do the same thing. Some collect training data. Others fetch your pages live when a user asks a question. That difference matters, because you might want to block one but not the other.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training bots&lt;/strong&gt; scrape your content to build or fine-tune AI models. Your pages get absorbed into the model's weights. You don't get a link, attribution, or traffic in return.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browsing bots&lt;/strong&gt; fetch your pages in real time when someone asks an AI tool a question. These work more like search engines - they can send you traffic, and they usually link back to your site.&lt;/p&gt;

&lt;p&gt;Here's the list that actually matters:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Bot&lt;/th&gt;
&lt;th&gt;Company&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;GPTBot&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;Training data collection + powers ChatGPT's web browsing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OAI-SearchBot&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;Fetches pages for SearchGPT results&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ChatGPT-User&lt;/td&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;ChatGPT browsing plugin, real-time page fetches&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ClaudeBot&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Training data collection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;anthropic-ai&lt;/td&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;Older training crawler, still active&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Google-Extended&lt;/td&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Gemini training data. This is NOT Googlebot - blocking it won't affect your search rankings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Googlebot&lt;/td&gt;
&lt;td&gt;Google&lt;/td&gt;
&lt;td&gt;Regular search indexing. Don't block this unless you want to disappear from Google&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CCBot&lt;/td&gt;
&lt;td&gt;Common Crawl&lt;/td&gt;
&lt;td&gt;Open dataset used by many AI labs for training&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bytespider&lt;/td&gt;
&lt;td&gt;ByteDance&lt;/td&gt;
&lt;td&gt;TikTok's parent company, training data collection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PerplexityBot&lt;/td&gt;
&lt;td&gt;Perplexity AI&lt;/td&gt;
&lt;td&gt;Real-time search, links back to sources&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Applebot-Extended&lt;/td&gt;
&lt;td&gt;Apple&lt;/td&gt;
&lt;td&gt;Apple Intelligence training. Separate from the regular Applebot used for Siri/Spotlight&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;GPTBot gets the most attention, but it's only one of many. If you block GPTBot and leave CCBot wide open, your content is still ending up in training datasets.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to check what you're currently blocking
&lt;/h2&gt;

&lt;p&gt;Open your browser and go to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://yourdomain.com/robots.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. The file is public. You'll see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User-agent: *
Disallow: /admin/
Disallow: /private/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you don't see any AI bot names in there, you're allowing all of them. If you see &lt;code&gt;User-agent: GPTBot&lt;/code&gt; with a &lt;code&gt;Disallow: /&lt;/code&gt;, that one is blocked.&lt;/p&gt;

&lt;p&gt;Most sites I check fall into one of two buckets: they either block nothing, or they block GPTBot specifically because someone read a headline about it in 2023 and forgot about the other ten crawlers doing the exact same thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training vs. browsing - why it matters
&lt;/h2&gt;

&lt;p&gt;Here's the decision most people skip: do you actually want AI tools to be able to reference your content?&lt;/p&gt;

&lt;p&gt;If you run a recipe blog and someone asks ChatGPT "how do I make sourdough starter," you probably want ChatGPT-User and OAI-SearchBot to be able to fetch your page and link to it. That's traffic. What you probably don't want is GPTBot scraping your entire archive so the model can answer sourdough questions without ever sending anyone to your site.&lt;/p&gt;

&lt;p&gt;The distinction is between bots that take your content for training and bots that bring users to your content in real time.&lt;/p&gt;

&lt;p&gt;Block the training bots. Think twice before blocking the browsing ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Copy-paste robots.txt configs
&lt;/h2&gt;

&lt;p&gt;Pick the scenario that fits your situation. Add the relevant lines to the end of your existing &lt;code&gt;robots.txt&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option A: Block AI training, allow AI search (recommended for most sites)
&lt;/h3&gt;

&lt;p&gt;This blocks bots that scrape for training while still letting AI search tools reference your pages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: Applebot-Extended
Disallow: /

# Allow AI search/browsing bots
User-agent: OAI-SearchBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option B: Block everything AI-related
&lt;/h3&gt;

&lt;p&gt;If you don't want any AI system crawling your site, training or otherwise:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Block all AI crawlers
User-agent: GPTBot
Disallow: /

User-agent: OAI-SearchBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Applebot-Extended
Disallow: /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option C: Allow everything
&lt;/h3&gt;

&lt;p&gt;If you want maximum AI visibility and don't mind your content being used for training:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Allow all AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: CCBot
Allow: /

User-agent: Google-Extended
Allow: /
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your &lt;code&gt;robots.txt&lt;/code&gt; doesn't mention a bot at all, it's allowed by default. So Option C is really just making your intent explicit.&lt;/p&gt;
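
&lt;p&gt;You can verify how any of these configs will actually be interpreted without deploying anything, using Python's stdlib &lt;code&gt;urllib.robotparser&lt;/code&gt;. A quick sketch against a fragment of the Option A rules (the page URL is hypothetical):&lt;/p&gt;

```python
from urllib.robotparser import RobotFileParser

# A fragment of the Option A rules, parsed from a string (no network).
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

page = "https://example.com/blog/post"
print(rp.can_fetch("GPTBot", page))         # False: explicitly disallowed
print(rp.can_fetch("PerplexityBot", page))  # True: explicitly allowed
print(rp.can_fetch("CCBot", page))          # True: unmentioned bots default to allowed
```

&lt;p&gt;The last line demonstrates the default-allow behavior: a bot you never mention is treated as permitted, which is exactly why Option C is just making intent explicit.&lt;/p&gt;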

&lt;h2&gt;
  
  
  The catch: robots.txt is voluntary
&lt;/h2&gt;

&lt;p&gt;robots.txt is a protocol, not a firewall. It's a polite request. Well-behaved bots respect it. The major companies - OpenAI, Google, Anthropic, Apple - do follow robots.txt directives. But nothing technically prevents a crawler from ignoring the file entirely.&lt;/p&gt;

&lt;p&gt;Some lesser-known crawlers don't check robots.txt at all. If you need hard enforcement, you'll need server-side blocking by user-agent string or IP range. Cloudflare and several other CDNs offer bot management tools that can handle this. But for most sites, robots.txt covers the crawlers that matter.&lt;/p&gt;
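
&lt;p&gt;If you do want application-layer enforcement rather than a CDN product, one option is a small middleware that rejects requests by user-agent string. Here's a sketch as WSGI middleware; the blocklist and inner app are illustrative, and real crawlers can of course spoof their user agent.&lt;/p&gt;

```python
# Sketch: hard enforcement by user-agent string via WSGI middleware.
# Blocklist and inner app are illustrative only.
BLOCKED_AGENTS = ("GPTBot", "CCBot", "Bytespider")

def block_ai_crawlers(app):
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling not permitted.\n"]
        return app(environ, start_response)
    return middleware

# Minimal inner app for demonstration.
def hello(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello\n"]

app = block_ai_crawlers(hello)

# Quick check without a server: fake a request environ.
captured = []
app({"HTTP_USER_AGENT": "Mozilla/5.0 (compatible; GPTBot/1.1)"},
    lambda status, headers: captured.append(status))
print(captured[0])  # 403 Forbidden
```

&lt;p&gt;The same substring check works in nginx or any reverse proxy; the point is that enforcement lives in code you control, not in a request the crawler may ignore.&lt;/p&gt;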

&lt;h2&gt;
  
  
  Blocking is step one. Being agent-friendly is the bigger move.
&lt;/h2&gt;

&lt;p&gt;Controlling which bots can access your site is the defensive play. It's worth doing, but it's only part of the picture.&lt;/p&gt;

&lt;p&gt;The sites that will get the most out of AI - not just search engines, but AI agents that browse, compare, and buy on behalf of users - are the ones that make themselves easy for machines to understand. That means structured data, clean APIs, and files like &lt;code&gt;llms.txt&lt;/code&gt; that tell AI systems what your site is about and how to use it.&lt;/p&gt;

&lt;p&gt;If you want to see where your site stands on all of this, &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt; rates websites across 30 criteria on a scale from L0 to L5 for agent-friendliness. It checks robots.txt configuration, but also structured data, API availability, and a lot more. Worth a look if you've just fixed your robots.txt and want to know what else you might be missing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick checklist
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Visit &lt;code&gt;yourdomain.com/robots.txt&lt;/code&gt; right now&lt;/li&gt;
&lt;li&gt;Check which AI bots are mentioned (probably none or just GPTBot)&lt;/li&gt;
&lt;li&gt;Decide: do you want to block training, browsing, or both?&lt;/li&gt;
&lt;li&gt;Copy the relevant config above and add it to your file&lt;/li&gt;
&lt;li&gt;Verify the change by revisiting the URL&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Five minutes. That's all it takes to go from "I have no idea what's crawling my site" to having a clear, intentional policy about it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://siliconfriendly.com/blog/gptbot-blocked-what-it-means/" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>webdev</category>
      <category>marketing</category>
    </item>
    <item>
      <title>What is llms.txt and does your SaaS website need one?</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Fri, 27 Feb 2026 07:52:38 +0000</pubDate>
      <link>https://dev.to/unlikefraction/what-is-llmstxt-and-does-your-saas-website-need-one-1mjh</link>
      <guid>https://dev.to/unlikefraction/what-is-llmstxt-and-does-your-saas-website-need-one-1mjh</guid>
      <description>&lt;h1&gt;
  
  
  What is llms.txt and does your SaaS website need one?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What is llms.txt
&lt;/h2&gt;

&lt;p&gt;llms.txt is a proposed web standard for telling AI agents about your website. You put a plain-text markdown file at &lt;code&gt;yourdomain.com/llms.txt&lt;/code&gt;, and it describes your site's content hierarchy in a format that's easy for language models to parse. Think of it as a plain-language map of your site, written for machines that read, not crawl.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it differs from robots.txt
&lt;/h2&gt;

&lt;p&gt;robots.txt is about permissions. It tells crawlers what they're allowed to access. llms.txt is about structure and meaning. It doesn't block or allow anything. It just gives AI agents context: what your site is, what pages matter, and how things are organized. Two different jobs.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it actually looks like
&lt;/h2&gt;

&lt;p&gt;Here's a minimal example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Acme Analytics

&amp;gt; Acme Analytics helps SaaS teams track product usage and reduce churn.

## Docs
- [Getting started](https://acme.com/docs/start): How to install and configure Acme
- [API reference](https://acme.com/docs/api): Full API documentation
- [Integrations](https://acme.com/docs/integrations): Connect with Segment, Mixpanel, and others

## About
- [Pricing](https://acme.com/pricing): Plans and pricing
- [Blog](https://acme.com/blog): Product updates and guides
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. A short description, then organized links with one-line summaries. No special syntax to learn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does it actually change anything for AI agents today
&lt;/h2&gt;

&lt;p&gt;Somewhat. Claude has adopted the standard and uses llms.txt when it's present. Other major models are catching up but aren't there yet. So right now you're mostly optimizing for Claude users, which is a real and growing segment. The spec is being actively developed, and adoption is moving. I'd rather have it in place now than scramble later.&lt;/p&gt;

&lt;p&gt;At Silicon Friendly, we've analyzed 832 websites and only 30.8% have llms.txt. That's a low bar to clear, and clearing it matters for how AI agents interact with your product.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create one
&lt;/h2&gt;

&lt;p&gt;Create a markdown file. Name it &lt;code&gt;llms.txt&lt;/code&gt;. Put it at the root of your domain so it's accessible at &lt;code&gt;yourdomain.com/llms.txt&lt;/code&gt;. That's the whole process. Write a one-sentence description of what your product does, then list your most important pages with short labels. Twenty minutes, tops.&lt;/p&gt;
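
&lt;p&gt;If your page list lives in code or a CMS, you can generate the file instead of hand-writing it. A sketch with placeholder names and URLs:&lt;/p&gt;

```python
# Sketch: generate llms.txt from a dict of sections.
# All names and URLs here are placeholders.
SITE = {
    "name": "Acme Analytics",
    "summary": "Acme Analytics helps SaaS teams track product usage and reduce churn.",
    "sections": {
        "Docs": [
            ("Getting started", "https://acme.com/docs/start", "Install and configure Acme"),
            ("API reference", "https://acme.com/docs/api", "Full API documentation"),
        ],
        "About": [
            ("Pricing", "https://acme.com/pricing", "Plans and pricing"),
        ],
    },
}

def render_llms_txt(site):
    lines = [f"# {site['name']}", "", f"> {site['summary']}", ""]
    for section, links in site["sections"].items():
        lines.append(f"## {section}")
        for title, url, note in links:
            lines.append(f"- [{title}]({url}): {note}")
        lines.append("")
    return "\n".join(lines)

print(render_llms_txt(SITE))
```

&lt;p&gt;Wire this into your build or deploy step and the file stays in sync with your docs instead of going stale.&lt;/p&gt;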

&lt;h2&gt;
  
  
  Should your SaaS add one
&lt;/h2&gt;

&lt;p&gt;Yes. If you want AI agents, Claude-powered tools, or any AI assistant to understand your product correctly when someone asks about it, you want llms.txt. It costs nothing. It takes under an hour. And it's one of the cleaner signals you can send that your site is built for how people actually use software in 2025 and beyond.&lt;/p&gt;

&lt;p&gt;Adding llms.txt moves a site from L1 to potentially L2 on our AI-agent compatibility taxonomy. It's a small file that does real work.&lt;/p&gt;




&lt;p&gt;Want to see where your site stands? Check your AI-agent compatibility score at &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;siliconfriendly.com&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://siliconfriendly.com/blog/what-is-llmstxt-does-your-saas-website-need-one/" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>webdev</category>
      <category>startup</category>
    </item>
    <item>
      <title>robots.txt vs llms.txt: What's the difference and why it matters</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Fri, 27 Feb 2026 07:52:03 +0000</pubDate>
      <link>https://dev.to/unlikefraction/robotstxt-vs-llmstxt-whats-the-difference-and-why-it-matters-5f4o</link>
      <guid>https://dev.to/unlikefraction/robotstxt-vs-llmstxt-whats-the-difference-and-why-it-matters-5f4o</guid>
      <description>&lt;h1&gt;
  
  
  robots.txt vs llms.txt: what's the difference and which do you need?
&lt;/h1&gt;

&lt;p&gt;Your website talks to two audiences now: search engine crawlers and large language models. They want different things, they read content differently, and the files that serve them have almost nothing in common.&lt;/p&gt;

&lt;p&gt;Most developers know robots.txt. Fewer know llms.txt. Almost nobody understands why you need both.&lt;/p&gt;

&lt;h2&gt;
  
  
  robots.txt: the bouncer
&lt;/h2&gt;

&lt;p&gt;robots.txt has been around since 1994. Martijn Koster, a Dutch engineer, proposed it after web crawlers kept hammering his server. The idea was simple: put a text file at your site root that tells bots which pages they can and can't visit.&lt;/p&gt;

&lt;p&gt;For 28 years it existed as an informal standard: everyone followed it, but nobody had formally ratified it. The IETF finally published RFC 9309 in September 2022, making it official.&lt;/p&gt;

&lt;p&gt;A typical robots.txt in 2025 looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User-agent: Googlebot
Allow: /

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's a bouncer. It says "you can come in" or "you can't." That's it. No nuance, no context, no guidance. And compliance is voluntary. There is no enforcement mechanism. Bots follow it because they choose to.&lt;/p&gt;

&lt;p&gt;The AI era has turned robots.txt into a battlefield. As of late 2025, 79% of top news sites block AI training bots. GPTBot went from zero mentions to 578,000 websites in about 15 months. ChatGPT-User requests surged 2,825% in a single year.&lt;/p&gt;

&lt;p&gt;Blocking a crawler is not the same as helping one. And that's where robots.txt runs out of ideas.&lt;/p&gt;

&lt;h2&gt;
  
  
  llms.txt: the tour guide
&lt;/h2&gt;

&lt;p&gt;In September 2024, Jeremy Howard (co-founder of Answer.AI and fast.ai) published a proposal for a new file: llms.txt. The idea came from a specific frustration. LLMs increasingly need to use website content at inference time, when they're actively generating a response. Not for training. For answering questions right now.&lt;/p&gt;

&lt;p&gt;And they're terrible at it.&lt;/p&gt;

&lt;p&gt;Web pages are built for humans with browsers. They have nav bars, footers, ad scripts, cookie banners, tracking pixels, JavaScript bundles. When an LLM tries to read a typical HTML page, most of the tokens go to junk. Converting HTML to clean text can reduce token consumption by 20-30%, but the conversion itself is unreliable.&lt;/p&gt;

&lt;p&gt;llms.txt solves this differently. Instead of telling bots what to avoid, it tells them what to read. It's a Markdown file at your site root that curates the most important content and provides clean, structured links.&lt;/p&gt;

&lt;p&gt;A well-structured one looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Acme API&lt;/span&gt;
&lt;span class="gt"&gt;
&amp;gt; Cloud infrastructure API for deploying and managing containers.&lt;/span&gt;

Acme API provides RESTful endpoints for container orchestration,
monitoring, and scaling. Supports Docker and OCI images.

&lt;span class="gu"&gt;## Documentation&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Getting Started&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://acme.dev/docs/quickstart&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: Set up your first container in 5 minutes
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;API Reference&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://acme.dev/docs/api&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: Complete endpoint documentation
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Authentication&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://acme.dev/docs/auth&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: API keys, OAuth, and JWT setup

&lt;span class="gu"&gt;## Guides&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Scaling Guide&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://acme.dev/guides/scaling&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: Auto-scaling configuration
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Migration from AWS&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://acme.dev/guides/aws-migration&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: Step-by-step migration

&lt;span class="gu"&gt;## Optional&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Changelog&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://acme.dev/changelog&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: Release notes
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;Status Page&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;https://status.acme.dev&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;: Current system status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The format is simple. An H1 with your project name (the only required element). A blockquote summary. Then H2 sections with curated links and descriptions. The "Optional" section marks stuff that can be skipped when context windows are tight.&lt;/p&gt;

&lt;p&gt;The spec also encourages companion files: an &lt;code&gt;llms-full.txt&lt;/code&gt; containing all your documentation in one Markdown file, and &lt;code&gt;.md&lt;/code&gt; mirrors of individual pages. Anthropic's docs site, for example, has an &lt;code&gt;llms-full.txt&lt;/code&gt; that runs to 481,349 tokens.&lt;/p&gt;
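&lt;p&gt;Because the required structure is this small, you can sanity-check a file mechanically. Here is a minimal sketch (the function and its rules are my own distillation of the format described above, not an official validator):&lt;/p&gt;

```javascript
// Minimal llms.txt structure check: H1 first, a blockquote summary,
// and list links that carry descriptions. The rules are my own reading
// of the format described above, not an official validator.
function checkLlmsTxt(text) {
  const lines = text.split("\n").map((l) => l.trim()).filter(Boolean);
  const problems = [];
  if (!lines.length || !/^# \S/.test(lines[0])) {
    problems.push("file must start with an H1 project name");
  }
  if (!lines.some((l) => l.startsWith("> "))) {
    problems.push("missing blockquote summary");
  }
  for (const l of lines) {
    // Every "- [Title](url)" item should end with ": description"
    if (/^- \[/.test(l) && !/^- \[[^\]]+\]\([^)]+\): \S/.test(l)) {
      problems.push("link without description: " + l);
    }
  }
  return { ok: problems.length === 0, problems };
}
```

&lt;p&gt;Two of the common mistakes discussed below (a missing H1, bare link lists with no descriptions) are exactly what a check like this catches; the wrong-filename mistake you still have to catch yourself.&lt;/p&gt;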

&lt;h2&gt;
  
  
  Why LLMs need something different
&lt;/h2&gt;

&lt;p&gt;This is worth spelling out, because I keep seeing people ask "isn't sitemap.xml enough?"&lt;/p&gt;

&lt;p&gt;No. And the reason matters.&lt;/p&gt;

&lt;p&gt;A search crawler indexes content. It visits pages, extracts text, stores it in a database for later. The crawler has unlimited time and near-unlimited storage. It doesn't care about token efficiency. It can visit every page on your site and sort out relevance later.&lt;/p&gt;

&lt;p&gt;An LLM synthesizes content. It needs to ingest information, reason about it, and produce an answer, often in real time. Context windows are finite. Even Gemini's 1M token window can't hold a medium-sized documentation site. The model needs a curated view: what are the most important pages, what do they cover, and where are the clean Markdown versions?&lt;/p&gt;

&lt;p&gt;robots.txt can't express any of that. It's binary. Allow or disallow. It knows nothing about content priority, page relationships, or which resources matter most for understanding your product.&lt;/p&gt;

&lt;p&gt;robots.txt is the bouncer checking IDs at the door. llms.txt is the building directory that tells you accounting is on the third floor.&lt;/p&gt;

&lt;h2&gt;
  
  
  The adoption picture
&lt;/h2&gt;

&lt;p&gt;BuiltWith reports 844,000+ sites now have an llms.txt file. That sounds impressive until you dig in. Most implementations are bare-bones or broken. Common mistakes include wrong filenames (&lt;code&gt;llm.txt&lt;/code&gt; or &lt;code&gt;LLMS.txt&lt;/code&gt;), missing the required H1 heading, or just dumping a flat list of URLs with no descriptions.&lt;/p&gt;

&lt;p&gt;The sites doing it well tend to be developer-facing: Anthropic, Cloudflare, Stripe, Supabase, Cursor. They have extensive documentation that AI coding assistants actively consume. For them, llms.txt is already useful, because tools like Cursor and Windsurf actually read it to pull in documentation context.&lt;/p&gt;

&lt;p&gt;The skeptics have fair points too. Google's John Mueller compared llms.txt to the discredited &lt;code&gt;&amp;lt;meta name="keywords"&amp;gt;&lt;/code&gt; tag. SE Ranking studied 300,000 domains and found no correlation between having llms.txt and being cited in LLM answers. Then, in December 2025, Google quietly added an llms.txt to its own Search Central documentation. When an SEO professional pointed out the irony, Mueller responded with a cryptic "hmmn :-/".&lt;/p&gt;

&lt;p&gt;I think the truth is boring but practical: llms.txt matters right now for developer tools and documentation-heavy products, where AI coding assistants are the primary consumers. For a restaurant website or a local business, probably not worth the effort yet. But AI agents are getting better at using web content every month, and the sites that give them clean structured context will have an advantage when that usage becomes mainstream.&lt;/p&gt;

&lt;h2&gt;
  
  
  Do you need both?
&lt;/h2&gt;

&lt;p&gt;Yes. They solve entirely different problems.&lt;/p&gt;

&lt;p&gt;robots.txt controls access. Without it, you have no way to tell AI training crawlers to stay away from your content. If you care about that (and most publishers do), you need a robots.txt that specifically addresses GPTBot, ClaudeBot, Google-Extended, CCBot, and the growing list of AI crawlers.&lt;/p&gt;
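&lt;p&gt;Whether your current file actually names those crawlers is checkable in a few lines. A simplified sketch (it ignores wildcards and record-grouping rules, so it is nowhere near a full RFC 9309 parser):&lt;/p&gt;

```javascript
// Which AI crawlers does a robots.txt explicitly address?
// The bot list mirrors the crawlers named above; the parsing is
// deliberately simplified and not a full RFC 9309 implementation.
const AI_BOTS = ["GPTBot", "ClaudeBot", "Google-Extended", "CCBot", "Bytespider", "anthropic-ai"];

function addressedBots(robotsTxt) {
  const agents = new Set();
  for (const line of robotsTxt.split("\n")) {
    const m = line.match(/^\s*User-agent:\s*(\S+)/i);
    if (m) agents.add(m[1].toLowerCase());
  }
  return AI_BOTS.filter((bot) => agents.has(bot.toLowerCase()));
}

function missingBots(robotsTxt) {
  const found = new Set(addressedBots(robotsTxt));
  return AI_BOTS.filter((bot) => !found.has(bot));
}
```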

&lt;p&gt;llms.txt controls understanding. If you want AI systems to accurately represent your product, answer questions about your API, or help developers use your tools, llms.txt gives those systems the curated context they need.&lt;/p&gt;

&lt;p&gt;Blocking training crawlers while providing an llms.txt is not contradictory. You can say "don't scrape my site for training data" in robots.txt while also saying "here's the best way to understand my product" in llms.txt. One is about what bots take from you. The other is about what you give to them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where agent-friendliness goes beyond both files
&lt;/h2&gt;

&lt;p&gt;robots.txt and llms.txt are two pieces of a bigger puzzle. A truly agent-friendly website also considers structured data, API access, authentication flows for AI agents, and how well its pages render without JavaScript.&lt;/p&gt;

&lt;p&gt;At Silicon Friendly, we rate websites across 30 criteria on an L0-L5 scale, where L0 means "completely invisible to agents" and L5 means "built for agents from the ground up." Having both robots.txt and llms.txt properly configured is part of that picture, but far from the whole thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Checklist: making your site work for both crawlers and LLMs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;robots.txt&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;File exists at &lt;code&gt;yourdomain.com/robots.txt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Explicitly addresses AI training bots (GPTBot, ClaudeBot, Google-Extended, CCBot, Bytespider, anthropic-ai)&lt;/li&gt;
&lt;li&gt;Separates training bots from search/user bots (OAI-SearchBot and ChatGPT-User are different from GPTBot)&lt;/li&gt;
&lt;li&gt;Includes a Sitemap directive&lt;/li&gt;
&lt;li&gt;Review and update quarterly as new bots appear&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;llms.txt&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;File exists at &lt;code&gt;yourdomain.com/llms.txt&lt;/code&gt; (exact filename, lowercase)&lt;/li&gt;
&lt;li&gt;Starts with an H1 heading containing your site or project name&lt;/li&gt;
&lt;li&gt;Includes a blockquote summary explaining what you do&lt;/li&gt;
&lt;li&gt;Links are curated, not exhaustive. 10-30 of your most important pages, not 500&lt;/li&gt;
&lt;li&gt;Each link has a short description&lt;/li&gt;
&lt;li&gt;Uses an "Optional" section for secondary content&lt;/li&gt;
&lt;li&gt;Companion &lt;code&gt;llms-full.txt&lt;/code&gt; exists if you have extensive documentation&lt;/li&gt;
&lt;li&gt;Linked pages have &lt;code&gt;.md&lt;/code&gt; mirror versions&lt;/li&gt;
&lt;li&gt;File is reviewed when you add or remove major content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neither file is complicated. A developer can set up both in an afternoon. The hard part is curation: deciding which content matters most, writing useful descriptions, and keeping the file current as your site evolves. But that's what makes llms.txt valuable in the first place. Automated discovery is noisy. Human curation is signal.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://siliconfriendly.com/blog/robots-txt-vs-llms-txt/" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>webdev</category>
      <category>web</category>
    </item>
    <item>
      <title>We checked 832 websites for AI-agent compatibility. Here's what we found.</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Fri, 27 Feb 2026 07:51:27 +0000</pubDate>
      <link>https://dev.to/unlikefraction/we-checked-832-websites-for-ai-agent-compatibility-heres-what-we-found-34eh</link>
      <guid>https://dev.to/unlikefraction/we-checked-832-websites-for-ai-agent-compatibility-heres-what-we-found-34eh</guid>
      <description>&lt;h1&gt;
  
  
  We checked 832 websites for AI-agent compatibility. Here's what we found.
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Data pulled:&lt;/strong&gt; 2026-02-26 via siliconfriendly.com API (full dataset, 832 sites, 42 pages)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check where your site stands:&lt;/strong&gt; &lt;a href="https://siliconfriendly.com" rel="noopener noreferrer"&gt;siliconfriendly.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Submit your site, see your level, and see what's missing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Raw data — full 832-site dataset (2026-02-26)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Level distribution:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;L0: 123 sites (14.8%)&lt;/li&gt;
&lt;li&gt;L1: 178 sites (21.4%)&lt;/li&gt;
&lt;li&gt;L2: 419 sites (50.4%)&lt;/li&gt;
&lt;li&gt;L3: 77 sites (9.3%)&lt;/li&gt;
&lt;li&gt;L4: 28 sites (3.4%)&lt;/li&gt;
&lt;li&gt;L5: 7 sites (0.8%)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: 832 sites&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
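&lt;p&gt;The percentages follow directly from the counts; if you want to double-check the rounding yourself:&lt;/p&gt;

```javascript
// Recompute the level-distribution percentages from the raw counts above.
// Pure arithmetic sanity check; no external data involved.
const levels = { L0: 123, L1: 178, L2: 419, L3: 77, L4: 28, L5: 7 };
const total = Object.values(levels).reduce((a, b) => a + b, 0);

// Round each share to one decimal place, as in the list above.
const pct = Object.fromEntries(
  Object.entries(levels).map(([k, n]) => [k, Math.round((n / total) * 1000) / 10])
);
```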

&lt;p&gt;&lt;strong&gt;Criteria pass rates (all 832 sites):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;L1 — Accessible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clean_urls: 97.6%&lt;/li&gt;
&lt;li&gt;ssr_content: 93.6%&lt;/li&gt;
&lt;li&gt;meta_tags: 83.8%&lt;/li&gt;
&lt;li&gt;no_captcha: 82.7%&lt;/li&gt;
&lt;li&gt;semantic_html: 79.2%&lt;/li&gt;
&lt;li&gt;schema_org: 43.0%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;L2 — Discoverable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;text_content: 94.6%&lt;/li&gt;
&lt;li&gt;robots_txt: 84.6%&lt;/li&gt;
&lt;li&gt;sitemap: 78.8%&lt;/li&gt;
&lt;li&gt;documentation: 73.6%&lt;/li&gt;
&lt;li&gt;llms_txt: 30.8%&lt;/li&gt;
&lt;li&gt;openapi_spec: 8.3%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;L3 — API-ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;json_responses: 78.0%&lt;/li&gt;
&lt;li&gt;search_filter_api: 53.5%&lt;/li&gt;
&lt;li&gt;structured_api: 44.2%&lt;/li&gt;
&lt;li&gt;structured_errors: 17.7%&lt;/li&gt;
&lt;li&gt;rate_limits_documented: 14.5%&lt;/li&gt;
&lt;li&gt;a2a_agent_card: 6.1%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;L4 — Agent-native:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;write_api: 59.9%&lt;/li&gt;
&lt;li&gt;agent_auth: 51.0%&lt;/li&gt;
&lt;li&gt;mcp_server: 21.0%&lt;/li&gt;
&lt;li&gt;webhooks: 19.2%&lt;/li&gt;
&lt;li&gt;idempotency: 1.7%&lt;/li&gt;
&lt;li&gt;webmcp: 0.2%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;L5 — Agentic-first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;event_streaming: 79.0%&lt;/li&gt;
&lt;li&gt;workflow_orchestration: 47.6%&lt;/li&gt;
&lt;li&gt;subscription_api: 8.3%&lt;/li&gt;
&lt;li&gt;proactive_notifications: 2.6%&lt;/li&gt;
&lt;li&gt;cross_service_handoff: 1.8%&lt;/li&gt;
&lt;li&gt;agent_negotiation: 1.0%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note on event_streaming (79.0%) and workflow_orchestration (47.6%):&lt;/strong&gt; These high numbers reflect sites that check individual criteria boxes but don't qualify as L5 overall, because they fail other L4/L5 requirements. The level rating requires meeting a threshold across all criteria at each level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verified:&lt;/strong&gt; 832/832 (100%)&lt;br&gt;
&lt;strong&gt;Verification count:&lt;/strong&gt; 831 sites with 1 verification, 1 site with 2 verifications&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://siliconfriendly.com/blog/we-checked-832-websites-ai-agent-compatibility/" rel="noopener noreferrer"&gt;Silicon Friendly&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>startup</category>
      <category>automation</category>
    </item>
    <item>
      <title>The only thing you need to master React! (from my 5 years of experience)</title>
      <dc:creator>Shubham Gupta</dc:creator>
      <pubDate>Sat, 11 Nov 2023 14:52:32 +0000</pubDate>
      <link>https://dev.to/goodgit/the-only-thing-you-need-to-master-react-from-my-5-years-of-experience-1i6o</link>
      <guid>https://dev.to/goodgit/the-only-thing-you-need-to-master-react-from-my-5-years-of-experience-1i6o</guid>
      <description>&lt;p&gt;I've been working in and around React since the time it got popular. I've dug in every corner of it, from reading the source code to creating my own libraries to simplify the mess React can create at times.&lt;/p&gt;

&lt;p&gt;But this was all possible because of one thing I did, and even to this date, I continue to do it when I want to learn something at its core in the least time possible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create your own (mini) version of it.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why?
&lt;/h3&gt;

&lt;p&gt;React, at its core, is just a way to write HTML using JS. Why JS? Because it's a scripting language and you can write logic in it, while HTML is a declarative language and you can simply declare everything you want.&lt;/p&gt;

&lt;p&gt;React combines the two: declarative markup with real logic behind it. That's it. That's all React ever was, is, and will be.&lt;/p&gt;

&lt;h3&gt;
  
  
  What to do?
&lt;/h3&gt;

&lt;p&gt;I get it. The first thought of creating your own mini-version of React can seem both exciting and heart-pounding at the same time. Just like talking to your crush. But here, we will learn all about masterful flirting, i.e. how to masterfully create your own version of React.&lt;/p&gt;

&lt;h3&gt;
  
  
  Steps:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Let's stop calling it "your version of React".&lt;br&gt;
Let's call it "GoodAct".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Second, this is where you come in. I'm not going to give you any code, because reading my code won't make you "master" React. You'll have to open up VS Code and write your own. I will, however, do one thing for you: if you write your code, share the GitHub repo with me. I'll give it a try and feature some good ones in an article. And shoot me any questions if you're stuck somewhere. Let's get you unstuck.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;My email is &lt;a href="mailto:shubham@goodgit.io"&gt;shubham@goodgit.io&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Basics to understand
&lt;/h3&gt;

&lt;p&gt;To keep things simple, let's not use JSX. It's basically a really complex version of "replace": as in, replace this tag with this code, all the way down to HTML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Blog&lt;/span&gt; &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"Hello World"&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"/static/image.png"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;is replaced with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"blog"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hello World&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;image&lt;/span&gt; &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"/static/image.png"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What to build?
&lt;/h3&gt;

&lt;p&gt;Two things: a virtual DOM and state. That's it.&lt;/p&gt;

&lt;p&gt;To keep things simple and close to the core, everything lives in a single HTML file. GoodAct should understand the following weird-looking syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;GoodAct&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;{{ title }}&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;p&amp;gt;&lt;/span&gt;{{ body }}&lt;span class="nt"&gt;&amp;lt;/p&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"goodact.js"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;It should also handle state and expressions, like doubling a number as the user types:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;GoodAct&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;input&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"number"&lt;/span&gt; &lt;span class="na"&gt;value=&lt;/span&gt;&lt;span class="s"&gt;"{{ userN }}"&lt;/span&gt;&lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Your number times 2 is {{ userN * 2}}&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"goodact.js"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;And loops:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;html&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;GoodAct&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    {% for i in range(10) %}
      &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Let's count {{ i }}. &lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
    {% endfor %}
    &lt;span class="nt"&gt;&amp;lt;script &lt;/span&gt;&lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"goodact.js"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&amp;lt;/script&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. 😂 I know, I know. It's crazy. But I also know you can do it. It's not easy, and it will work every muscle in your brain, but by the end of it you'll be smarter, sharper, and a master. 😉&lt;/p&gt;

&lt;p&gt;Start on a weekend. Think about how you can build something like this. For those of you familiar with Django or Flask, it will feel a lot like Jinja templating, and you're right: that's where I drew my inspiration from.&lt;/p&gt;
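&lt;p&gt;If you want a single nudge on where to start (a hint at the very first step only, deliberately not a solution; re-rendering on state change, the virtual DOM diff, and the {% for %} loops are yours to figure out), the {{ }} interpolation can be as small as a regex replace:&lt;/p&gt;

```javascript
// Hint only: one-shot interpolation of {{ expr }} against a state object.
// A real GoodAct also needs re-rendering when state changes, a virtual DOM
// diff, and the {% for %} loops; none of that is here on purpose.
function interpolate(template, state) {
  return template.replace(/\{\{(.+?)\}\}/g, (_, expr) => {
    // Evaluate the expression with the state keys in scope.
    // new Function is unsafe eval: fine for a toy, never for production.
    const fn = new Function(...Object.keys(state), "return (" + expr + ");");
    return String(fn(...Object.values(state)));
  });
}
```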

&lt;p&gt;Once you can build a system like this, you'll know exactly how React works under the hood, and you will have a newfound appreciation for the tool you are so used to using.&lt;/p&gt;

&lt;p&gt;See you on the other side with your projects working. Shoot me an email if you need any help, just want to talk, or are excited to share what you built!&lt;/p&gt;

&lt;h3&gt;
  
  
  Also, also, also
&lt;/h3&gt;

&lt;p&gt;Use GoodGit to push your code to GitHub.&lt;br&gt;
Just install it with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;goodgit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gg add.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;GoodGit takes care of the rest. You don't even have to write a commit message. Just go and use it. You can read about it at &lt;a href="https://goodgit.io" rel="noopener noreferrer"&gt;https://goodgit.io&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>react</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
