<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Global Chat</title>
    <description>The latest articles on DEV Community by Global Chat (@globalchatads).</description>
    <link>https://dev.to/globalchatads</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3819889%2Fccf05642-8ad6-4acf-b490-f8b473bf5ca7.png</url>
      <title>DEV Community: Global Chat</title>
      <link>https://dev.to/globalchatads</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/globalchatads"/>
    <language>en</language>
    <item>
      <title>The IETF has 11 competing agent discovery drafts - here is how they compare</title>
      <dc:creator>Global Chat</dc:creator>
      <pubDate>Wed, 08 Apr 2026 07:16:42 +0000</pubDate>
      <link>https://dev.to/globalchatads/the-ietf-has-11-competing-agent-discovery-drafts-here-is-how-they-compare-4ep8</link>
      <guid>https://dev.to/globalchatads/the-ietf-has-11-competing-agent-discovery-drafts-here-is-how-they-compare-4ep8</guid>
      <description>&lt;p&gt;Right now, there are 11 different IETF drafts trying to answer the same question: how should AI agents discover and interact with websites?&lt;/p&gt;

&lt;p&gt;The front-runner, agents.txt, expires on April 10, 2026. Two days from now. And none of these proposals talk to each other.&lt;/p&gt;

&lt;p&gt;I spent the last week reading all 11 drafts. Here is what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is agents.txt?
&lt;/h2&gt;

&lt;p&gt;Think robots.txt, but for AI agents. Submitted by Srijal Poojari as draft-srijal-agents-policy-00, it proposes a file at &lt;code&gt;/.well-known/agents.txt&lt;/code&gt; where websites declare policies about agent interaction: which agents are allowed, what they can do, rate limits, authentication requirements.&lt;/p&gt;
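
&lt;p&gt;To make that concrete, here is roughly what consuming such a file looks like. The draft's directive names are not final, so the &lt;code&gt;User-Agent&lt;/code&gt;, &lt;code&gt;Allow&lt;/code&gt;, &lt;code&gt;Disallow&lt;/code&gt;, and &lt;code&gt;Rate-Limit&lt;/code&gt; keys below are illustrative placeholders, not the spec:&lt;/p&gt;

```python
# Minimal sketch of fetching and parsing an agents.txt policy file.
# The directive names ("User-Agent", "Allow", "Disallow", "Rate-Limit")
# are illustrative; the draft's final vocabulary may differ.
from urllib.request import urlopen

def fetch_agents_txt(origin):
    """Fetch /.well-known/agents.txt from a site, or None if absent."""
    try:
        with urlopen(origin + "/.well-known/agents.txt", timeout=5) as resp:
            return resp.read().decode("utf-8")
    except OSError:
        return None

def parse_agents_txt(text):
    """Parse simple 'Key: value' lines into per-agent policy dicts."""
    policies, current = [], {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if not line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "user-agent":                # each User-Agent opens a block
            current = {"user-agent": value}
            policies.append(current)
        elif current:
            current.setdefault(key, []).append(value)
    return policies
```

&lt;p&gt;The appeal is exactly this: a robots.txt-style format is parseable in twenty lines.&lt;/p&gt;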

&lt;p&gt;It is the simplest proposal on the table. That is probably why it got traction first.&lt;/p&gt;

&lt;h2&gt;
  
  
  All 11 drafts, compared
&lt;/h2&gt;

&lt;p&gt;Here are all 11 competing drafts, sorted by expiry date:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Draft&lt;/th&gt;
&lt;th&gt;Backer&lt;/th&gt;
&lt;th&gt;Mechanism&lt;/th&gt;
&lt;th&gt;Expiry&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;agents.txt&lt;/td&gt;
&lt;td&gt;Srijal Poojari&lt;/td&gt;
&lt;td&gt;File-based (/.well-known/)&lt;/td&gt;
&lt;td&gt;2026-04-10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ACDP&lt;/td&gt;
&lt;td&gt;Command Zero&lt;/td&gt;
&lt;td&gt;DNS-based capability records&lt;/td&gt;
&lt;td&gt;2026-06-30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;agents.md&lt;/td&gt;
&lt;td&gt;Community&lt;/td&gt;
&lt;td&gt;Markdown-based policy&lt;/td&gt;
&lt;td&gt;2026-07-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AID&lt;/td&gt;
&lt;td&gt;Community&lt;/td&gt;
&lt;td&gt;Decentralized identifiers&lt;/td&gt;
&lt;td&gt;2026-07-15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ADS&lt;/td&gt;
&lt;td&gt;Community&lt;/td&gt;
&lt;td&gt;Service discovery&lt;/td&gt;
&lt;td&gt;2026-07-30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ARDP&lt;/td&gt;
&lt;td&gt;Cisco/AGNTCY&lt;/td&gt;
&lt;td&gt;Resource discovery protocol&lt;/td&gt;
&lt;td&gt;2026-08-05&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWP&lt;/td&gt;
&lt;td&gt;Community&lt;/td&gt;
&lt;td&gt;.well-known/agent.json&lt;/td&gt;
&lt;td&gt;2026-08-10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;BANDAID&lt;/td&gt;
&lt;td&gt;Community&lt;/td&gt;
&lt;td&gt;Bridge protocol&lt;/td&gt;
&lt;td&gt;2026-08-20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ANS&lt;/td&gt;
&lt;td&gt;Solo.io&lt;/td&gt;
&lt;td&gt;DNS + PKI&lt;/td&gt;
&lt;td&gt;2026-09-01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent Networks&lt;/td&gt;
&lt;td&gt;AAIF&lt;/td&gt;
&lt;td&gt;Network mesh&lt;/td&gt;
&lt;td&gt;2026-09-15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP Server Cards&lt;/td&gt;
&lt;td&gt;MCP Community&lt;/td&gt;
&lt;td&gt;Capability metadata&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Eleven different answers to the same problem. Five use .well-known endpoints. Three use DNS. Two propose entirely new protocols. One is just markdown in a file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the April 10 expiry matters
&lt;/h2&gt;

&lt;p&gt;When agents.txt expires, the IETF removes it from the active draft list. Anyone who built against it loses their reference spec. The draft can be resubmitted, but that takes weeks, and there is no guarantee the author will bother.&lt;/p&gt;

&lt;p&gt;Meanwhile, every AI company is doing their own thing. OpenAI has one pattern. Anthropic uses another. Google does something else again. Without a standard, website operators cannot set reliable policies for agent interaction.&lt;/p&gt;

&lt;p&gt;Here is the part that makes this uncomfortable. The EU AI Act expects compliance with website policies. Upcoming US regulations point in the same direction. But compliance with what format? Nobody knows, because nobody agreed on one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three scenarios after expiry
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Renewal.&lt;/strong&gt; The author resubmits a revised draft. Probable, but slow. Fresh review cycles, new comment periods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AAIF takes over.&lt;/strong&gt; The Artificial Agents Interoperability Foundation (146 member companies) produces a unified proposal. Likely long-term, but we are looking at 12-18 months minimum.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fragmentation.&lt;/strong&gt; The industry splinters into 3-4 incompatible walled gardens. Companies adopt whichever draft their largest vendor ships.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Honestly? We are already in scenario 3. These are not just proposals sitting in a queue. Cisco is building ARDP into their agent infrastructure. Solo.io is shipping ANS. The MCP community has its own approach, and it is gaining adoption independently.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I would do if I were building an agent today
&lt;/h2&gt;

&lt;p&gt;Do not pick a winner. Implement agents.txt because it is the simplest and most referenced, but also:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parse &lt;code&gt;/.well-known/agent.json&lt;/code&gt; if it exists (covers AWP)&lt;/li&gt;
&lt;li&gt;Check DNS TXT records for capability declarations (covers ACDP and ANS)&lt;/li&gt;
&lt;li&gt;Support MCP Server Cards for capability metadata&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build a translation layer between formats. When a winner emerges (or when AAIF ships their unified spec), you swap the backend. Your agent keeps working either way.&lt;/p&gt;
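
&lt;p&gt;A translation layer can be small. The sketch below probes each mechanism in priority order and maps whatever it finds onto one internal shape; the &lt;code&gt;allowed&lt;/code&gt; and &lt;code&gt;rate_limit&lt;/code&gt; fields and the per-format checks are our own invention, not from any draft:&lt;/p&gt;

```python
# Sketch of a discovery translation layer: each probe returns the raw
# result of one mechanism, and normalize() maps it onto one internal
# policy shape. The "allowed"/"rate_limit" fields are placeholders we
# invented, not part of any of the 11 drafts.

def normalize(source, raw):
    """Collapse one discovery format into a common policy dict."""
    policy = {"source": source, "allowed": True, "rate_limit": None}
    if source == "agents.txt":
        policy["allowed"] = "Disallow: /" not in raw
    elif source == "agent.json":
        policy["allowed"] = raw.get("agents_allowed", True)
        policy["rate_limit"] = raw.get("rate_limit")
    elif source == "dns-txt":
        # e.g. a hypothetical TXT record like "agent-policy=deny"
        policy["allowed"] = "deny" not in raw
    return policy

def discover(probes):
    """Try each (source, probe) pair in priority order; first hit wins."""
    for source, probe in probes:
        raw = probe()
        if raw is not None:
            return normalize(source, raw)
    return {"source": None, "allowed": True, "rate_limit": None}
```

&lt;p&gt;Swapping the backend later means changing &lt;code&gt;normalize&lt;/code&gt;, not your agent.&lt;/p&gt;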

&lt;p&gt;The full comparison table, live countdown timer, and links to all 11 drafts are on &lt;a href="https://global-chat.io/experiments/ietf-expiry" rel="noopener noreferrer"&gt;the analysis page&lt;/a&gt;. I will keep updating it as drafts expire or get renewed.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>agents</category>
      <category>mcp</category>
    </item>
    <item>
      <title>Best AI Coding Tools in 2026: Ranked and Compared</title>
      <dc:creator>Global Chat</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:42:04 +0000</pubDate>
      <link>https://dev.to/globalchatads/best-ai-coding-tools-in-2026-ranked-and-compared-44n0</link>
      <guid>https://dev.to/globalchatads/best-ai-coding-tools-in-2026-ranked-and-compared-44n0</guid>
      <description>&lt;p&gt;AI coding tools have moved far beyond autocomplete. In 2026, we have fully autonomous coding agents that can plan, write, test, and deploy entire applications. Here's how the top tools stack up after we tested them all.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Claude Code
&lt;/h2&gt;

&lt;p&gt;Anthropic's CLI-based coding agent operates directly in your terminal, reading your entire codebase and making changes across multiple files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Deep codebase understanding, multi-file edits, test writing, git integration, 200K-token context window.&lt;br&gt;
&lt;strong&gt;Weakness:&lt;/strong&gt; Requires terminal comfort, no GUI.&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Experienced developers who want an AI pair programmer that understands the full project.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. GitHub Copilot
&lt;/h2&gt;

&lt;p&gt;The most widely adopted AI coding tool, integrated into VS Code, JetBrains, and Neovim. Powered by OpenAI models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Seamless IDE integration, real-time code suggestions, Copilot Chat for Q&amp;amp;A.&lt;br&gt;
&lt;strong&gt;Weakness:&lt;/strong&gt; Limited to single-file context in suggestions.&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Everyday coding in any language, especially for teams already on GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Cursor
&lt;/h2&gt;

&lt;p&gt;A fork of VS Code built around AI-first editing. Supports multiple AI models including Claude and GPT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Cmd+K inline editing, multi-file awareness, Composer mode for larger changes.&lt;br&gt;
&lt;strong&gt;Weakness:&lt;/strong&gt; Another editor to learn if you're happy with VS Code.&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Developers who want AI deeply integrated into their editing workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Devin
&lt;/h2&gt;

&lt;p&gt;Cognition's autonomous software engineer. Devin operates with its own browser, terminal, and editor to complete entire tasks independently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; True autonomy, can handle complex multi-step engineering tasks end-to-end.&lt;br&gt;
&lt;strong&gt;Weakness:&lt;/strong&gt; Expensive ($500/mo), can go off track on ambiguous tasks.&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Teams wanting to delegate entire tickets to an AI engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Bolt.new
&lt;/h2&gt;

&lt;p&gt;StackBlitz's browser-based AI coding platform. Describe what you want, and Bolt generates a complete full-stack app with live preview.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strengths:&lt;/strong&gt; Zero setup, instant preview, supports Next.js/React/Vue out of the box.&lt;br&gt;
&lt;strong&gt;Weakness:&lt;/strong&gt; Limited for complex backend logic or enterprise-scale apps.&lt;br&gt;
&lt;strong&gt;Best for:&lt;/strong&gt; Rapid prototyping and MVPs — idea to deployed app in minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How We Tested
&lt;/h2&gt;

&lt;p&gt;We evaluated each tool on five dimensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code quality&lt;/strong&gt; — does it produce clean, idiomatic code?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-file handling&lt;/strong&gt; — can it work across a real codebase?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging ability&lt;/strong&gt; — can it identify and fix bugs?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speed&lt;/strong&gt; — how fast from prompt to working code?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost&lt;/strong&gt; — what do you actually pay per month?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We used standardized coding tasks ranging from simple UI changes to building full CRUD APIs with authentication.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;There's no single "best" tool — it depends on how you work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terminal-first?&lt;/strong&gt; → Claude Code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IDE-native?&lt;/strong&gt; → GitHub Copilot or Cursor&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fully autonomous?&lt;/strong&gt; → Devin&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick prototype?&lt;/strong&gt; → Bolt.new&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The biggest shift in 2026 is that AI coding tools are no longer just assistants — they're becoming autonomous collaborators. The question isn't whether to use them, but which combination fits your workflow.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We built &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;Global Chat&lt;/a&gt; to test AI capabilities systematically. Check out our &lt;a href="https://global-chat.io/experiments/reddit01/ai-agents-ranked" rel="noopener noreferrer"&gt;AI agents ranked&lt;/a&gt; for a broader look at which AI systems can actually complete real tasks autonomously.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Building Honeypots for AI Bots: What Works and What Doesn't</title>
      <dc:creator>Global Chat</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:36:55 +0000</pubDate>
      <link>https://dev.to/globalchatads/building-honeypots-for-ai-bots-what-works-and-what-doesnt-3pon</link>
      <guid>https://dev.to/globalchatads/building-honeypots-for-ai-bots-what-works-and-what-doesnt-3pon</guid>
      <description>&lt;p&gt;We've been running experiments to understand what attracts AI crawlers to websites. Think of it as building a "honeypot" — a site designed to be maximally attractive to web crawlers like GPTBot, ClaudeBot, and Meta-ExternalAgent.&lt;/p&gt;

&lt;p&gt;Here's what we learned after weeks of testing different approaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 1: Schema.org Structured Data
&lt;/h2&gt;

&lt;p&gt;We added comprehensive JSON-LD markup to every page — WebSite, Organization, FAQPage, and Article schemas.&lt;/p&gt;
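
&lt;p&gt;For reference, the JSON payload of a minimal WebSite block looks like this (the values are placeholders; on the page it sits inside a script tag with type &lt;code&gt;application/ld+json&lt;/code&gt;):&lt;/p&gt;

```python
# The JSON payload of a minimal WebSite JSON-LD block. Values are
# placeholders; in the page this JSON is embedded in a script tag
# with type "application/ld+json".
import json

website_schema = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "name": "Global Chat",
    "url": "https://global-chat.io",
    "description": "Experiments in AI bot detection and crawling.",
}

print(json.dumps(website_schema, indent=2))
```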

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; Googlebot crawl frequency increased from every 3 days to daily within a week. GPTBot showed no measurable change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; Googlebot prioritizes structured data signals. AI training crawlers seem to care more about content volume and quality than markup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 2: llms.txt
&lt;/h2&gt;

&lt;p&gt;The llms.txt standard tells AI models what your site is about and what content is available. We added a comprehensive llms.txt file describing our site structure and content.&lt;/p&gt;
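
&lt;p&gt;The proposed format is plain markdown: an H1 with the site name, a blockquote summary, then sections of annotated links. Ours looked roughly like this (paths abbreviated and illustrative):&lt;/p&gt;

```text
# Global Chat

> Experiments in AI bot detection, crawler economics, and agent capability testing.

## Experiments

- [How We Detect AI Bots](https://global-chat.io/experiments/hn01/how-we-detect-ai-bots): detection architecture
- [AI Agents Ranked](https://global-chat.io/experiments/reddit01/ai-agents-ranked): capability test results
```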

&lt;p&gt;&lt;strong&gt;Result:&lt;/strong&gt; Too early to measure definitive impact. The standard is relatively new, and it's unclear how many crawlers actively check for it yet. We're continuing to monitor.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 3: Content Volume vs Quality
&lt;/h2&gt;

&lt;p&gt;We tested two approaches side by side:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;50 thin glossary pages&lt;/strong&gt; (~200 words each)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4 deep comparison articles&lt;/strong&gt; (~1,500 words each)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The deep articles attracted &lt;strong&gt;3x more repeat bot visits&lt;/strong&gt;. Bots that found the comparison articles crawled deeper into the site (3-4 pages per session vs 1-2 for glossary pages).&lt;/p&gt;

&lt;p&gt;Quality wins over quantity for AI crawlers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiment 4: External Signals
&lt;/h2&gt;

&lt;p&gt;This was the most surprising finding. The strongest trigger for AI crawler visits wasn't anything on-site — it was &lt;strong&gt;external backlinks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Posting links on platforms like Dev.to and social media drove bot visits within hours. Not just from users clicking, but from bots that monitor these platforms for new URLs to crawl.&lt;/p&gt;

&lt;p&gt;External signals appear to be the single strongest trigger for AI crawler visits.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Doesn't Work
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Hidden links (honeypot traps):&lt;/strong&gt; Get indexed but don't attract more bots&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keyword stuffing:&lt;/strong&gt; Ignored by AI crawlers — they're not search engines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-generated thin content:&lt;/strong&gt; Gets crawled once and never revisited&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bots are smarter than we expected about content quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Recommendations
&lt;/h2&gt;

&lt;p&gt;If you want AI bots to crawl your site:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Create genuinely useful content&lt;/strong&gt; — quality over quantity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add Schema.org markup&lt;/strong&gt; — especially for Googlebot&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain an up-to-date sitemap&lt;/strong&gt; — make discovery easy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add llms.txt&lt;/strong&gt; — future-proofing for AI crawlers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get external links&lt;/strong&gt; from platforms bots monitor&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The bottom line: quality content plus external signals matter far more than on-site tricks.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;We built &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;Global Chat&lt;/a&gt; specifically to test AI bot capabilities and track crawler behavior. Check out our &lt;a href="https://global-chat.io/experiments/hn01/how-we-detect-ai-bots" rel="noopener noreferrer"&gt;full technical deep-dive on bot detection&lt;/a&gt; for more details on the infrastructure behind these experiments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>security</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>AI Agents Ranked: Which Ones Actually Work in 2026?</title>
      <dc:creator>Global Chat</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:44:27 +0000</pubDate>
      <link>https://dev.to/globalchatads/ai-agents-ranked-which-ones-actually-work-in-2026-379m</link>
      <guid>https://dev.to/globalchatads/ai-agents-ranked-which-ones-actually-work-in-2026-379m</guid>
      <description>&lt;p&gt;An AI agent goes beyond question-answering. It can use tools, browse the web, execute code, and complete multi-step workflows autonomously. In 2026, several companies claim to have "AI agents" — but which ones actually deliver?&lt;/p&gt;

&lt;p&gt;We built &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;Global Chat&lt;/a&gt; specifically to test AI bot capabilities. Our test suite measures four abilities: &lt;strong&gt;navigation&lt;/strong&gt; (can the bot follow links?), &lt;strong&gt;comprehension&lt;/strong&gt; (can it extract specific data?), &lt;strong&gt;form interaction&lt;/strong&gt; (can it fill out forms?), and &lt;strong&gt;crypto parsing&lt;/strong&gt; (can it read blockchain addresses?).&lt;/p&gt;

&lt;p&gt;Here's what we found after testing every major AI agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tier 1: Fully Capable Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Claude&lt;/strong&gt; (via Claude Code and computer use) and &lt;strong&gt;ChatGPT&lt;/strong&gt; (via browsing and code interpreter) can both navigate websites, extract information, and interact with web forms. They represent the current state of the art in agentic AI.&lt;/p&gt;

&lt;p&gt;Both can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Follow multi-step instructions across websites&lt;/li&gt;
&lt;li&gt;Fill out forms with contextual data&lt;/li&gt;
&lt;li&gt;Extract structured information from unstructured pages&lt;/li&gt;
&lt;li&gt;Recover from errors and retry failed actions&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tier 2: Partial Capability
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Perplexity&lt;/strong&gt; can browse and extract, but can't interact with forms. &lt;strong&gt;Google Gemini&lt;/strong&gt; has web grounding but limited autonomous action. These tools are excellent for research but not true autonomous agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tier 3: Crawlers Only
&lt;/h2&gt;

&lt;p&gt;GPTBot, ClaudeBot, Googlebot, and other web crawlers visit pages and index content, but they don't interact. They're essential for training data and search, but they aren't agents in the autonomous sense.&lt;/p&gt;

&lt;p&gt;From our data tracking &lt;strong&gt;10 unique bots&lt;/strong&gt; across &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;global-chat.io&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All bots pass our navigation test (link following)&lt;/li&gt;
&lt;li&gt;About half pass the comprehension test (data extraction)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;None&lt;/strong&gt; have passed our form interaction or crypto parsing tests&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Gap Between Hype and Reality
&lt;/h2&gt;

&lt;p&gt;Most "AI agents" in 2026 are still glorified chatbots with API access. True autonomous capability — planning, error recovery, multi-step execution — remains limited to a handful of systems.&lt;/p&gt;

&lt;p&gt;The bottleneck isn't intelligence but &lt;strong&gt;reliability&lt;/strong&gt;: agents need to work correctly 99% of the time to be useful, and most are at 70-80%.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;We've published our &lt;a href="https://global-chat.io/experiments/reddit01/ai-agents-ranked" rel="noopener noreferrer"&gt;full test results and methodology&lt;/a&gt;. The bot capability test suite is running live — every AI crawler that visits gets tested automatically.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Part of our ongoing research into AI bot behavior. See also: &lt;a href="https://dev.to/globalchatads/how-we-detect-ai-bots-on-our-website-a-technical-deep-dive-2eco"&gt;How We Detect AI Bots&lt;/a&gt; and &lt;a href="https://dev.to/globalchatads/the-economics-of-ai-web-crawling-in-2026-what-it-really-costs-48cn"&gt;The Economics of AI Web Crawling&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>The Economics of AI Web Crawling in 2026: What It Really Costs</title>
      <dc:creator>Global Chat</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:33:55 +0000</pubDate>
      <link>https://dev.to/globalchatads/the-economics-of-ai-web-crawling-in-2026-what-it-really-costs-48cn</link>
      <guid>https://dev.to/globalchatads/the-economics-of-ai-web-crawling-in-2026-what-it-really-costs-48cn</guid>
      <description>&lt;p&gt;In 2026, the major AI companies collectively crawl billions of pages per day. OpenAI's GPTBot, Anthropic's ClaudeBot, Google's crawlers, and Meta's AI training scrapers are among the largest consumers of web bandwidth after traditional search engines.&lt;/p&gt;

&lt;p&gt;But this crawling isn't free — it costs the AI companies real money, and it costs website operators in bandwidth and compute. We've been tracking the economics on our own site, &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;global-chat.io&lt;/a&gt;, and the numbers are revealing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost to the Crawler
&lt;/h2&gt;

&lt;p&gt;Running a web crawler at scale requires serious infrastructure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt; for HTTP requests and HTML parsing: $0.01–0.05 per 1,000 pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bandwidth&lt;/strong&gt; for downloading content: $0.05–0.12 per GB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt; for indexing and processing: $0.02 per GB/month&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal/compliance&lt;/strong&gt; teams to handle robots.txt and copyright&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At billion-page scale, this adds up to &lt;strong&gt;millions per month&lt;/strong&gt; for each major AI company.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost to Website Operators
&lt;/h2&gt;

&lt;p&gt;Every bot visit consumes server resources. For sites on serverless platforms like Vercel or Cloudflare, each bot request costs $0.000001–0.00001 in compute. Sounds tiny, but a popular site getting 100K bot visits/day pays &lt;strong&gt;$3–30/month&lt;/strong&gt; just serving bots. Larger sites report 40–60% of their traffic being bots.&lt;/p&gt;
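
&lt;p&gt;The back-of-envelope math, straight from those per-request figures:&lt;/p&gt;

```python
# Back-of-envelope serving cost from the per-request figures above.
visits_per_day = 100_000
low, high = 0.000001, 0.00001   # dollars per bot request (serverless compute)

monthly_low = visits_per_day * low * 30
monthly_high = visits_per_day * high * 30
print(f"${monthly_low:.0f}-{monthly_high:.0f}/month")  # prints $3-30/month
```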

&lt;h2&gt;
  
  
  The robots.txt Economy
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;robots.txt&lt;/code&gt; has become the de facto negotiation tool between sites and AI crawlers. Some publishers block all AI bots. Others allow specific crawlers in exchange for partnerships — like news organizations granting access to Google in exchange for traffic.&lt;/p&gt;
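
&lt;p&gt;Python's stdlib &lt;code&gt;robotparser&lt;/code&gt; shows how such a negotiation reads in practice. The rules below, blocking one AI crawler while leaving everyone else open, are illustrative, not a recommendation:&lt;/p&gt;

```python
# How a robots.txt that blocks one AI crawler but allows everyone else
# is interpreted. The rules are illustrative, not a recommendation.
from urllib import robotparser

rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".strip().splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GPTBot", "https://example.com/article"))     # blocked
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # allowed
```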

&lt;p&gt;The legal status is still evolving. Several lawsuits are testing whether ignoring robots.txt constitutes trespass.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Actually Benefits?
&lt;/h2&gt;

&lt;p&gt;The economics are asymmetric. AI companies extract billions in training data value while website operators bear the bandwidth costs.&lt;/p&gt;

&lt;p&gt;Some sites are fighting back: serving different content to bots, implementing CAPTCHAs, or demanding licensing fees. Others see bot traffic as free promotion — if ChatGPT or Claude references your site, that drives real human traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Data
&lt;/h2&gt;

&lt;p&gt;We've been tracking this on &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;global-chat.io&lt;/a&gt;. Here's what we found:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;10 unique AI bots&lt;/strong&gt; visit our site regularly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;264 total bot visits&lt;/strong&gt; tracked so far&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;34 visits today&lt;/strong&gt; across a small site with ~75 pages&lt;/li&gt;
&lt;li&gt;Bandwidth cost: negligible (~$0.01/month)&lt;/li&gt;
&lt;li&gt;The real question: can we turn bot visits into referral traffic?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We're running &lt;a href="https://global-chat.io/experiments/hn01/web-crawler-economics" rel="noopener noreferrer"&gt;experiments to find out&lt;/a&gt; — testing what content attracts which bots, and whether getting crawled translates to getting cited.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is part of our ongoing research into AI bot behavior. See our first post: &lt;a href="https://dev.to/globalchatads/how-we-detect-ai-bots-on-our-website-a-technical-deep-dive-2eco"&gt;How We Detect AI Bots on Our Website&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Full data and methodology at &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;global-chat.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>security</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How We Detect AI Bots on Our Website: A Technical Deep-Dive</title>
      <dc:creator>Global Chat</dc:creator>
      <pubDate>Thu, 12 Mar 2026 08:56:26 +0000</pubDate>
      <link>https://dev.to/globalchatads/how-we-detect-ai-bots-on-our-website-a-technical-deep-dive-2eco</link>
      <guid>https://dev.to/globalchatads/how-we-detect-ai-bots-on-our-website-a-technical-deep-dive-2eco</guid>
      <description>&lt;p&gt;AI bots are crawling the web at unprecedented scale. GPTBot, ClaudeBot, Googlebot, and dozens of others visit millions of sites daily. Most site owners have no idea which bots visit, how often, or what they do.&lt;/p&gt;

&lt;p&gt;We built a detection system to find out. Here's how it works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Layer 1: User-Agent Detection
&lt;/h2&gt;

&lt;p&gt;The simplest approach: match user-agent strings against known bot signatures. We maintain a database of 30+ AI bot user-agents including GPTBot, ClaudeBot, CCBot, Bytespider, PetalBot, and others. This catches roughly 80% of known bots.&lt;/p&gt;

&lt;p&gt;The signatures are checked in Next.js middleware on every request, adding less than 1ms latency. Simple but effective.&lt;/p&gt;
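
&lt;p&gt;Layer 1 in miniature: substring-match the User-Agent header against known signatures. Our production list has 30+ entries; the subset below is just for illustration:&lt;/p&gt;

```python
# Layer 1 in miniature: substring-match the User-Agent header against
# known AI bot signatures. A small illustrative subset of the full list.
AI_BOT_SIGNATURES = (
    "GPTBot", "ClaudeBot", "CCBot", "Bytespider",
    "PetalBot", "Google-Extended", "Meta-ExternalAgent",
)

def detect_bot(user_agent):
    """Return the matched signature, or None for (apparently) human traffic."""
    for sig in AI_BOT_SIGNATURES:
        if sig.lower() in user_agent.lower():
            return sig
    return None

print(detect_bot("Mozilla/5.0 AppleWebKit/537.36; compatible; GPTBot/1.2"))
```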

&lt;h2&gt;
  
  
  Layer 2: Behavioral Fingerprinting
&lt;/h2&gt;

&lt;p&gt;Some bots disguise their user-agent. We detect these through behavior:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Request timing&lt;/strong&gt; — bots are more regular than humans&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Header patterns&lt;/strong&gt; — bots often omit Accept-Language&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TLS fingerprints&lt;/strong&gt; — JA3/JA4 fingerprinting reveals bot clients&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Navigation patterns&lt;/strong&gt; — bots don't scroll, hover, or generate mouse events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We track page transitions to build a crawl graph per visitor.&lt;/p&gt;
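
&lt;p&gt;The timing signal is the easiest to sketch: measure how much the gaps between requests vary relative to their mean. The scoring here is our own, and the cutoff you would pick is a judgment call:&lt;/p&gt;

```python
# One behavioral signal from the list above: how regular are the gaps
# between requests? Bots tend toward a near-constant interval; humans
# are bursty. The metric is ours; any cutoff is an illustrative guess.
from statistics import mean, pstdev

def interval_regularity(timestamps):
    """Coefficient of variation of inter-request gaps (near 0 = machine-like)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) / mean(gaps)

bot_like   = [0.0, 2.0, 4.0, 6.0, 8.0]    # metronome-regular crawl
human_like = [0.0, 1.2, 7.5, 8.1, 30.0]   # bursty reading pattern

print(interval_regularity(bot_like))    # 0.0
print(interval_regularity(human_like))  # well above zero
```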

&lt;h2&gt;
  
  
  Layer 3: Capability Testing
&lt;/h2&gt;

&lt;p&gt;The most interesting layer. We serve progressively harder challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can the bot follow JavaScript-rendered links?&lt;/li&gt;
&lt;li&gt;Can it fill out a form?&lt;/li&gt;
&lt;li&gt;Can it parse structured data?&lt;/li&gt;
&lt;li&gt;Can it read a crypto wallet address?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each test reveals different capability tiers — from basic crawlers to fully autonomous AI agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;The system runs as Next.js middleware on Vercel Edge. Bot detection happens at the edge with zero cold start. Detections are logged to Supabase in the background using &lt;code&gt;event.waitUntil()&lt;/code&gt; so they don't block the response. A daily cron aggregates per-bot statistics and funnel metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Found
&lt;/h2&gt;

&lt;p&gt;After running this on &lt;a href="https://global-chat.io" rel="noopener noreferrer"&gt;global-chat.io&lt;/a&gt; for several weeks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;10 unique AI bots&lt;/strong&gt; visit regularly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Googlebot&lt;/strong&gt; is the most frequent (2-3x daily)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPTBot and ClaudeBot&lt;/strong&gt; visit within hours of content changes&lt;/li&gt;
&lt;li&gt;Most bots only crawl &lt;strong&gt;1-2 pages per visit&lt;/strong&gt; — crawl depth is surprisingly shallow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema.org structured data&lt;/strong&gt; correlates with more frequent re-crawls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;None of the crawlers&lt;/strong&gt; have passed our form interaction test yet&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Takeaway
&lt;/h2&gt;

&lt;p&gt;If you want AI bots to find your content, focus on structured data (Schema.org JSON-LD), comprehensive sitemaps, and the IndexNow protocol. These signals matter more than raw content volume.&lt;/p&gt;
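
&lt;p&gt;IndexNow is a push protocol: instead of waiting to be crawled, you ping an endpoint when a page changes. A submission URL looks like this (the key value is a placeholder; a real key is a file you host on your own site):&lt;/p&gt;

```python
# Building an IndexNow ping URL to announce a new or updated page.
# The key value is a placeholder; a real key must also be hosted as a
# text file on your own domain for verification.
from urllib.parse import urlencode

def indexnow_ping_url(page_url, key):
    query = urlencode({"url": page_url, "key": key})
    return "https://api.indexnow.org/indexnow?" + query

print(indexnow_ping_url("https://global-chat.io/experiments/new-post",
                        "0123456789abcdef"))
```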

&lt;p&gt;Full writeup with more details: &lt;a href="https://global-chat.io/experiments/hn01/how-we-detect-ai-bots" rel="noopener noreferrer"&gt;How We Detect AI Bots&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>security</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
