What if AI crawlers could skip downloading full web pages and receive structured JSON data instead? Serving AI bots this way could drastically reduce inefficiency, improve response times, and save resources on both sides. Imagine delivering just the essential data—product names, prices, descriptions—without unnecessary UI, animations, or assets.
This idea was inspired by Vercel’s blog post on AI crawlers and their growing influence on SEO. Here are some highlights that led to this thought:
AI crawlers are gaining ground: GPTBot, Claude, AppleBot, and PerplexityBot made 1.3 billion fetches last month, accounting for 28% of Googlebot's activity.

JavaScript rendering limitations: only Google's Gemini and AppleBot fully render JavaScript. Others, like GPTBot and Claude, fetch JavaScript files but don't execute them, leaving dynamic content partially ignored.

Different content priorities: GPTBot emphasizes HTML, while Claude focuses on images. This suggests varying strategies, or early-stage optimization, by AI crawlers.

Significant inefficiencies: over 30% of AI crawler requests hit invalid URLs, highlighting the need for better URL strategies and less wasted traffic.

Why developers should care: server-side rendering is more vital than ever to ensure AI crawlers, alongside regular users, can access meaningful content.
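Since these bots don't execute JavaScript, the first practical step is simply recognizing them. A minimal sketch of server-side detection might look like this—note that the token list here is illustrative, not exhaustive; each vendor publishes its exact User-Agent string:

```python
# Illustrative list of User-Agent substrings for AI crawlers mentioned above.
# Real deployments should use the vendors' documented UA strings.
AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "Applebot", "PerplexityBot")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if the User-Agent header matches a known AI crawler token."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)
```

For example, `is_ai_crawler("Mozilla/5.0 AppleWebKit/537.36; compatible; GPTBot/1.2")` would return `True`, while an ordinary browser UA would not match.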
These insights inspired the concept of serving JSON-only responses to AI crawlers, streamlining interactions between websites and bots.
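In practice, this could be simple content negotiation keyed on the User-Agent header: bots get a compact JSON payload, everyone else gets the rendered page. Here is a hedged, framework-free sketch; the crawler token list and product record are hypothetical placeholders:

```python
import json

# Illustrative UA substrings; real bots publish their exact User-Agent strings.
AI_TOKENS = ("GPTBot", "ClaudeBot", "PerplexityBot")

# Hypothetical record mirroring the "names, prices, descriptions" idea.
PRODUCT = {"name": "Widget", "price": 19.99, "description": "A sample widget."}

def render_response(user_agent: str) -> tuple[str, str]:
    """Return (content_type, body): JSON for AI crawlers, HTML for browsers."""
    ua = user_agent.lower()
    if any(token.lower() in ua for token in AI_TOKENS):
        # Essential data only—no UI, animations, or assets.
        return ("application/json", json.dumps(PRODUCT))
    return ("text/html", f"<h1>{PRODUCT['name']}</h1><p>{PRODUCT['description']}</p>")
```

The same branch could live in middleware (e.g. at the edge) so the JSON path never touches the rendering pipeline at all.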
For the full analysis and practical tips, read the original blog post here: