The users of the web are increasingly alien. Not humans. Not traditional crawlers. AI agents like ChatGPT's browsing mode, Perplexity, Claude Code, and Cowork are actively navigating websites, reading docs, and trying to make sense of your content. And most sites are completely unprepared for them.
Your site might look great to a human. It might score 100 on Lighthouse. But when an AI agent shows up, it's probably drowning in nav bars, cookie banners, and markup soup.
We've seen this movie before
Think about accessibility. Before a11y tooling existed, making a site accessible felt vague and overwhelming. Then tools like axe and Lighthouse audits came along and made it concrete: here are the issues, here's how to fix them, here's your score. Suddenly accessibility was actionable.
Same story with internationalization. i18n felt like a massive lift until tooling made it systematic: extract strings, manage translations, lint for hardcoded text.
And Lighthouse itself turned "make your site fast" from a handwavy goal into a scorecard with specific, fixable items.
So here's the question: what's the equivalent for AI-agent readiness?
That's what I built AgentLint to be.
What it does
One command. That's it.
npx @cjavdev/agent-lint https://your-site.com
AgentLint crawls your site, runs rules across several categories, and gives you a score from 0 to 100 with a letter grade. It tells you exactly what's working, what's not, and which specific violations to fix.
The five categories:
- Transport: Can agents get your content in formats they prefer? Does your site respond to Accept: text/markdown? Is your robots.txt blocking AI crawlers?
- Structure: Is your HTML well-organized? Do headings follow a logical hierarchy? Do sections have anchor IDs agents can reference?
- Tokens: How efficient is your content for LLM context windows? Are pages bloated with repeated nav and footer content?
- Discoverability: Can agents find what they need? Do you have an llms.txt? A sitemap? An OpenAPI spec?
- Agent: Do you have agent-specific affordances like an MCP manifest or agent usage guides?
You get a clean report in the terminal, or pass --json for structured output you can pipe into CI.
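The JSON output makes it easy to gate a build on a minimum score. The exact shape of the report isn't shown here, so the top-level `score` field in this sketch is an assumption about the output format, not documented behavior:

```python
import json

def check_score(report_json: str, threshold: int = 80) -> bool:
    """Return True if an agent-lint JSON report meets a minimum score.

    Assumes the report has a top-level "score" field (hypothetical);
    adjust to whatever the real --json output contains.
    """
    report = json.loads(report_json)
    return report.get("score", 0) >= threshold

# In CI, capture the report first:
#   npx @cjavdev/agent-lint https://your-site.com --json > report.json
# then fail the build when check_score(open("report.json").read()) is False.
```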
Why I built it
I kept thinking about what makes a site "agent-friendly" and realized there was no standard way to audit it. Everyone was kind of guessing. Some folks were adding llms.txt files. Others were serving markdown. But nobody had a checklist, let alone a tool that could run it automatically.
I wanted something developers could just run — like Lighthouse, but for this new class of visitor. Point it at a URL, get a score, fix the red items. No config required, no setup, just answers.
So I built the tool I wished existed.
The rules that surprise people
Some of the checks are intuitive. Of course you should have a sitemap. But a few tend to catch people off guard:
Does your site serve markdown when asked? If an agent sends Accept: text/markdown, does your server respond with markdown instead of HTML? Most don't. But this is one of the easiest wins for agent-friendliness — agents much prefer consuming markdown over parsing HTML.
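The server-side logic for this is small. A minimal, framework-agnostic sketch of the content negotiation (the helper name and return convention are illustrative, not AgentLint's implementation; real negotiation would also honor q-values):

```python
def pick_representation(accept_header: str) -> str:
    """Choose a response media type from an HTTP Accept header.

    Prefers text/markdown when the client asks for it, otherwise
    falls back to HTML. This sketch only checks for the media
    type's presence and ignores q-value weighting.
    """
    media_types = [part.split(";")[0].strip().lower()
                   for part in accept_header.split(",")]
    if "text/markdown" in media_types:
        return "text/markdown"
    return "text/html"

# An agent sending "Accept: text/markdown" gets markdown back;
# a browser's default Accept header still gets HTML.
```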
Do you have an llms.txt? This is a simple text file at your site root that tells AI models what your site is about, what content is available, and how to navigate it. Think of it like robots.txt but for AI rather than about AI.
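For reference, a small llms.txt follows the convention of an H1 title, a blockquote summary, and sections of annotated links. The site name and URLs below are made up for illustration:

```markdown
# Example Site

> A short summary of what this site is, who it's for, and what it covers.

## Docs

- [Getting started](https://example.com/docs/start.md): install and first run
- [API reference](https://example.com/docs/api.md): endpoints and parameters
```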
Are your pages lean enough for context windows? LLMs have finite context. If your page dumps 15,000 tokens of nav, sidebar, and footer before getting to the actual content, agents are wasting their context budget on noise. AgentLint flags pages over a configurable token threshold.
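You can get a rough sense of a page's token cost without a real tokenizer: a common back-of-the-envelope rule is about four characters per token for English text. The threshold value in this sketch is illustrative, not AgentLint's default:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English prose."""
    return len(text) // 4

def flag_bloated(pages: dict, threshold: int = 15_000) -> list:
    """Return URLs whose estimated token count exceeds the threshold.

    `pages` maps URL -> page text; the threshold is a stand-in for
    whatever limit you configure.
    """
    return [url for url, body in pages.items()
            if estimate_tokens(body) > threshold]

# A 100,000-character page estimates to ~25,000 tokens and gets flagged.
```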
How much of your content is boilerplate? If 40% of every page is the same nav and footer HTML, that's a ton of duplicate tokens agents have to process across pages. AgentLint measures this duplication rate.
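One simple way to measure boilerplate is line-level overlap between pages: the fraction of one page's lines that also appear on another. This is cruder than whatever AgentLint actually computes, but it conveys the idea:

```python
def duplication_rate(page_a: str, page_b: str) -> float:
    """Fraction of page_a's non-empty lines that also appear in page_b.

    A high rate suggests shared nav/footer markup that agents must
    re-read on every page. Illustrative metric, not AgentLint's.
    """
    lines_a = {line.strip() for line in page_a.splitlines() if line.strip()}
    lines_b = {line.strip() for line in page_b.splitlines() if line.strip()}
    if not lines_a:
        return 0.0
    return len(lines_a & lines_b) / len(lines_a)
```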
Do you have an MCP manifest? The Model Context Protocol is emerging as a standard for agents to discover and interact with services. AgentLint checks if you've published one at /.well-known/mcp.json.
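Because MCP is still settling, treat the following as an illustrative sketch of what a /.well-known/mcp.json might contain rather than a canonical schema; every field here is an assumption:

```json
{
  "name": "example-site",
  "description": "Docs and API for Example Site",
  "endpoint": "https://example.com/mcp"
}
```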
Try it
Seriously, just run it:
npx @cjavdev/agent-lint https://your-site.com
Or use it as a skill:
npx skills add cjavdev/agent-lint
Your site probably already does some of this stuff well. The score will tell you where you stand and what's worth improving. Most of the fixes are straightforward — adding a file here, tweaking a header there.
The project is open source at github.com/cjavdev/agent-lint. If you have ideas for new rules, find a bug, or just want to share your score, I'd love to hear from you.
The web is getting new visitors. Let's make sure they feel welcome.