Recently, Cloudflare released a tool called “Is Your Site Agent-Ready?” to evaluate how well a website supports AI agents.
I ran it against my site (https://www.kraftdorian.com).
The result: low coverage (level 1–2).
More importantly, it exposed something I wouldn’t normally think about:
how a website is consumed by agents, not just users.
That was enough to dig deeper.
## What actually mattered
The tool highlighted two things:
- **Markdown for Agents** — serving content as `text/markdown` via content negotiation
- **Content Signals** — declaring how content can be used by AI systems
Both are simple on paper.
Neither is trivial if you want to do it properly.
## The goal
I wanted to:
- keep my current setup (NGINX + NJS SSR)
- avoid introducing a new backend layer or parallel content pipeline
- keep HTML as canonical output
- expose markdown as an alternative representation
- avoid duplicating content
## The core idea
Instead of adding a separate markdown endpoint, I used HTTP content negotiation.
Same URL:
- default → HTML
- `Accept: text/markdown` → markdown
The same URL, no separate endpoints.
One content tree. Two outputs.
## Implementation
The change was scoped to homepage SSR.
At the core:
- parse the `Accept` header
- resolve the preferred media type (including `q` values)
- render either:
  - HTML (`text/html`)
  - markdown (`text/markdown`)
Both responses include `Vary: Accept`, so caches keep the two representations apart.
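Put together, the handler looks roughly like this in NJS. This is a minimal sketch of the flow only: `negotiate`, `renderHtml`, and `renderMarkdown` are illustrative stand-ins, not the actual implementation.

```javascript
// Trivial stub: pick markdown only when the Accept header explicitly
// allows it. The real resolver handles q-values (see negotiation rules).
function negotiate(accept) {
  return /text\/markdown|text\/\*/.test(accept || '')
    ? 'text/markdown'
    : 'text/html';
}

// Hypothetical renderers; both would consume the same content tree.
function renderHtml()     { return '<h1>Home</h1>'; }
function renderMarkdown() { return '# Home'; }

// NJS-style request handler: one URL, two representations.
function handler(r) {
  var type = negotiate(r.headersIn['Accept']);

  // Same URL for both outputs, so caches must key on Accept.
  r.headersOut['Vary'] = 'Accept';
  r.headersOut['Content-Type'] = type;

  r.return(200, type === 'text/markdown' ? renderMarkdown() : renderHtml());
}
```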
## Negotiation rules
I followed Cloudflare’s semantics:
- `Accept: text/markdown` → markdown
- `Accept: text/*` → markdown
- `Accept: */*` → HTML
- highest `q` wins
- tie → HTML
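These rules can be sketched as a small resolver. The parser below is illustrative, not the production code; it only distinguishes the media types relevant here.

```javascript
// Parse an Accept header into { type, q } entries.
function parseAccept(header) {
  return (header || '')
    .split(',')
    .map(function (part) {
      var pieces = part.trim().split(';');
      var q = 1.0;
      pieces.slice(1).forEach(function (p) {
        var m = /^\s*q=([0-9.]+)\s*$/.exec(p);
        if (m) q = parseFloat(m[1]);
      });
      return { type: pieces[0].trim().toLowerCase(), q: q };
    })
    .filter(function (e) { return e.type; });
}

// Resolve the preferred representation per the rules above.
function negotiate(header) {
  var best = { html: 0, markdown: 0 };
  parseAccept(header).forEach(function (e) {
    if (e.type === 'text/markdown' || e.type === 'text/*') {
      best.markdown = Math.max(best.markdown, e.q);
    } else if (e.type === 'text/html' || e.type === '*/*') {
      best.html = Math.max(best.html, e.q);
    }
  });
  // Highest q wins; ties (and no match at all) fall back to HTML.
  return best.markdown > best.html ? 'text/markdown' : 'text/html';
}
```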
HTML stays the default for browsers.
Markdown is opt-in.
## One source of truth
The key decision:
HTML and markdown are rendered from the same AST.
- HTML → standard renderer
- markdown → `markdown.renderToString(...)`
No duplication.
No parallel templates.
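As an illustration of the single-AST approach: one tree of nodes, two renderers that walk it. The node shape and both render functions here are made up for the example; the actual renderer isn't published.

```javascript
// A toy AST: the single source of truth for both representations.
var ast = [
  { type: 'heading', depth: 1, text: 'About' },
  { type: 'paragraph', text: 'I build things for the web.' }
];

// HTML renderer: walks the tree, emits tags.
function toHtml(nodes) {
  return nodes.map(function (n) {
    if (n.type === 'heading') {
      return '<h' + n.depth + '>' + n.text + '</h' + n.depth + '>';
    }
    return '<p>' + n.text + '</p>';
  }).join('\n');
}

// Markdown renderer: walks the same tree, emits markdown syntax.
function toMarkdown(nodes) {
  return nodes.map(function (n) {
    if (n.type === 'heading') {
      return '#'.repeat(n.depth) + ' ' + n.text;
    }
    return n.text;
  }).join('\n\n');
}
```

Because both renderers consume the same tree, a content change propagates to both outputs automatically.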
## Constraints (where it gets real)
Some things don’t translate 1:1.
The HTML version obfuscates the email address with JS. Markdown can't do that, so:

- no email exposed
- fallback: “Switch to HTML version”
Preserving intent > exposing more data.
### Images
Markdown is generated using a custom renderer built for this setup (separate from the HTML renderer, not yet published).
Instead of trying to mirror HTML 1:1, I treated markdown as a different representation with its own constraints.
For non-essential visuals (like the portrait), I used explicit fallbacks:
Portrait: Damian Kajzer.
This keeps the output readable and intentional, instead of forcing structural parity with HTML.
### Structure
Footer was blending into content in markdown.
Fix: an explicit `## Additional Information` heading.
Simple, but necessary.
## What I didn’t implement
I skipped `x-markdown-tokens` for now.
Tokenization is something I’d treat as a post-MVP concern, once there’s a clearer need and a good fit for how it should be exposed.
For this iteration, I focused on getting the core negotiation and representation right.
## Content Signals
The second piece was simpler.
Added `Content-Signal: ai-train=no, search=yes, ai-input=no`:

- in `robots.txt`
- and as an HTTP header (homepage)
Minimal change. No architectural impact.
Aligned with the expected signal surface.
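For reference, the header side might be wired up like this in NGINX (a sketch; the exact location block and directive placement are assumptions, adjust to your own config):

```nginx
# Homepage only: advertise content-usage signals as an HTTP header.
location = / {
    add_header Content-Signal "ai-train=no, search=yes, ai-input=no" always;
    # ... existing NJS SSR handling ...
}
```

The same `Content-Signal:` line goes into `robots.txt`, so crawlers that never fetch the homepage still see the declaration.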
## Validation
- tested negotiation with `curl` (explicit `Accept` variations)
- verified both representations
- deployed to production
- re-ran Cloudflare check
Result:
→ Level 4 (Agent-Integrated)
## What this changes
The interesting part isn’t markdown.
It’s this:
A website no longer has a single “correct” representation.
HTML works for users.
Markdown works better for agents.
And both can coexist — without rebuilding the system.
## Final notes
- no new backend
- no framework changes
- no library modifications
- full control at the edge (NGINX + NJS)
Next step:
extract markdown support into my `nginx-njs-ssr-starter` as a reusable building block.