<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Niksa Barlovic</title>
    <description>The latest articles on DEV Community by Niksa Barlovic (@catcam).</description>
    <link>https://dev.to/catcam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3820613%2F24b240ce-fc42-403d-9f8d-be2e7440748f.jpg</url>
      <title>DEV Community: Niksa Barlovic</title>
      <link>https://dev.to/catcam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/catcam"/>
    <language>en</language>
    <item>
      <title>HADS — A Simple Convention for Writing Docs That AI Can Actually Read</title>
      <dc:creator>Niksa Barlovic</dc:creator>
      <pubDate>Thu, 12 Mar 2026 15:27:02 +0000</pubDate>
      <link>https://dev.to/catcam/hads-a-simple-convention-for-writing-docs-that-ai-can-actually-read-3fgf</link>
      <guid>https://dev.to/catcam/hads-a-simple-convention-for-writing-docs-that-ai-can-actually-read-3fgf</guid>
      <description>&lt;p&gt;github.com/catcam/hads&lt;/p&gt;

&lt;p&gt;AI models read your documentation before your users do. But documentation is written for humans. This mismatch has a real cost: token waste, hallucinations, and local models that just give up on long docs.&lt;/p&gt;

&lt;p&gt;I spent a few weeks debugging this while building a tool that generates Ableton Live project files from JSON. Every time I fed technical documentation to a model, the same thing happened — half the context window consumed by narrative prose, the model extracting facts incorrectly, or missing critical bug fixes entirely.&lt;/p&gt;

&lt;p&gt;The fix wasn't a better prompt. It was better-structured docs.&lt;/p&gt;

&lt;h2&gt;Introducing HADS&lt;/h2&gt;

&lt;p&gt;HADS (Human-AI Document Standard) is a tagging convention for Markdown. Not a new format. Not a new tool. Just four tags that tell both humans and AI models what kind of content they're reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;[SPEC]&lt;/strong&gt;: authoritative fact. Terse. AI always reads it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[NOTE]&lt;/strong&gt;: human context, history, examples. AI can skip it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[BUG]&lt;/strong&gt;: verified failure plus its fix. Always read.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;[?]&lt;/strong&gt;: unverified or inferred. Lower confidence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every HADS document also starts with an AI manifest — a short paragraph that explicitly instructs the model what to read and what to skip. The manifest is a Markdown section headed &lt;code&gt;## AI READING INSTRUCTION&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;Read &lt;code&gt;[SPEC]&lt;/code&gt; and &lt;code&gt;[BUG]&lt;/code&gt; blocks for authoritative facts.&lt;br&gt;
Read &lt;code&gt;[NOTE]&lt;/code&gt; only if additional context is needed.&lt;br&gt;
&lt;code&gt;[?]&lt;/code&gt; blocks are unverified — treat with lower confidence.&lt;/p&gt;

&lt;p&gt;That's it. The document teaches the model how to read it.&lt;/p&gt;

&lt;h2&gt;Why This Works for Small Models&lt;/h2&gt;

&lt;p&gt;A 7B local model with a 4k context window can't reason about document structure reliably. But it can follow explicit instructions at the top of a document. The manifest removes the need for structural reasoning entirely. The model doesn't have to decide what's important — the document tells it.&lt;/p&gt;

&lt;h2&gt;What a HADS Document Looks Like&lt;/h2&gt;

&lt;p&gt;Here's a real example — authentication documentation:&lt;/p&gt;

&lt;h3&gt;Authentication&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;[SPEC]&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Method: Bearer token&lt;/li&gt;
&lt;li&gt;Header: &lt;code&gt;Authorization: Bearer &amp;lt;token&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Token lifetime: 3600 seconds&lt;/li&gt;
&lt;li&gt;Refresh: &lt;code&gt;POST /auth/refresh&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;[NOTE]&lt;/strong&gt;&lt;br&gt;
Tokens were originally session-based (pre-v2.0). If you see legacy docs&lt;br&gt;
mentioning cookie auth, ignore them — the switch happened in 2022.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;[BUG] Token silently rejected after password change&lt;/strong&gt;&lt;br&gt;
Symptom: 401 with body &lt;code&gt;{"error": "invalid_token"}&lt;/code&gt; — identical to an expired token.&lt;br&gt;
Cause: all tokens invalidated on password change, no distinct error code.&lt;br&gt;
Fix: re-authenticate on any 401. Do not assume 401 always means expiry.&lt;/p&gt;

&lt;p&gt;An AI reading this extracts the method, header, expiry, and refresh endpoint from one [SPEC] block. A human reads the [NOTE] and understands the history. Both read the [BUG]. No duplication. One document. Two readers.&lt;/p&gt;

&lt;h2&gt;The BUG Block Is the Most Important Innovation&lt;/h2&gt;

&lt;p&gt;The most valuable content in technical documentation is its known failure modes: someone hit a wall, debugged it, and wrote it down. In normal docs this gets buried in a changelog or a Stack Overflow answer from 2019. HADS makes [BUG] blocks first-class. Every AI reading a HADS document reads all [BUG] blocks before generating any code. This alone eliminates a class of hallucinations.&lt;/p&gt;

&lt;h2&gt;What's in the Repo&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;SPEC.md&lt;/code&gt; — full formal specification&lt;/li&gt;
&lt;li&gt;&lt;code&gt;examples/&lt;/code&gt; — three complete example documents (REST API, binary file format, config system)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;validator/validate.py&lt;/code&gt; — Python validator with CI/CD exit codes&lt;/li&gt;
&lt;li&gt;&lt;code&gt;claude-skill/SKILL.md&lt;/code&gt; — skill file for Claude to generate HADS docs automatically&lt;/li&gt;
&lt;/ul&gt;
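&lt;p&gt;For a feel of what validation can catch, here is a hypothetical, stripped-down checker in the same spirit as &lt;code&gt;validator/validate.py&lt;/code&gt; (it is not the repo's code). It verifies that a manifest exists and that every tag is one of the four, and returns a CI-friendly exit status.&lt;/p&gt;

```python
# Hypothetical minimal HADS checker (illustrative only; the real rules
# live in SPEC.md and validator/validate.py).
import sys

VALID_TAGS = ("[SPEC]", "[NOTE]", "[BUG]", "[?]")

def validate(text):
    """Return a list of error messages for one HADS document."""
    errors = []
    if "AI READING INSTRUCTION" not in text:
        errors.append("missing AI manifest section")
    for n, line in enumerate(text.splitlines(), start=1):
        head = line.lstrip().lstrip("*")  # tags may be bolded: **[SPEC]**
        if head.startswith("[") and not head.startswith(VALID_TAGS):
            errors.append(f"line {n}: unrecognized tag: {line.strip()}")
    return errors

def main(paths):
    """Validate each file; return the process exit code for CI."""
    problems = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            problems.extend(validate(f.read()))
    for p in problems:
        print(p, file=sys.stderr)
    return 1 if problems else 0  # non-zero exit fails the pipeline step

# CLI wiring would be: sys.exit(main(sys.argv[1:]))
```

&lt;p&gt;The tag check is deliberately naive: a line that starts with a Markdown link would be flagged too. The point is only that exit code 0 versus 1 is all a CI job needs.&lt;/p&gt;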

&lt;p&gt;All MIT. Zero dependencies to read a HADS document.&lt;/p&gt;

&lt;h2&gt;GitHub&lt;/h2&gt;

&lt;p&gt;github.com/catcam/hads&lt;/p&gt;

&lt;p&gt;Feedback welcome — especially from people running local models. Does the manifest approach actually help with your context management?&lt;/p&gt;
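&lt;p&gt;If you want to measure that, a manifest-style pre-filter is only a few lines. The sketch below is hypothetical (not part of the repo): it keeps &lt;code&gt;[SPEC]&lt;/code&gt;, &lt;code&gt;[BUG]&lt;/code&gt;, and &lt;code&gt;[?]&lt;/code&gt; blocks, drops &lt;code&gt;[NOTE]&lt;/code&gt; blocks, and assumes a block runs from its tag line to the next blank line.&lt;/p&gt;

```python
# Hypothetical HADS pre-filter: trim a document to the lines the manifest
# tells an AI to read. Assumes a block ends at the next blank line.

KEEP_TAGS = ("[SPEC]", "[BUG]", "[?]")  # [?] kept, but low-confidence
DROP_TAGS = ("[NOTE]",)

def filter_for_ai(text):
    """Drop [NOTE] blocks so only AI-relevant lines reach the prompt."""
    kept = []
    dropping = False
    for line in text.splitlines():
        head = line.lstrip().lstrip("*")  # tags may be bolded: **[NOTE]**
        if head.startswith(DROP_TAGS):
            dropping = True               # start skipping this block
        elif head.startswith(KEEP_TAGS) or not line.strip():
            dropping = False              # new tag or blank line ends it
        if not dropping:
            kept.append(line)
    return "\n".join(kept)
```

&lt;p&gt;On the authentication example above, this keeps the [SPEC] facts and the [BUG] block and removes the history note, which is the split the manifest asks for.&lt;/p&gt;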

</description>
      <category>ai</category>
      <category>documentation</category>
      <category>llm</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
