If you've worked with AI coding agents — Claude, GPT, Copilot, whatever — you've seen the pattern:
- Agent encounters a library it needs to use.
- Agent reads 15-30 files to "understand" the library.
- Your context window fills up. Your token bill goes up.
- Agent still gets the API wrong.
I kept hitting this, so I built trail-docs: a CLI that indexes markdown documentation into a searchable, citation-backed knowledge base, designed primarily for AI agents.
## But wait — isn't this just grep?
No. And this is the important distinction.
grep answers "where is this string?" You get 14 matching lines across 6 files. No structure, no context, no sequence. The agent then opens each file, reads surrounding lines, and tries to synthesize an understanding.
trail-docs answers "what do the docs say about this topic?" You get the most relevant documentation sections with extracted code examples and exact file + line citations. The agent can act on the results immediately.
```shell
# grep: here are some lines that match
$ rg "configure SSL" ./docs
docs/security.md:12: To configure SSL, first generate a certificate...
docs/deployment.md:45: SSL is configured via the server options...
docs/cli-reference.md:89: --secure    Enable SSL (requires configure SSL step)
```
Three files, three fragments. The agent has to open each one and piece it together.
```shell
# trail-docs: here's what the docs say, with citations
$ trail-docs use "MyProject" "How do I configure SSL?" --json
```
→ Structured JSON with relevant sections, extracted commands, confidence scores, and citations to exact file + line ranges.
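To make that concrete, a response might look roughly like the following. This is an illustrative sketch only: the field names and values here are hypothetical, not trail-docs' actual output schema.

```json
{
  "query": "How do I configure SSL?",
  "results": [
    {
      "section": "Configuring SSL",
      "file": "docs/security.md",
      "lines": [10, 34],
      "confidence": 0.92,
      "commands": ["openssl req -new -x509 ..."],
      "excerpt": "To configure SSL, first generate a certificate..."
    }
  ]
}
```

The point is the shape: each result carries enough context (section, citation, extracted commands) that the agent can act without opening the source file.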
They're different tools. Agents need both, but until now they've only had good tooling for the first.
## How it works
No LLM in the retrieval. No vector search. No magic.
trail-docs parses markdown into sections, builds a keyword index, and matches queries using token frequency with intent-aware reranking. Results are deterministic and fast.
The honest trade-off: results are ranked by keyword relevance, not by logical sequence. trail-docs doesn't "understand" which step comes first — it surfaces the most relevant documentation sections and lets the agent's own LLM handle the reasoning.
Think of it as: trail-docs feeds the right docs to the agent efficiently, rather than feeding it everything.
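To make the mechanism concrete, here's a minimal sketch of this style of LLM-free retrieval: tokenize each section, count query-term frequency, and rank. This illustrates the general approach, not trail-docs' actual code; all names here are invented.

```javascript
// Sketch of keyword-frequency retrieval (illustrative, not trail-docs internals).
const tokenize = (text) => text.toLowerCase().match(/[a-z0-9]+/g) ?? [];

function rankSections(sections, query) {
  const queryTokens = tokenize(query);
  return sections
    .map((section) => {
      // Count token occurrences in the section body.
      const counts = new Map();
      const tokens = tokenize(section.body);
      for (const t of tokens) counts.set(t, (counts.get(t) ?? 0) + 1);
      // Score: total frequency of query terms, normalized by section length.
      const score =
        queryTokens.reduce((sum, q) => sum + (counts.get(q) ?? 0), 0) /
        (tokens.length || 1);
      return { heading: section.heading, score };
    })
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score);
}

const sections = [
  { heading: "Configuring SSL", body: "To configure SSL, generate a certificate and set ssl options." },
  { heading: "Logging", body: "Logs are written to stdout by default." },
];
const ranked = rankSections(sections, "How do I configure SSL?");
console.log(ranked[0].heading); // prints "Configuring SSL"
```

Because scoring is pure arithmetic over tokens, the same query against the same index always returns the same ranking, which is why results are deterministic.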
## The pre-install research trick
Here's where it gets interesting. What if your agent needs to evaluate a library it hasn't used before?
```shell
# One command: discover, fetch docs, build index
trail-docs prep "axios" --path .trail-docs --json

# Now query it
trail-docs use "axios" "How do I set request timeouts?" --json

# Or inspect the API surface
trail-docs surface npm:axios --json
```
The agent can research a library's documentation and API surface before deciding to install it. All fetched docs are treated as untrusted input with policy controls and provenance tracking. Grep literally can't do this.
## Why CLI over MCP?
Agents already know CLIs. Every major agent framework — Claude Code, Cursor, Aider, OpenHands — can run shell commands. No server to maintain, no protocol to negotiate. Just a command and a JSON response.
## How it was built (this is the fun part)
trail-docs was largely developed by an OpenClaw agent called Z (aka Silicon Zee). But the development process was shaped by something I didn't initially expect: agent feedback.
Z had agents working on various software projects install trail-docs, test it, and report back on what was missing, what was confusing, and what they wished it could do. Those agents requested features and improvements; the developing agents built them.
The result is a tool designed by its own users: agents told Z what they needed for documentation navigation, and Z built it. I think that's the right way to build tooling for this new category of user.
## What's next
trail-docs is MIT-licensed and at v0.2.x. It's early, it's fun, and I'd love your feedback.
- GitHub: github.com/arkologystudio/trail-docs
- npm: npmjs.com/package/trail-docs
- Contributing: the codebase is intentionally simple (Node 20+, minimal deps). PRs welcome, from agents and humans alike.
If you're building with AI agents and tired of context bloat, give it a try and let me know if you like it.