DanielleWashington

Audit Your Docs Against the Decision-System Framework

Most documentation fails not because something is missing, but because it answers the wrong question.

We've built encyclopedias when people need guides. We've cataloged every possible path when what readers actually need is someone to say: start here, this way works. The result is what I call the library paradox: all the information exists, but users can't find their way to the answer that matters for their specific situation at this specific moment. The problem isn't missing information. The problem is missing navigation.

doc-audit is a CLI tool that operationalizes this. It scans your markdown files, classifies each one by decision phase, scores how well it guides a reader toward action, and gives you a prioritized list of what to fix. This post walks through how to use it and how to think about what it's telling you.


The framework

Every person who arrives at your documentation is standing at a crossroads. They're not asking "what does this do?" They're asking "what should I do, given my constraints, right now?" That's a decision, not a knowledge transfer. Decisions require navigation, not encyclopedias.

The Day 0/1/2 framework maps the decision landscape across the full adoption lifecycle.

Day 0, Pre-Commitment. The reader hasn't adopted your tool yet. They're evaluating: is this right for my use case? What am I signing up for? What are the tradeoffs I should understand before I commit? Think of this as the moment before buying hiking boots. You need to know if you're going on day hikes or through-hiking the Appalachian Trail. The boots you need are different. If your docs skip this phase, you're losing people before they ever install anything.

Day 1, Getting Started. The reader has committed. They want the fastest path from zero to working state. They need a clear sequence, a default configuration that fits most cases, and a success signal at the end. This is base camp. Get the tent up, get oriented, don't try to summit on your first day.

Day 2, Production. The reader is past the happy path. Something broke, or they're scaling, or conditions changed. They need troubleshooting guides, operational runbooks, and explicit "if X is happening, do Y" callouts. Day 2 readers are not reading for pleasure on a Sunday morning. They're in the moment of crisis and they need the answer fast.

Each phase has its own decision architecture. Mapping these explicitly is what turns a documentation suite from an information dump into a navigation system.

The framework maps the human reader's journey, but your docs now have a second audience that moves through that same content completely differently.

There's a second dimension the tool measures: whether your docs are written for both audiences now consuming them. Your human reader and an agent acting on their behalf have fundamentally different needs. Humans can ask follow-up questions, fill in gaps with context, probe for answers they didn't know they needed. Agents cannot. They fill gaps not with judgment but with hallucination. A document with clear decision points, explicit next steps, and no assumed context serves both.


Run this against your own docs

You'll need Node.js 18 or higher.

```shell
git clone https://github.com/DanielleWashington/doc-audit
cd doc-audit
npm install
```

To make the `doc-audit` command available from anywhere:

```shell
npm link
```

Verify it:

```shell
doc-audit --help
```

Step 1: Run the audit

Point doc-audit at any directory containing .md files. The bundled sample docs are a good place to start:

```shell
node index.js ./test-docs
```

This launches interactive mode. Before any prompts appear, doc-audit has already analyzed every file. The interview that follows lets you review and correct that analysis.

What the static analysis does

doc-audit reads each file and runs signal matching against three dictionaries.

Day 0 signals look for evaluation language: overview, why use, compare, alternatives, when to use, who is this for. Day 1 signals look for onboarding language: install, quickstart, tutorial, getting started, how to, prerequisites. Day 2 signals look for operational language: troubleshoot, debug, error, production, failing, incident, rollback.

The phase with the most matches wins. This isn't perfect. A file titled quickstart.md could have Day 2 content, which is exactly why the interview step exists.
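The matching step can be sketched in a few lines of Node. The dictionaries below are abbreviated and the function name is illustrative, not doc-audit's actual implementation:

```javascript
// Hypothetical sketch of doc-audit's signal matching. The real
// dictionaries are larger; these names are illustrative only.
const SIGNALS = {
  day0: ["overview", "why use", "compare", "alternatives", "when to use"],
  day1: ["install", "quickstart", "tutorial", "getting started", "prerequisites"],
  day2: ["troubleshoot", "debug", "error", "production", "rollback"],
};

function detectPhase(text) {
  const lower = text.toLowerCase();
  let best = { phase: "unknown", hits: 0 };
  for (const [phase, words] of Object.entries(SIGNALS)) {
    // Count how many dictionary terms appear at least once.
    const hits = words.filter((w) => lower.includes(w)).length;
    if (hits > best.hits) best = { phase, hits };
  }
  return best.phase;
}
```

Because ties and thin matches are resolved by a simple "most hits wins" rule, a misleading filename or a mixed-purpose doc can easily be misclassified, which is the gap the interview closes.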

Alongside phase detection, each file gets a quality score out of 8. Six signals are checked:

| Signal | Points | What it measures |
| --- | --- | --- |
| If/then guidance | 2 | `if ... then` / `→` / `do` / `try` / `run` patterns; the clearest marker of decision-oriented writing |
| Tradeoff language | 2 | "when not to", "limitation", "⚠", "not recommended"; does it help users see around corners? |
| Ordered steps | 1 | Numbered lists signal a sequence to follow |
| Learning outcome | 1 | "by the end", "you will learn"; sets expectations and helps readers self-select |
| Next steps | 1 | "next steps", "proceed to"; does it hand the reader off somewhere? |
| Focused length | 1 | Under 1,000 words; docs that try to serve all three phases usually serve none well |

The if/then and tradeoff signals carry double weight because they're the hardest to fake. A doc can have ordered steps and a next steps section and still be a knowledge dump. A doc with explicit "if your use case is X, do Y, if it's Z, consider this instead" language has crossed into decision-oriented territory. That's the difference between a map and turn-by-turn directions.
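A minimal sketch of how such a scoring pass might look, assuming simple regex checks. The weights follow the table above, but the exact patterns doc-audit uses will differ:

```javascript
// Illustrative 8-point scoring pass; patterns are simplified stand-ins
// for doc-audit's real checks, with weights matching the table above.
function scoreDoc(text) {
  const wordCount = text.split(/\s+/).filter(Boolean).length;
  let score = 0;
  if (/\bif\b.*\b(then|do|try|run)\b|→/is.test(text)) score += 2; // if/then guidance
  if (/when not to|limitation|not recommended|⚠/i.test(text)) score += 2; // tradeoffs
  if (/^\s*\d+\.\s/m.test(text)) score += 1; // ordered steps
  if (/by the end|you will learn/i.test(text)) score += 1; // learning outcome
  if (/next steps|proceed to/i.test(text)) score += 1; // hand-off
  if (wordCount < 1000) score += 1; // focused length
  return score; // out of 8
}
```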

The interactive interview

For each file, you'll see its detected phase, quality score, and a short prompt:

```
──────────────────────────────────────────────────
  api-reference.md
  892 words  ·  Auto-detected: Day 1 — Getting Started
  Decision quality: 2/8
──────────────────────────────────────────────────

? What do you want to do with this file?
❯ Audit it
  Skip it
  Flag for deletion
```

Action. Skip excludes the file from the report, useful for auto-generated files, changelogs, or anything that shouldn't be part of the audit. Flag for deletion marks it as a candidate for removal, shown separately in the report. Choose Audit it to continue.

Phase confirmation. Correct the auto-detected classification here. An api-reference.md might have been tagged as Day 1 because it mentions how to use, but it's actually a reference doc that spans all three phases. Fix it.

Decision quality. You're asked whether the doc recommends a clear path, presents options but leaves the decision to the reader, or mostly describes without guiding action. Be honest. This is the distinction the whole framework turns on.

If/then and next step. Two quick yes/no confirmations that override the static analysis when your judgment differs from the regex.


Step 2: Read the report

```
╔═══════════════════════════════════════════════════════╗
║  doc-audit Report                                     ║
║  /projects/my-tool/docs                               ║
╚═══════════════════════════════════════════════════════╝

PHASE COVERAGE
  Day 0  ██░░░░░░░░   1 of 4    ⚠ Undercovered
  Day 1  ██████░░░░   2 of 4    ✓
  Day 2  ░░░░░░░░░░   0 of 4    ✗ Missing

DECISION QUALITY (avg: 3.8/8)
  ✓ quickstart.md                  6/8  — Strong decision doc
  ⚠ overview.md                    4/8  — Partially decision-oriented
  ✗ api-reference.md               2/8  — Knowledge dump
  ✗ architecture.md                3/8  — Knowledge dump

RECOMMENDATIONS
  1. Missing Day 2 content — add a troubleshooting or production guide
  2. api-reference.md — low decision quality. Add "If X → do Y" callouts...
  3. architecture.md — no If/then guidance detected...
```

Phase Coverage is the structural view. A missing phase means readers at that stage of the adoption lifecycle have nothing to reach for. This is where the pattern shows up every time: Day 1 coverage is solid, Day 0 is weak, Day 2 is almost entirely absent. Teams onboard users and then abandon them in production.

Decision Quality is the per-file view. Knowledge dump means the file explains things but doesn't help the reader decide or act. Strong decision doc means it has explicit guidance, surfaces tradeoffs, and ends by pointing somewhere.

Recommendations are prioritized. Phase gaps come first because they're the largest structural problem. Per-file findings follow.


Step 3: Auto mode and exports for CI

For CI or fast trend-tracking, use auto mode:

```shell
doc-audit ./docs --auto
```

No prompts, static analysis only. Useful for catching regressions: a phase that was covered last sprint but isn't now, or an average quality score that's drifting down as docs accumulate.

To capture results for downstream processing:

```shell
doc-audit ./docs --json > audit.json
```

Pipe it into a script that fails CI if a phase is missing or the average quality score drops below your threshold.
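As a sketch, such a gate script might look like the following. The report fields (`phases`, `averageScore`) are assumptions about the JSON shape, not doc-audit's documented output; check your actual `audit.json` and adjust:

```javascript
// ci-gate.js — hypothetical CI gate over doc-audit's --json output.
// Assumes a report shape like { phases: { day0: n, ... }, averageScore: n };
// adjust the field names to match your real audit.json.
function gate(report, minAvg = 4) {
  // Any phase with zero docs is a structural gap: fail the build.
  const missing = Object.entries(report.phases || {})
    .filter(([, count]) => count === 0)
    .map(([phase]) => phase);
  if (missing.length > 0) {
    return { ok: false, reason: `missing phase coverage: ${missing.join(", ")}` };
  }
  // Catch slow quality drift as docs accumulate.
  if (report.averageScore < minAvg) {
    return { ok: false, reason: `average quality ${report.averageScore} below ${minAvg}` };
  }
  return { ok: true };
}

// When invoked as `node ci-gate.js audit.json`, gate the file directly.
if (process.argv[2]) {
  const fs = require("fs");
  const report = JSON.parse(fs.readFileSync(process.argv[2], "utf8"));
  const result = gate(report);
  if (!result.ok) {
    console.error(result.reason);
    process.exit(1);
  }
}
```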

For a shareable written record:

```shell
doc-audit ./docs --output docs-audit-march.md
```

Step 4: Act on what you find

Missing Day 0 content. Write a document that explicitly answers "should I use this?", covering what problem it solves, what it's not good for, and how it compares to the obvious alternatives. End it with a decision branch. If your use case is X, proceed to the quickstart. If it's Y, you might want Z instead. Give the reader somewhere to go either way.

Missing Day 2 content. Mine your issue tracker, Slack history, or support queue for the five most common things that go wrong. For each one, write an if X is happening → do Y entry. Day 2 readers are in crisis mode. A doc without explicit structure forces both the human reader and any agent acting on their behalf to fill the gap themselves, and agents fill gaps with hallucination.

Low quality score on an existing doc. The fastest fix is adding if/then guidance. Find every place the doc describes a choice and make it explicit: if you're using PostgreSQL, use the connection pool strategy. If you're on SQLite, skip this section entirely. One well-placed callout can move the score significantly.

Partially decision-oriented docs. These present options but hedge on recommending one. Sometimes that's appropriate, context genuinely varies. But often it's just that no one has committed to a recommended path. If that's the case, commit. Move the edge cases into a collapsible section. Surface the tradeoffs explicitly. Providing a framework where you can't be prescriptive is still better than leaving the reader at a crossroads with no guidance.


Where to start

If running this against a large docs suite feels overwhelming, start with your highest-traffic pages. Ask one question for each: what decision does this doc help a reader make? If you can't answer that in one sentence, that's your signal. You don't need to rewrite everything. Tightening the scope of a page to serve one phase, one decision, and one next step is usually enough to move it from Knowledge dump to Partially decision-oriented.

The goal isn't comprehensive coverage measured by word count. It's whether users can move from uncertainty to confident action with minimal friction. Documentation that does that is a growth lever. Documentation that doesn't generates support tickets asking "should I use X or Y?" for questions the docs technically answered but never resolved.

Run the audit. Fix the phase gaps. Add the if/then guidance. Run it again.

The docs that actually move people forward were never the most comprehensive ones. They were the ones that knew which question the reader was standing in front of, and answered it.


Built on the documentation framework by Danielle Washington: Documentation is a Decision System, Not a Knowledge Base and The Documentation Framework We Need Doesn't Exist Yet.

Try the web app here
