John Rojas

I kept saying the same five things in every doc review. So I made them defaults.

In my last piece, I wrote about how AI changed my work as a technical writer. I closed with a line I keep coming back to: technical writing is shifting from producing content to building the gates, constraints, and workflows that ensure content is accurate before it ever reaches a reader. This piece is about one of those constraints, given a name: the five things I check in every documentation review, and how I went from reciting them out loud every time to having them run by default.

How I got here

I was reviewing PRs alongside an AI agent. The agent was helpful. The agent was also unevenly helpful. Some reviews came back sharp. Others missed things any first read should have caught. The pattern wasn't in what the agent did. It was in what I kept telling it.

At first, I was iterating on prompts. Trying different framings, different phrasings, different orders of operations. Some produced sharper reviews. Some didn't. And I wasn't writing any of it down.

Then I'd remember a recent review that had come out particularly well, and I'd want to use the same approach again, and I'd realize I had no idea what I'd told the agent that time. I found myself reconstructing the prompts from memory. And badly at that. Yikes.

So I opened a Slack message to myself and pasted in my current working prompt. The one with the five things I kept asking the agent to check. Then I closed Slack and got back to work.

The next time I started a review, I opened that Slack message and pasted the prompt into Cursor. The time after that, I did the same thing. And the time after that. And the time after that. And another. And another. You know where this is going.

I want to say this didn't go on for long. But it did. OMG it did. I was apparently fine pasting the same wall of text into every conversation, multiple times a day, until my own laziness finally made the light bulb go on.

If I'm pasting the same instructions every time, the agent shouldn't be relying on me to paste them. They should just be there. By default.

Something else was nagging at me in the background. Every paste of that long prompt ate up part of the context window before the agent had even started reading the diff. That seemed wasteful. (Spoiler: it was.)

So I started baking the five dimensions into my local preferences. Not the agent rules. Not the slash commands. Just my own preferences and local memory, where I could iterate without touching anything the rest of the team relied on. I wanted to test whether the bake-in actually preserved the catches I was getting from manual prompting. If the local version produced worse reviews than my paste-everything version, I'd know early. And nobody else would have to deal with it.

It didn't produce worse reviews. It produced consistent ones, which is its own kind of better.

So I named the thing I'd been doing all along.

Clarity. Readability. Style. Technical accuracy. Completeness.

Five dimensions. Each one a specific kind of failure I'd seen the agent commit, and a specific kind of catch I'd had to make on my own pass through the diff. Once they lived in the rules, the realization landed: this isn't a checklist I should be reciting from memory before every review. It's a checklist that should run by default.

The five dimensions

Clarity

Technical documentation has an obscurity problem. Layers of concepts, configuration options, dependencies, prerequisites. Somewhere underneath the torrent of context, the actual thing you came to learn. Clarity is the dimension that asks: can a reader who doesn't already know this still follow it?

It catches the sentence that assumes too much. The acronym used before it's defined. The reference to a concept that's actually three concepts. The instruction that skips a step because the author considered it obvious.

When clarity is missing, the reader rereads. Then they reread again. Then they give up and ask in Slack. Documentation that needs a translation isn't documentation. It's a draft.
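
Most of clarity is judgment, but a sliver of it is mechanical. Here's a rough sketch, in Python, of the acronym-before-definition catch. It isn't the tooling we run; the "Long Name (ACRONYM)" convention and the all-caps heuristic are assumptions, and a real pass still needs a human (or an agent) for everything else this dimension covers.

```python
import re
import sys

# Sketch: flag acronyms that show up before the "Long Name (ACRONYM)" pattern
# that usually defines them. The definition convention and the two-plus
# uppercase letters heuristic are assumptions, not real tooling.

ACRONYM = re.compile(r"\b[A-Z]{2,}\b")
DEFINITION = re.compile(r"\(([A-Z]{2,})\)")  # e.g. "Continuous Integration (CI)"


def acronyms_used_before_definition(text: str) -> list[str]:
    defined_at: dict[str, int] = {}
    for match in DEFINITION.finditer(text):
        defined_at.setdefault(match.group(1), match.start())

    flagged: list[str] = []
    for match in ACRONYM.finditer(text):
        acronym = match.group(0)
        definition = defined_at.get(acronym)
        # First bare use lands before the definition, or there is no definition at all.
        if (definition is None or match.start() < definition) and acronym not in flagged:
            flagged.append(acronym)
    return flagged


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as doc:
            for acronym in acronyms_used_before_definition(doc.read()):
                print(f"{path}: '{acronym}' is used before it's defined")
```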

Readability

This one is structural, not conceptual. Readability is about how text feels on the page. The sentence that runs three clauses long without breath. The paragraph that's nine lines tall and looks like a wall. The bulleted list that's actually seven sub-points masquerading as one.

The goal isn't "easy" prose. The goal is prose someone can absorb without working at it. Chunking. Variation. White space. The boring craft of making information accessible to a reader who is tired, distracted, or scanning while their build runs.

When readability is missing, the eye glazes. The reader skims past the part that matters. Worse, they skip the page entirely and search for an answer somewhere else.
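
The structural part of this dimension is the easiest to measure crudely. Here's a sketch of what I mean, with thresholds that are pure assumption; the numbers don't matter, the fact that you can count them does.

```python
import sys

# Sketch: crude structural heuristics for readability. The thresholds are
# arbitrary assumptions; the point is that this dimension is countable,
# not that these particular numbers are right.

MAX_SENTENCE_WORDS = 30
MAX_PARAGRAPH_LINES = 8


def readability_flags(text: str) -> list[str]:
    flags: list[str] = []
    for paragraph in text.split("\n\n"):
        lines = [line for line in paragraph.splitlines() if line.strip()]
        if len(lines) > MAX_PARAGRAPH_LINES:
            flags.append(f"wall of text: paragraph runs {len(lines)} lines tall")
        for sentence in paragraph.replace("\n", " ").split(". "):
            words = sentence.split()
            if len(words) > MAX_SENTENCE_WORDS:
                flags.append(f"long sentence ({len(words)} words): {sentence[:60]}...")
    return flags


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as doc:
            for flag in readability_flags(doc.read()):
                print(f"{path}: {flag}")
```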

Style

Every team has a style guide. Most are imperfect. All of them are non-negotiable inside the codebase they belong to.

Style is the dimension that catches drift from those rules. Present tense where past tense slipped in. The term we use vs. the term we don't. Sentence case in headings when it should be title case, or the other way around. Boolean phrasing when we've agreed on declarative.

I'm a stickler for directional language in particular. We don't allow it. I've been known to chase every instance of "via" across a doc set and hunt each one down to oblivion. (Nah, I just replace them with something else.) It's an itch I have to scratch, and honestly, this kind of work is what I live for. (My obsessive tendencies might be showing. I'm fine with it.)

It's also the dimension AI agents are most likely to silently abandon mid-document. Style rules are pattern-matching against a long list of small constraints. Context windows get full, attention drifts, and what should have been a will/would catch becomes a missed will. The agent didn't forget the rule. It just stopped applying it.

When style is missing, the doc set drifts. Voice fragments. Readers feel the inconsistency before they can name it.
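
Because style rules are a long list of small, literal constraints, this is also the dimension where a deterministic sweep can back the agent up. A minimal sketch follows; the banned-term list is illustrative, not our style guide, and "via" is on it because of course it is.

```python
import re
import sys

# Sketch: a deterministic sweep for the listable style rules an agent tends
# to drop mid-document. The banned-term list is illustrative, not a real
# style guide; swap in whatever your own guide forbids.

BANNED_TERMS = {
    "via": "replace with 'through' or 'by using'",
    "above": "avoid directional language; name the section instead",
    "below": "avoid directional language; name the section instead",
    "will": "prefer present tense",
}


def style_violations(text: str) -> list[str]:
    violations: list[str] = []
    for line_number, line in enumerate(text.splitlines(), start=1):
        for term, suggestion in BANNED_TERMS.items():
            if re.search(rf"\b{term}\b", line, re.IGNORECASE):
                violations.append(f"line {line_number}: '{term}' ({suggestion})")
    return violations


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as doc:
            for violation in style_violations(doc.read()):
                print(f"{path}: {violation}")
```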

Technical accuracy

This is the highest-stakes dimension. Technical accuracy is the dimension that asks: is this actually true?

It catches the configuration property that doesn't exist. The endpoint behavior described backwards. The version compatibility claim that contradicts the upgrade guide. The example that compiled in someone's head but not in a terminal.

The configuration property that doesn't exist is the one I keep coming back to. I added this dimension specifically because of one passage I reviewed that read beautifully. Confident, authoritative, smooth. Then I went to check the configuration properties it referenced. They didn't exist. Not anywhere in the codebase. The text was perfectly written and completely wrong. That's when technical accuracy became its own line item on every review.

The check has two parts. Source-grounded: does this match the codebase, the specs, the schemas, the actual source of truth? Cross-document consistent: does this conflict with what we've already said elsewhere?

Both halves matter. A statement can be technically correct against the code and still wrong in the context of the broader documentation. Two pages can each be internally consistent and contradict each other. AI agents are particularly good at producing both kinds of failure, in perfect sentences, with confident phrasing.

When technical accuracy is missing, readers act on bad information. That's the worst outcome documentation can produce. Everything else is just polish.
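
The source-grounded half is the one sketched below: pull the configuration properties a doc mentions and check each one against a list exported from the codebase. Everything here is an assumption (the backtick convention, the JSON export, the file names); the point is that "does this property exist" is a question with a checkable answer.

```python
import json
import re
import sys

# Sketch of the source-grounded half of the check: does every configuration
# property a doc mentions actually exist? Assumes the codebase can export its
# known properties as a JSON list (known_properties.json) and that docs quote
# property names in backticks. Both are assumptions, not a real pipeline, and
# the regex will happily flag other backticked code as false positives.

PROPERTY = re.compile(r"`([a-z][a-z0-9_.-]+)`")


def unknown_properties(doc_text: str, known: set[str]) -> list[str]:
    mentioned = set(PROPERTY.findall(doc_text))
    return sorted(name for name in mentioned if name not in known)


if __name__ == "__main__":
    with open("known_properties.json", encoding="utf-8") as exported:
        known = set(json.load(exported))

    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as doc:
            for name in unknown_properties(doc.read(), known):
                print(f"{path}: '{name}' doesn't exist in the exported properties")
```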

Completeness

This is the dimension most external to the doc itself. Completeness asks: did we address what the ticket actually said?

It catches the requirement that's documented in the PR description but not in the docs. The acceptance criterion that got lost between planning and merge. The new partial that needs a nav entry. The feature flag that quietly changes behavior in a way someone needs to know about.

The trick with completeness is that it requires holding two artifacts in mind at once: the change being reviewed, and the requirements that motivated the change. AI agents handle this poorly by default. They look at the diff and review the diff. They forget to look back at the ticket and ask whether the diff is enough.

When completeness is missing, the docs ship and a customer finds the gap two releases later.
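
If the failure mode is "the agent only ever sees the diff," the cheapest fix I know is to hand it both artifacts every time. Here's a sketch of that, with hypothetical file names and prompt wording; the real version lives in the review workflow, not a script.

```python
import subprocess
import sys

# Sketch: completeness fails when the reviewer only ever sees the diff, so
# assemble the ticket and the diff into one review context. The ticket file,
# base branch, and prompt wording are all assumptions, not a real workflow.


def build_review_context(ticket_path: str, base_branch: str = "main") -> str:
    with open(ticket_path, encoding="utf-8") as f:
        ticket = f.read()
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    return (
        "Review the diff against the ticket. For each requirement or "
        "acceptance criterion in the ticket, point to where the diff "
        "addresses it, or call out that it doesn't.\n\n"
        f"--- TICKET ---\n{ticket}\n--- DIFF ---\n{diff}"
    )


if __name__ == "__main__":
    # Usage: python review_context.py ticket.md, then paste the output into the review.
    print(build_review_context(sys.argv[1]))
```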

The five at a glance

| Dimension | What it catches | What missing it looks like |
| --- | --- | --- |
| Clarity | Concepts that need decoding | Reader rereads, gives up, asks in Slack |
| Readability | Walls of text, no rhythm | Eye glazes, page gets skipped |
| Style | Drift from style guide rules | Voice fragments, inconsistency felt |
| Technical accuracy | Hallucinations and contradictions | Readers act on bad information |
| Completeness | Missing requirements or acceptance criteria | Customer finds the gap two releases later |

What naming didn't solve

Baking the five dimensions into the rules solved one problem. It didn't solve all of them.

Naming the dimensions told the agent what to check. It didn't tell the agent how to verify the check had actually happened. There's a gap between "the agent ran a clarity pass" and "the agent ran a clarity pass and the output exists somewhere on disk that I can read." For a while, I didn't see that gap. The agent would report clean across all five dimensions, and I'd believe it, because the dimensions were named and the dimensions were "covered."

Then I started catching things on my own review pass that the agent had marked clean. Why? That's a tale for another time.

What might be missing

Five dimensions felt like enough when I wrote them down. I'm less sure of it now.

A few I've been mulling over, but haven't committed to:

Accessibility. Heading hierarchy, alt text on images, link labels that say more than "click here," screen reader behavior on tables and code blocks. Most style guides treat this as a subset of style. I'm not convinced it deserves to live under style; the failure modes are too different.

Cross-document consistency. This currently sits inside technical accuracy ("doesn't contradict what we've said elsewhere"). But the failure mode is structural, not factual. Two pages can each be true and still pull a reader in opposite directions. Worth a separate dimension? Maybe.

Discoverability. A page that's accurate, clear, well-styled, and complete is still useless if no one can find it. Nav placement, search terms, related-links suggestions. Probably belongs somewhere in the framework. I just don't know where.

Audience appropriateness. A doc written for the wrong reader fails before any of the other dimensions get a chance to. Did we write this for the developer, the admin, or the integrator? Each one needs a different shape.

None of these are settled. They might fold into the existing five with a sharper definition. They might warrant standalone dimensions. They might turn out not to matter at the level of automated review. I don't know what I don't know yet. The fun is in finding out.

Are these the kinds of things you check for in your own reviews? I'd be curious to hear what's on your list.

What's next

This is the first piece in a small series I'm working through on AI-assisted doc reviews. Next up: what happened when I discovered that naming the dimensions wasn't enough to make the agent actually check them.

The build is never finished. You just keep improving it. That still suits me fine.


I'm sharing this because the framework isn't a destination. It's a working hypothesis. Five named dimensions emerged from my own habits and my own catches, and the next reviewer to read past them might find a sixth. If you've shaped something similar from your own practice, or watched these dimensions miss things mine hasn't yet, I'd genuinely love to hear about it.

I write about AI-assisted documentation workflows, developer experience, and the evolving role of technical writers. If any of this resonates, let's connect on LinkedIn.
