I'm going to do something unpopular. I'm going to talk about documentation like it's money. Not "documentation is important" money - not the kind where a VP nods thoughtfully and then funds something else. Actual money. The kind with digits and uncomfortable silence in quarterly reviews.
If you've read anything about DocOps - the practice of treating documentation like code, running it through CI/CD, automating quality checks - you've probably encountered the inspirational version. "Documentation as a dynamic asset." "Collaborative knowledge management." "Continuous publishing." Beautiful words. They sound like a LinkedIn post written by someone who's never had to explain to a CFO why the docs budget should exist.
Let me offer something different. A calculator.
## The silent cost of documentation drift
Here's a number most companies don't track: how many support tickets originate from outdated documentation.
They don't track it because nobody categorizes tickets that way. A customer writes "I can't authenticate using the token format described in your guide." Support logs it as "authentication issue." Engineering investigates. Forty minutes later, someone realizes the guide describes OAuth 1.0 and the API moved to OAuth 2.0 four months ago. The ticket gets resolved. Nobody updates the category. Nobody tells the docs team.
Let me build the arithmetic. A ticket like the one above consumes support time to log and route it, forty minutes of engineering investigation, and the handoff back to the customer.
Conservative total: EUR 62.50 per ticket. And that's the cheap version - the one where the customer bothered to write.
Now scale it.
A company with 50+ engineers shipping biweekly has roughly 80-200 documentation pages. In my experience auditing these, 8-15% of pages drift from the actual product within 90 days of a release. Not "slightly outdated." Wrong. Describing features that changed, endpoints that moved, auth flows that were deprecated.
For a 120-page docs site: that's 10-18 pages actively misleading users at any given time.
If each wrong page generates just 2 tickets per month (conservative - popular pages generate far more), that's 20-36 tickets per month at EUR 62.50 each.
EUR 1,250-2,250 per month. EUR 15,000-27,000 per year.
In silent, uncategorized, invisible damage.
And I haven't counted the customers who hit the wrong page and quietly left. The ones who tried your quickstart, got a 404 on step 3, and evaluated your competitor instead. Those don't show up in any dashboard. They just don't come back.
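The scaling math above fits in a few lines of Python - the calculator I promised. All the inputs are the illustrative figures from this article; swap in your own page count, drift rate, and ticket cost.

```python
# Back-of-the-envelope cost of documentation drift.
# Every input below is an illustrative assumption from the article,
# not a measured constant -- replace with your own numbers.

def drift_cost(pages: int, drift_rate: float, tickets_per_page: int,
               cost_per_ticket: float) -> float:
    """Monthly cost of support tickets caused by drifted docs pages."""
    drifted_pages = round(pages * drift_rate)
    return drifted_pages * tickets_per_page * cost_per_ticket

# A 120-page docs site at the 8-15% drift range, 2 tickets per wrong
# page per month, EUR 62.50 per ticket:
low = drift_cost(120, 0.08, 2, 62.50)
high = drift_cost(120, 0.15, 2, 62.50)
print(f"EUR {low:,.0f}-{high:,.0f} per month, "
      f"EUR {low * 12:,.0f}-{high * 12:,.0f} per year")
```

Run it with your real page count and your actual cost per ticket, and the "invisible damage" number stops being invisible.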
"But we have a docs team"
You might. And they're probably excellent writers.
The problem isn't writing quality. It's operational awareness. A docs team that isn't plugged into the release pipeline doesn't know what changed until someone tells them. And "someone tells them" is the most unreliable automation system ever invented. It has a success rate of roughly 30% and degrades sharply on Fridays before long weekends.
The fix isn't "hire more writers" or "make developers write docs" (they won't, and when they do, the results are... educational). The fix is infrastructure.
## What automated docs operations actually looks like
DocOps - the real version, not the conference-talk version - is a set of automated checks that run on your documentation the same way tests run on your code.
Here's what a mature pipeline catches, automatically, on every merge:
### 1. Drift detection
Your docs reference API v2.3. Your OpenAPI spec says v4.7. A script compares them and fails the build. Not in three weeks when a customer notices. Right now, in the PR.
This is the single highest-ROI check you can implement. It takes one Python script, one CI job, and about two hours to set up. It will save you more money in the first month than you spent building it.
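A minimal version of that script might look like this. The file layout (`openapi.json`, a `docs/` tree of Markdown) and the "API vX.Y" reference convention are assumptions - adapt both to your repo.

```python
# Minimal drift check: compare the API versions your docs reference
# against the version declared in the OpenAPI spec.
import json
import pathlib
import re

def spec_version(spec_path: str) -> str:
    """Read info.version from an OpenAPI spec stored as JSON."""
    spec = json.loads(pathlib.Path(spec_path).read_text())
    return spec["info"]["version"]

def find_drift(docs_dir: str, current: str) -> list:
    """Return (page, stale_version) pairs for every mismatched reference."""
    drift = []
    for page in sorted(pathlib.Path(docs_dir).rglob("*.md")):
        for ref in re.findall(r"API v(\d+\.\d+)", page.read_text()):
            if ref != current:
                drift.append((str(page), ref))
    return drift

# In CI: call find_drift() and exit non-zero if it returns anything,
# so the pull request fails right there instead of three weeks later.
```

Wire the non-zero exit into whatever CI system you already run; the point is that a stale version reference becomes a failed build, not a support ticket.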
### 2. Freshness monitoring
Every doc page has a last-reviewed date in its frontmatter. A weekly job scans for pages older than 90 days and generates a staleness report. Pages linked to endpoints that changed since last review get flagged automatically.
This isn't complicated. It's a cron job and a metadata convention. The reason most teams don't have it is that nobody thought to build it, not that it's hard.
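The cron job's core is a sketch like this one. The `last-reviewed` frontmatter key is the metadata convention this article proposes; the name itself is yours to choose.

```python
# Staleness report: flag pages whose frontmatter `last-reviewed` date
# is older than the review window. A page with no date at all is
# treated as stale -- unknown age is worse than known age.
import datetime
import pathlib
import re

def stale_pages(docs_dir: str, max_age_days: int = 90, today=None) -> list:
    """Return paths of pages not reviewed within max_age_days."""
    today = today or datetime.date.today()
    stale = []
    for page in sorted(pathlib.Path(docs_dir).rglob("*.md")):
        m = re.search(r"^last-reviewed:\s*(\d{4}-\d{2}-\d{2})",
                      page.read_text(), re.MULTILINE)
        if m is None:
            stale.append(str(page))
            continue
        reviewed = datetime.date.fromisoformat(m.group(1))
        if (today - reviewed).days > max_age_days:
            stale.append(str(page))
    return stale
```

Schedule it weekly, pipe the output into a report or a ticket, and you have the staleness monitoring described above - a cron job and a metadata convention, nothing more.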
### 3. Quality gates in CI
Before a docs PR merges:
- Vale lints for style consistency (American English, active voice, no weasel words)
- Markdownlint checks structure
- A frontmatter validator ensures every page has required metadata
- A link checker confirms nothing points to a 404
- A code snippet linter verifies that examples actually parse
Five automated checks. Every merge. No human reading 200 pages to find the one place where someone wrote "utilise" instead of "use."
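Vale, Markdownlint, and link checkers exist off the shelf; the frontmatter validator is the one gate you typically write yourself. A sketch, with a hypothetical required-key schema you would replace with your own:

```python
# Frontmatter gate: fail the merge if any page is missing required
# metadata. REQUIRED_KEYS is an illustrative schema, not a standard.
import pathlib
import re

REQUIRED_KEYS = {"title", "last-reviewed", "owner"}

def missing_frontmatter(docs_dir: str) -> dict:
    """Map each failing page to the set of required keys it lacks."""
    problems = {}
    for page in sorted(pathlib.Path(docs_dir).rglob("*.md")):
        m = re.match(r"---\n(.*?)\n---", page.read_text(), re.DOTALL)
        keys = set(re.findall(r"^([\w-]+):", m.group(1), re.MULTILINE)) if m else set()
        missing = REQUIRED_KEYS - keys
        if missing:
            problems[str(page)] = missing
    return problems
```

A page with no frontmatter block at all fails every key, which is exactly the behavior you want in a gate.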
### 4. Content gap detection
Compare your codebase against your docs. Every public function, endpoint, or feature flag that doesn't have a corresponding documentation page shows up in a report. Not "we should probably document that." Here's the list, sorted by user impact, with draft templates ready to fill.
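For a Python codebase, the comparison can start as crudely as this: parse the source with the standard `ast` module, collect public function names, and check whether the docs mention them at all. "Mentioned somewhere" is a deliberately rough proxy for "documented" - it finds the total absences, which is where the report starts.

```python
# Gap report: public functions in the codebase that no docs page mentions.
import ast
import pathlib

def public_functions(src_dir: str) -> set:
    """Collect top-level function names that aren't underscore-private."""
    names = set()
    for mod in pathlib.Path(src_dir).rglob("*.py"):
        tree = ast.parse(mod.read_text())
        names |= {node.name for node in tree.body
                  if isinstance(node, ast.FunctionDef)
                  and not node.name.startswith("_")}
    return names

def undocumented(src_dir: str, docs_dir: str) -> set:
    """Public function names that appear nowhere in the docs tree."""
    docs_text = " ".join(p.read_text()
                         for p in pathlib.Path(docs_dir).rglob("*.md"))
    return {name for name in public_functions(src_dir)
            if name not in docs_text}
```

Sorting the output by user impact and attaching draft templates is the refinement layer; the raw list is a morning's work.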
### 5. SEO and discoverability
Documentation that nobody finds is documentation that doesn't exist. Automated checks for meta descriptions, heading hierarchy, internal link density, and first-paragraph keyword coverage. Because your docs compete with Stack Overflow for your own users' attention, and Stack Overflow has a head start.
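One of these checks, sketched: heading hierarchy. A page where an h1 jumps straight to an h3 confuses crawlers and screen readers alike, and it's a pure-regex check.

```python
# Heading-hierarchy check: flag headings that skip more than one level
# below their predecessor (e.g. an h1 followed directly by an h3).
import re

def heading_skips(markdown: str) -> list:
    """Return the titles of headings that skip a level."""
    skips, prev = [], 0
    for hashes, title in re.findall(r"^(#{1,6}) (.+)$", markdown,
                                    re.MULTILINE):
        level = len(hashes)
        if prev and level > prev + 1:
            skips.append(title)
        prev = level
    return skips
```

Meta descriptions and first-paragraph keywords follow the same pattern: a regex or frontmatter lookup per page, wired into the same CI job as everything else.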
## The math, revisited
Setting up a basic docs-as-code pipeline with automated checks:
The minimum viable version:
- Migrate docs to Git + Markdown: 2-4 weeks of one person's time
- Set up basic CI checks (Vale, linting, frontmatter): 1 week
- Build drift detection against API spec: 2-3 days
- Configure freshness monitoring: 1 day
That gets you started. Call it EUR 8-12K in labor for a senior technical writer or DevOps engineer. You'll catch the obvious problems.
But the obvious problems are maybe 30% of the damage. The rest - semantic inconsistencies between pages, content gaps against your codebase, SEO that actually competes with Stack Overflow, multi-protocol API coverage, knowledge graph maintenance for RAG readiness - that's not a week of setup. That's months of engineering, ongoing maintenance, and expertise that sits at the intersection of technical writing, DevOps, and API design. Most teams don't have that person. The ones that do usually have them doing something else.
The ROI math still works either way. Even the basic version pays for itself:
Annual return:
- Eliminated docs-originated tickets: EUR 15-27K
- Reduced engineering time on "why do the docs say this": EUR 5-10K
- Faster onboarding (new hires find correct information first time): hard to quantify, universally reported as "significant"
- Customers who don't leave because your quickstart actually works: priceless, but also real
Conservative ROI: 2-3x in the first year. And the pipeline gets better over time because it accumulates institutional knowledge about your specific documentation patterns.
Compare this to the alternative: hoping someone notices. Hoping is not a strategy. It's what happens when you don't have one.
## What this has to do with AI
There's a version of this story where AI is the protagonist. "AI will fix your documentation!" It's a good story. It's also incomplete.
AI is excellent at generating content. It's mediocre at knowing when content is wrong. Feed an LLM your contradictory docs and it will confidently synthesize them into a coherent, well-written, completely incorrect answer. This is not a hypothetical - I've seen RAG chatbots do exactly this, and the company that deployed it saw support tickets increase 40% in the first week because customers now had a new, authoritative source of wrong information.
Where AI actually helps in docs operations:
- Generating first drafts from API specs or code comments - a starting point, not a final product
- Flagging semantic inconsistencies between pages that a regex can't catch ("this page says tokens expire in 1 hour, that page says 24 hours")
- Summarizing changes between documentation versions for review
- Suggesting missing sections based on patterns in your existing docs
But AI without operational infrastructure is just a faster way to produce content nobody verifies. The pipeline comes first. The automation comes first. Then AI amplifies what the pipeline already does.
## The uncomfortable question
Here's what I'd ask any VP of Engineering reading this:
Do you know - right now, today - how many pages in your documentation describe something that no longer matches production?
If the answer is "I don't know," that's not a documentation problem. That's a revenue problem wearing a documentation costume. And the costume is getting expensive.
The tools to fix this exist. They're not exotic. Git, CI/CD, a few Python scripts, and the decision that documentation is infrastructure, not content.
The decision is the hard part. The tooling is the easy part.
But the tooling won't build itself. And neither will the process. And "we should really do something about our docs" has a half-life of about 48 hours before it gets deprioritized by something louder.
So. The math is on the table. The approach is described. The question is whether the number is uncomfortable enough to do something about it, or comfortable enough to keep ignoring.
In my experience, it takes exactly one high-value customer churning because your quickstart was wrong to shift the answer from the second to the first.
I'd rather not wait for that.