AI has quietly slipped into our daily developer workflows. It writes code, generates tests, summarises pull requests, and now more than ever writes documentation. From auto-generated Confluence pages to AI-produced READMEs, writing documentation is faster than it has ever been.
But faster does not always mean better.
When it comes to technical documentation, especially Confluence tech docs and READMEs, AI often produces content that exists but is not truly consumable. And that is where the real issue begins.
The Promise of AI in Documentation
Let me start by saying this clearly. I am not advising against the use of AI.
In fact, I have been guilty of using it a lot.
It was breezy. Effortless. Shockingly fast. A Confluence page that would have taken hours appeared in minutes. It felt productive, and for a while it probably was.
AI is genuinely useful for creating structure from nothing, generating consistent templates, translating code into text, and filling in obvious sections so you are not staring at a blank page.
For teams under time pressure, that speed is hard to argue with.
The Moment It Hit Hard
The problem did not show up immediately.
It became apparent when I revisited one of those AI-generated documents a month or so later and could not understand the gist of it.
The words were mine. The page was complete. But the meaning did not stick.
I found myself rereading paragraphs, scrolling up and down, trying to reconstruct what the system actually did and why. That is when it clicked. The document had been written, but it had not been articulated.
The Real Purpose of Documentation
The main purpose of documentation is not to put text on a page.
It is to articulate ideas in a way that is easy to consume, easy to retain, and still clear months or even a year later.
Good documentation survives time.
You should be able to return to it after a year, skim a few sections, and immediately remember what this thing does, why it exists, and what you should be careful about.
AI-generated documentation often fails this test.
Even When We Ask Why and How, It Is Still Not Quite There
A common response to criticism of AI documentation is to just prompt it better and ask for the why and the how.
I tried that too.
And while it helps, the output is still usually overly verbose, abstract, and dense with concepts that sound correct but do not anchor themselves in memory.
You end up with documentation that looks thorough but is not digestible. It demands attention instead of guiding it.
Most readers will not fight through that, especially in Confluence.
Verbosity Is Not the Same as Clarity
AI often equates length with usefulness.
Instead of short explanations, clear mental models, and simple guidance like "you can ignore this unless you are working on X", you get long paragraphs, repeated ideas phrased differently, and a lot of words with very little stickiness.
Because Confluence pages rarely get cleaned up, that verbosity does not get fixed. It just sits there quietly unread.
The Power of Conversational Documentation
The most effective Confluence pages and READMEs feel like a teammate explaining things to you.
They anticipate confusion. They use plain language before technical terms. They tell you what matters and what does not.
For example:

AI style: "This service abstracts the persistence layer for user-related data."

Human style: "This service exists so the rest of the app does not need to know how user data is stored. If we ever change the database, this is the only place we should need to update."
Same meaning. Very different experience.
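To make the contrast concrete, here is a minimal sketch of what the human-style explanation might look like when it lives next to the code it describes. The class, its methods, and the Postgres detail are hypothetical, invented purely for illustration:

```python
# Hypothetical example: the "human style" explanation attached to the code itself.
class UserStore:
    """Keeps the rest of the app from knowing how user data is stored.

    Today that storage is Postgres. If we ever change the database,
    this class should be the only place we need to update.

    You can ignore this module unless you are working on user persistence.
    """

    def __init__(self, connection):
        # The connection is injected so tests can pass in a fake one.
        self._connection = connection

    def get_user(self, user_id: int) -> dict:
        """Fetch a single user as a plain dict, hiding the SQL behind one call."""
        row = self._connection.execute(
            "SELECT id, name, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return {"id": row[0], "name": row[1], "email": row[2]}
```

The docstring does the same job as the human-style sentence above: it explains why the thing exists and when a reader can safely ignore it.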
What Humans Still Do Better Than AI
Humans capture tradeoffs and reality.
They write things like "this is not ideal but it was the safest option at the time", "we optimised for speed here", or "be careful changing this because it has broken things before".
AI tends to smooth over these edges. Humans document them, and that is where real value lies.
Humans also add context that does not live in the code. Past incidents, organisational constraints, decisions made under pressure. Without this context, documentation can be technically correct and practically unhelpful.
And humans reduce anxiety.
They write things like "if this feels confusing, that is normal", "you probably will not need to touch this", or "talk to this team if you get stuck". AI rarely does this naturally.
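As a small illustration of the kind of context AI tends to smooth over, here is a hypothetical comment block of the sort a human might leave behind. The constants, numbers, and the incident mentioned are all invented for the example:

```python
# Hypothetical example of human-written context that rarely appears in AI output.

# This retry count is not ideal, but it was the safest option at the time:
# raising it above 3 overwhelmed the payments provider during a past outage.
# Be careful changing it, because it has broken the nightly export before.
MAX_RETRIES = 3

# If this feels confusing, that is normal. You probably will not need to touch
# this unless you are working on the billing integration. Talk to the platform
# team if you get stuck.
RETRY_BACKOFF_SECONDS = 30
```

None of this lives in the code's behaviour. It lives in the team's memory, which is exactly why it belongs in the documentation.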
How AI Should Be Used
This is not a warning against AI. It is a warning against unreviewed AI.
AI is a fantastic starting point, but it should not be the final author.
A healthier approach is to let AI generate the first draft and then have a human cut verbosity, remove unnecessary jargon, add context and opinion, and rewrite sections in a conversational way.
Once something lands in Confluence or a README, it tends to stay there quietly, unread, for a very long time. That is exactly why it deserves care.
Final Thoughts
AI helped me write documentation quickly. It also taught me how easy it is to mistake speed for clarity.
Documentation is not about writing everything down. It is about articulating the right things in a way that sticks.
AI can explain what exists. Humans explain what matters.
Use AI. Absolutely. Just do not let its output become permanent without a human pass, because documentation that is not consumable does not fade away.
It lingers. Unread. Forever.