AI has quietly slipped into our daily developer workflows. It writes code, generates tests, summarises pull requests, and now more than ever writes documentation. From auto-generated Confluence pages to AI-produced READMEs, documentation is faster than it has ever been.
But faster does not always mean better.
When it comes to technical documentation, especially Confluence tech docs and READMEs, AI often produces content that exists but is not truly consumable. And that is where the real issue begins.
The Promise of AI in Documentation
Let me start by saying this clearly. I am not advising against the use of AI.
In fact, I have been guilty of using it a lot myself.
It was breezy. Effortless. Shockingly fast. A Confluence page that would have taken hours appeared in minutes. It felt productive, and for a while it probably was.
AI is genuinely useful for creating structure from nothing, generating consistent templates, translating code into text, and filling in obvious sections so you are not staring at a blank page.
For teams under time pressure, that speed is hard to argue with.
The Moment It Hit Hard
The problem did not show up immediately.
It became apparent when I revisited one of those AI-generated documents a month or so later and could not understand the gist of it.
The words were mine. The page was complete. But the meaning did not stick.
I found myself rereading paragraphs, scrolling up and down, trying to reconstruct what the system actually did and why. That is when it clicked. The document had been written, but it had not been articulated.
The Real Purpose of Documentation
The main purpose of documentation is not to put text on a page.
It is to articulate ideas in a way that is easy to consume, easy to retain, and still clear months or even a year later.
Good documentation survives time.
You should be able to return to it after a year, skim a few sections, and immediately remember what this thing does, why it exists, and what you should be careful about.
AI-generated documentation often fails this test.
Even When We Ask Why and How, It Is Still Not Quite There
A common response to criticism of AI documentation is to "just prompt it better" and ask for the why and the how.
I tried that too.
And while it helps, the output is still usually overly verbose, abstract, and dense with concepts that sound correct but do not anchor themselves in memory.
You end up with documentation that looks thorough but is not digestible. It demands attention instead of guiding it.
Most readers will not fight through that, especially in Confluence.
Verbosity Is Not the Same as Clarity
AI often equates length with usefulness.
Instead of short explanations, clear mental models, and simple guidance like "you can ignore this unless you are working on X", you get long paragraphs, repeated ideas phrased differently, and a lot of words with very little stickiness.
Because Confluence pages rarely get cleaned up, that verbosity does not get fixed. It just sits there quietly unread.
The Power of Conversational Documentation
The most effective Confluence pages and READMEs feel like a teammate explaining things to you.
They anticipate confusion. They use plain language before technical terms. They tell you what matters and what does not.
For example:
AI style:
This service abstracts the persistence layer for user-related data.
Human style:
This service exists so the rest of the app does not need to know how user data is stored. If we ever change the database, this is the only place we should need to update.
Same meaning. Very different experience.
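The same contrast shows up one level down, in docstrings and code comments. A minimal, hypothetical sketch (the class names and wording here are illustrative assumptions, not from any real codebase):

```python
# Hypothetical illustration: the same service documented both ways.

class UserRepositoryMachineStyle:
    """Abstracts the persistence layer for user-related data."""


class UserRepositoryHumanStyle:
    """Hides how user data is stored from the rest of the app.

    If we ever swap the database, this class should be the only
    place that needs to change.
    """


# Both docstrings are technically correct; only the second tells
# a reader what the class is protecting them from.
print(UserRepositoryHumanStyle.__doc__.splitlines()[0])
```

The first docstring restates the code. The second gives the reader a decision rule they can act on months later.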
What Humans Still Do Better Than AI
Humans capture tradeoffs and reality.
They write things like "this is not ideal but it was the safest option at the time", or "we optimised for speed here", or "be careful changing this because it has broken things before".
AI tends to smooth over these edges. Humans document them, and that is where real value lies.
Humans also add context that does not live in the code. Past incidents, organisational constraints, decisions made under pressure. Without this context, documentation can be technically correct and practically unhelpful.
And humans reduce anxiety.
They write things like "if this feels confusing, that is normal", "you probably will not need to touch this", or "talk to this team if you get stuck". AI rarely does this naturally.
How AI Should Be Used
This is not a warning against AI. It is a warning against unreviewed AI.
AI is a fantastic starting point, but it should not be the final author.
A healthier approach is to let AI generate the first draft and then have a human cut verbosity, remove unnecessary jargon, add context and opinion, and rewrite sections in a conversational way.
Once something lands in Confluence or a README, it tends to stay there quietly, unread, for a very long time. That is exactly why it deserves care.
Final Thoughts
AI helped me write documentation quickly. It also taught me how easy it is to mistake speed for clarity.
Documentation is not about writing everything down. It is about articulating the right things in a way that sticks.
AI can explain what exists. Humans explain what matters.
Use AI. Absolutely. Just do not let its output become permanent without a human pass, because documentation that is not consumable does not fade away.
It lingers. Unread. Forever.
Top comments (15)
Wow, this is such a thoughtful and insightful piece! I love how you balanced praising AI’s speed while highlighting the irreplaceable value of human context and clarity—your reflections really resonate with anyone who’s struggled with unread documentation.
Yes, I struggled quite a bit. I work as a software consultant, and a big part of my role is documenting systems so teams can be self-sufficient. I remember working on an identity component in AWS and documenting it with AI. Authentication is one of those areas where context evaporates quickly, so it was not surprising that a couple of months later, I could not understand what was actually supposed to be done :)
That’s such an honest and relatable experience, and it really highlights how tricky authentication and long-term context can be. It’s clear you put real care into making systems understandable for others, which is a skill many teams deeply benefit from.
Accurate. AI output usually works well for other AI systems to consume in analysis and so on. But for humans to use with real understanding, a human is best placed to have the final say. Usability by humans is best checked by humans. Yes, it is slower, but it is actually better.
First draft AI, final say human. I have experienced this: in our company we leverage AI heavily for analysis, but the final take on the results of that analysis is, and needs to be, human.
I can totally relate to your concern, and thanks for putting it out!
Thank you!
The speed with which AI can generate texts is undoubtedly impressive. By analyzing trends and tendencies, it can accomplish in minutes what would often take humans much longer. However, this efficiency comes at a price: the language often loses its individuality and can become overly complex, polished, and somewhat lifeless.
Really well put. This nails the difference between documentation that exists and documentation that actually helps. Using AI for structure is great, but without a human pass it turns into verbose, forgettable text. The point about docs needing to survive time and explain tradeoffs really hit home. Speed ≠ clarity.
AI is great at producing text, but documentation is definitely about shared understanding over time.
Definitely, these days, I rely on it to get started, mostly for the template or the document skeleton, because that part does take time. But I always proofread and strip out all the extra junk it tends to generate.
I use it to structure something I have written, but I do not use it for the actual content. I really cannot bear its output.
Idea: feed a shortened version of this article to an AI and ask it to write a guideline for writing READMEs. Then ask it to save that guideline globally as a custom slash command, /update-readme. Now in every project you can just type /update-readme and get READMEs that should avoid the problems you described here. Of course, to get exactly where you want, you need to iterate, telling it to avoid things you notice in your test outputs until it gets it right. It is an art.
Nice idea. A slash command like this could help set some basic rules.
The tricky part, as you hinted, is iteration. AI tends to hallucinate and over-explain, which is exactly why a lot of READMEs get ignored in the first place. Without human judgment and pruning, even a good command can produce noisy docs.
So I see this working best as a starting point, not an autopilot. The human edit step is still the most important one.
Thanks for sharing this perspective.