AI can write decent documentation now. That’s the point. The uncomfortable part is what happens when “decent text” becomes cheap.
If your work has ever started with “here are some notes, turn this into a doc,” you already know where the pressure shows up. An LLM can take messy inputs and produce something that looks like a finished page. But good documentation is not the same thing as good-looking writing. Most doc failures come from missing context, outdated details, contradictions, and information that exists in ten places with no single source of truth.
That gap is where I think the role is moving: from producing words to designing knowledge. I’ve been calling it the shift from technical writer to knowledge architect.
A knowledge architect isn’t just polishing prose. They’re shaping the system behind the prose. Which content belongs in reference material versus a guide. How information is organized so users can find it fast. How to prevent drift when the product changes. How to decide what the canonical answer is when the wiki, the help center, and the chatbot all disagree.
AI is useful here, but not as a replacement for the work. More like a power tool.
AI can help draft a first pass, summarize SME interviews, suggest variants for different audiences, and surface areas that look inconsistent. It can even help you find duplicates by comparing pages at scale. But it cannot own the responsibility for truth, and it cannot reliably make judgment calls about what your users need first, what they will misunderstand, or what the business cannot afford to get wrong.
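Finding duplicates by comparing pages at scale doesn’t require anything exotic. Here’s a minimal sketch of the idea using bag-of-words cosine similarity; the page names and text are made-up stand-ins, and a real pipeline would load your actual docs corpus (and likely use embeddings instead of raw word counts):

```python
# Sketch: flag likely-duplicate doc pages by comparing word overlap at scale.
# The `pages` dict is a hypothetical stand-in for a real docs corpus.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: lowercased word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

pages = {
    "wiki/reset-password": "To reset your password open settings and click reset password",
    "help/password-reset": "Open settings and click reset password to reset your password",
    "guide/getting-started": "Install the CLI then run init to create a new project",
}

# Compare every pair; anything above the threshold is a candidate duplicate
# for a human to review -- the tool surfaces, a person decides.
THRESHOLD = 0.8
names = list(pages)
for i, p in enumerate(names):
    for q in names[i + 1:]:
        score = cosine(vectorize(pages[p]), vectorize(pages[q]))
        if score >= THRESHOLD:
            print(f"possible duplicate: {p} <-> {q} (similarity {score:.2f})")
```

The point isn’t the algorithm; it’s that the machine can surface candidates across thousands of pages, while deciding which page becomes canonical stays a judgment call.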
In practice, this shift looks like doing more work in these “knowledge” layers:
You spend more time defining information architecture and less time formatting paragraphs. You create content models and templates that make docs consistent across teams. You push for single sources of truth and controlled vocabularies so content doesn’t fragment. You build governance. You make docs testable, maintainable, and easier to update when engineering inevitably changes the thing.
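“Testable docs” can be as simple as a lint step in CI. Here’s a toy sketch of enforcing a controlled vocabulary; the term map and sample text are invented for illustration, and a real setup would run this over every source file and fail the build on violations:

```python
# Sketch: a tiny "docs lint" that enforces a controlled vocabulary.
import re

# Canonical term -> discouraged variants (assumed example entries).
VOCABULARY = {
    "sign in": ["log in", "login"],
    "API key": ["api token", "secret key"],
}

def lint(text: str) -> list[str]:
    """Return one violation message per discouraged term found."""
    problems = []
    for canonical, variants in VOCABULARY.items():
        for variant in variants:
            for match in re.finditer(re.escape(variant), text, re.IGNORECASE):
                line_no = text.count("\n", 0, match.start()) + 1
                problems.append(
                    f"line {line_no}: use '{canonical}' "
                    f"instead of '{match.group(0)}'"
                )
    return problems

sample = "Click Login to log in.\nPaste your api token here."
for problem in lint(sample):
    print(problem)
```

Tools like Vale do this properly, but the architecture decision is the same: the vocabulary itself, and who owns it, is the knowledge-architect work; the enforcement is just plumbing.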
It’s also a mindset shift. You stop measuring your value by output volume and start measuring it by outcomes: reduced support load, faster onboarding, fewer repeated questions, fewer “tribal knowledge” bottlenecks, and fewer incidents caused by bad guidance.
I wrote a deeper version of this argument here, with more detail on what to learn next and how to talk about the shift in a way that makes sense to hiring managers and teams: https://aitransformer.online/technical-writer-to-knowledge-architect/
If you’re a technical writer, content designer, devrel writer, or anyone who maintains product knowledge, I’m curious: what part of your work feels most “AI-resistant” because it’s really about judgment and structure, not wording?
