For me, the most frustrating part of technical writing was never the writing.
It was the waiting.
Waiting for a subject matter expert to have a free hour. Waiting for a developer to confirm whether something changed between releases. The information existed. It just lived in someone else's head, or somewhere in a codebase I wasn't expected to touch.
To be fair, waiting wasn't always the only option. I've always preferred finding things out for myself, digging into a ticket, tracing a thread, unblocking my own work rather than sitting on a question. But self-directed digging has its own cost. It takes time, it can send you down the wrong path, and without the right tools it's easy to spend an hour finding half an answer. The codebase was there. I just didn't have a good way in.
That changed when my team started experimenting with AI-assisted documentation workflows.
Where I started
I didn't build the workflow. I was asked to test it.
A colleague had been developing an AI-assisted approach to documentation and brought me in to try it, break it, and help refine it before we rolled it out to the wider team. That process taught me more about how AI actually works in practice than any course I've taken.
The core idea was straightforward. Instead of waiting for a developer to explain a feature, we used AI tools to investigate the codebase directly. Read the source. Check the specs. Cross-reference existing documentation. Then plan what to write before touching a single file.
Refining the workflow felt a lot like optimizing a character build in a game. You start with what you have, figure out what works, pick up better tools along the way, and keep tweaking until the whole thing plays smoothly. There's no perfect final state. You just keep improving the build.
What I noticed
Three things stood out.
The first was speed. Not because AI writes faster, but because the back-and-forth shrinks. Shorter dependency chains. Less time blocked waiting for input that may or may not arrive before the deadline.
The second was confidence. I could start a piece of work with a clearer picture of what I was walking into. Instead of beginning from a blank page and a vague (sometimes even empty, shocker, I know) ticket description, I had a verified starting point grounded in what the code actually does. That changes how you work.
The third was autonomy. I could start work on a ticket without needing anyone's permission or availability. That shift is harder to explain than it sounds. After years of structuring your work around other people's calendars, being able to just start is genuinely different.
What I also learned
Three things here too, and none of them are about prompting.
The first is that AI-assisted workflows are only as good as the information they work with. Garbage in, garbage out. If the source material is incomplete or inconsistent, the output reflects that. You still need to know what good documentation looks like. You still need to catch what the AI misses. Judgment doesn't go away. It just gets applied differently.
The second is that critical thinking becomes more important, not less. You're not just reviewing writing anymore. You're reviewing claims. Every output needs to be read with the question: is this actually true, or does it just sound true? AI doesn't flag uncertainty the way a careful human writer might. It states things. Confidently. Whether they're correct or not. In an age where misinformation spreads faster than corrections, accuracy and fact-checking are non-negotiable. That doesn't change just because the content came from an AI. If anything, they matter more.
The third is that domain knowledge and attention to detail are not optional. They're your last line of defense. AI will describe a configuration property that doesn't exist, get the behavior of an endpoint backwards, and do it in perfect sentences. It also skips style rules, not always because it doesn't know them, but because context window limits, or what I can only describe as selective laziness, mean it stops applying them mid-document. Directional language, incorrect linking patterns, inconsistent terminology. If you don't know the subject matter well enough to catch a wrong answer, or the style guide well enough to catch a broken rule, those errors walk straight out the door into your published docs.
Where I think this is going
Technical writing is changing. The role is shifting from producing content to something broader: building the gates, constraints, and workflows that ensure content is accurate before it ever reaches a reader.
The core purpose stays the same. Technically accurate content, usable by a specific audience. But the way we get there is different.
I started this by talking about waiting. Waiting for the right person, the right meeting, the right moment to get unblocked. What I've learned is that the writers who thrive won't be the ones who wait less. They'll be the ones who bring more to the table when they show up: domain knowledge, critical thinking, a sharp eye for detail, and the ability to build workflows that catch what they miss.
The build is never finished. You just keep improving it.
That suits me fine.
I'm sharing this because I want to hear how others are navigating the same shift. There's been no shortage of hype and fear around AI in our industry, and honestly, some of that fear is justified. Across the industry, roles are changing. But changing isn't the same as disappearing. What I've experienced is less replacement and more transformation. The job looks different, demands different things, and rewards different skills than it did a few years ago. If you're in the middle of that shift too, I'd genuinely love to hear how you're doing it.
I'm currently exploring new opportunities in AI-assisted documentation, developer experience, and technical solutions roles. If this resonates, let's connect on LinkedIn.