Siddhartha Mani

Writing for humans is no longer enough. Writing for AI is now part of the job

When my company integrated an AI assistant into our documentation platform, I was part of one of the early teams selected to test it. The goal was to see how well the AI could answer real user questions by using our existing documentation as the source of truth.
What I did not expect was how much this exercise would change the way I think about technical writing.
This was not just about AI accuracy. It was about how documentation itself behaves when consumed by large language models (LLMs).

LLMs are not deterministic, and that's not a bug

One of the first things I noticed while testing the AI assistant was that it sometimes gave different answers to the same question, even when the documentation, the vector database powering retrieval, and the underlying model were all unchanged.
At first, this looked like a reliability issue. But in reality, it revealed something important: LLMs are probabilistic, not deterministic.
They generate answers based on query phrasing, retrieved context, token probability, and semantic similarity search. If the content they retrieve is ambiguous, thin, or poorly scoped, the answers will vary.
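The probabilistic behavior starts at the sampling step itself. The sketch below is a toy illustration (the logits and candidate tokens are invented, not from any real model) of how temperature sampling over next-token probabilities can produce different outputs across runs:

```python
import math
import random

# Toy next-token step: the model scores candidate tokens with logits,
# and sampling with temperature > 0 picks by probability, not by rank.
# These logits are made up for illustration.
logits = {"restart": 2.1, "reboot": 2.0, "reinstall": 0.3}

def sample(logits, temperature=0.8, seed=None):
    rng = random.Random(seed)
    # Softmax with temperature: lower temperature sharpens the distribution.
    weights = [math.exp(v / temperature) for v in logits.values()]
    return rng.choices(list(logits), weights=weights)[0]

# Repeated runs with different seeds can pick different tokens,
# even though the inputs never change.
print([sample(logits, seed=s) for s in range(5)])
```

With two tokens scored almost identically ("restart" vs. "reboot"), the model is nearly a coin flip between them, which is exactly the variation we saw when retrieved context was ambiguous.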

Minimalist writing works for humans but not always for AI

Our documentation followed a clean, minimalist writing style. It was optimized for human readers with short pages, fewer words, and fewer explanations.
But AI struggled with it.
Why? Because minimalist documentation often lacks explicit topic intent, clear product boundaries, disambiguating context, and short summaries that explain what the page is really about.
Humans infer meaning from experience. LLMs don't.

Common terminology confuses AI more than you think

One recurring issue came from shared terminology across products. For example, terms like virtual server, instance, node, and environment existed across multiple products in our ecosystem.
When users asked, "How do I create a virtual server?" the AI sometimes pulled steps from the wrong product, simply because multiple pages used the same wording and the pages didn't clearly state which product they belonged to.
The AI wasn't hallucinating. It was retrieving valid but irrelevant content.
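The effect is easy to reproduce with a toy retrieval sketch (bag-of-words vectors and product names are invented for illustration; real systems use dense model embeddings). Two pages with identical step wording tie on similarity, but stating the product scope in the page text lets the right one win:

```python
import math
import re
from collections import Counter

def vec(text):
    # Toy bag-of-words "embedding"; real retrieval uses dense vectors.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Two products share the same step wording; names are hypothetical.
ambiguous = {
    "vps_page":  "Click Create to launch the virtual server.",
    "bare_page": "Click Create to launch the virtual server.",
}
scoped = {
    "vps_page":  "Acme Cloud VPS: Click Create to launch the virtual server.",
    "bare_page": "Acme Bare Metal: Click Create to launch the virtual server.",
}

query = vec("create a virtual server in acme cloud vps")

for name, pages in [("ambiguous", ambiguous), ("scoped", scoped)]:
    scores = {k: round(cosine(query, vec(t)), 3) for k, t in pages.items()}
    print(name, scores)
```

In the ambiguous set both pages score identically, so retrieval is a coin flip; in the scoped set the page that names its product pulls clearly ahead.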

The fix was surprisingly simple: add more context

We ran a small experiment on selected pages. We added concise descriptions at the beginning of each page, introduced short summaries for all topics and subtopics, enhanced step-level descriptions for better clarity, and reduced unnecessary cross-references.
The result? AI answers became more accurate, search relevance improved, mixed-product responses became rarer, and users' trust in AI-generated answers grew.
After seeing the improvement, we rolled this approach out across more of the documentation set.

Cross-references can hurt AI retrieval

Another lesson we learned was that too many cross-references increase AI confusion.
When pages heavily referenced other topics, the AI sometimes generated answers from the linked pages instead of the main topic, mixed partial steps together, and lost context.
We adjusted our approach by using cross-references only when necessary, keeping primary workflows self-contained, and avoiding circular linking. This reduced retrieval noise and improved answer consistency.
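The "avoid circular linking" rule can even be checked mechanically if you model pages as a link graph. A minimal sketch, assuming a simple page-to-links map (the page names are hypothetical):

```python
# Sketch: detect circular cross-references in a documentation link graph.
# Page names and link structure are made up for illustration.
links = {
    "create-server": ["networking"],
    "networking": ["firewall"],
    "firewall": ["create-server"],  # closes a cycle back to the start
    "billing": [],
}

def find_cycle(graph):
    """Depth-first search; returns one cycle as a list of pages, or None."""
    visiting, done = set(), set()

    def dfs(node, path):
        if node in done:
            return None
        if node in visiting:
            # We re-entered a page still on the current path: a cycle.
            return path[path.index(node):]
        visiting.add(node)
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [nxt])
            if cycle:
                return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for start in graph:
        cycle = dfs(start, [start])
        if cycle:
            return cycle
    return None

print(find_cycle(links))
# → ['create-server', 'networking', 'firewall', 'create-server']
```

Running a pass like this before publishing makes it easy to keep primary workflows self-contained.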

What this means for technical writers

This experience reinforced something important: technical writers are no longer writing only for humans. We are also designing knowledge for AI systems, reducing ambiguity for machine retrieval, and defining how trustworthy AI answers can be.
Good AI answers don't start with better models. They start with better documentation.

Practical writing tips for AI-ready documentation

If you're a technical writer working with AI-assisted documentation, here's what helps:

  • Add short, explicit descriptions to every topic.
  • Clearly state product scope early in the page.
  • Avoid assuming the reader (or AI) knows the context.
  • Disambiguate common terms used across products.
  • Balance minimalist flow with meaningful context.
  • Reduce unnecessary cross-references.
  • Write topic introductions as "retrieval anchors."
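Several of these tips can be enforced with a lightweight lint pass over your pages. A minimal sketch, assuming a hypothetical page format where product scope is declared with an "Applies to:" line (the thresholds and conventions here are assumptions, not a standard):

```python
# Minimal "AI-readiness" lint for documentation pages.
# The page format and the "Applies to:" convention are hypothetical.

def lint_page(body):
    issues = []
    first_para = body.strip().split("\n\n")[0]
    # Tip: topic introductions should work as retrieval anchors.
    if len(first_para.split()) < 15:
        issues.append("intro too short to act as a retrieval anchor")
    # Tip: state product scope early and explicitly.
    if "applies to" not in body.lower():
        issues.append("no explicit product scope (e.g. 'Applies to: ...')")
    return issues

page = "Click Create.\n\nThen wait."
print(lint_page(page))
# → ['intro too short to act as a retrieval anchor',
#    "no explicit product scope (e.g. 'Applies to: ...')"]
```

A check like this won't judge writing quality, but it catches the structural gaps that most often sent our AI assistant to the wrong page.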

Final thought

LLMs don't replace technical writers. They amplify the quality of the documentation we create.
When documentation is clear, scoped, and intentional, AI becomes a powerful assistant. When it isn't, AI simply reflects the confusion already present in the content.
