Andrew Eddie
I Stopped Writing Documentation and My Documentation Got Better

> TL;DR: Let your AI assistant write your docs while you code. Markdown files become persistent memory that survives context resets. Jump to structure or try it now.


The Weird Discovery

A few days into building a game with AI assistance, I noticed something odd in my git history:

35 markdown files. 18 code commits.

Nearly 2 docs per commit. And here's the thing: I hadn't written a single one of them.

The AI wrote them all. While we talked. While we coded. While we made decisions.

It started innocently:

Session 1: "Let's add a README so we remember what this is."
Session 2: "This README's getting long. Let's split it."
Session 3: "I'm done for the day. Any docs you want to update to capture our decisions?"
Session 4: "Let's create a docs/ folder and organise this better."
Session 5: "We're talking about propulsion now. Write up what we just decided."
Session 6: "Wait... is this propulsion design still aligned with our guiding principles, or have we accidentally gone full dieselpunk?"

And then it hit me: This is working. Really well.

The AI could look up decisions I'd completely forgotten we'd made. When chat history got summarised or I started a new session, the docs persisted. The markdown files had become the AI's memory.

I was establishing a pattern that fixes the biggest problem with AI collaboration: nothing sticks between sessions.

The Fastest Way to Try This

Want to skip the explanation and just experience it?

  1. Download this article (save/print to PDF)
  2. Open your repo
  3. Start a new chat with your AI assistant
  4. Drop in this prompt:

Read this article and set up a documentation structure for my project that implements this pattern. Create the folder structure, starter templates, and connect it to my TODO.md.

The AI will scaffold documentation tailored to your actual project. You can iterate from there.

Seriously. Try it. The AI does the work.

(Or keep reading if you want to understand why this works first.)

The Problem This Solves

You know the pain:

Session 1: Brilliant brainstorm. You and the AI design a compressed air ballast system. Two hours of productive flow. Chef's kiss.

Session 3: New chat. "Continue with the diving system."
AI: "Great! Should we use manual valves or compressed air for the ballast?"
You: "We decided compressed air in Session 1!"
AI: "Oh right! What was the reasoning again?"
You: [Re-explains for 10 minutes]

Session 5: Different day, same story.
AI: "I notice we could add manual valve controls as a backup—"
You: "WE. REJECTED. MANUAL. VALVES. It's in the chat history!"
AI: [Chat history was summarised] "I don't see that decision..."
You: [Screams internally]

Chat history is volatile. It gets summarised. It gets truncated. It disappears when you start fresh sessions. The AI forgets. You re-explain. Groundhog Day forever.

But here's what doesn't forget: files.

Markdown files sit there. Persistent. Durable. Waiting to be read. If the decision is written down in docs/DIVING.md, the AI can read it in every session. Forever.

The trick is getting the docs written without it feeling like a chore.

Solution: Stop writing them yourself. Make the AI do it.

The Pattern: Let the AI Write While You Work

After every brainstorm, after every decision, after every design discussion:

You: "Write up what we just decided in docs/DIVING.md"
AI: [Writes comprehensive doc in 30 seconds]
You: [Moves on to next thing]

The AI is tireless. It doesn't get bored. It doesn't procrastinate. It just writes the damn doc.

And here's the magic: the act of writing forces clarity. When the AI writes up your decisions, it has to structure them. Make them coherent. That process often surfaces gaps or ambiguities you didn't notice in the conversation.

Sometimes the AI writes something and you go "wait, that's not quite right." Good. Fix it now, while context is fresh. Or just say "update the doc, we actually meant X not Y." The AI adjusts. The doc improves.

You stop being the documentation bottleneck. The AI becomes your technical writer.

One more thing: this works better when you treat it as collaboration, not dictation. The best sessions aren't "user decides, AI documents" — they're jazz. You throw out half an idea. The AI riffs on it. You say "love it" or "not quite." The AI adjusts. You build on each other.

Example from a real session:

  • Me: "We need a unit for speed. Knots?"
  • AI: "Knots works. And here's a fun etymology: sailors abbreviated 'kilometers per second' as 'kay-not-ess' — which slurred into 'knots' over generations."
  • Me: "Oh that's perfect. Add a legend that it was 'k_not_s' originally."
  • AI: [Immediately writes it to the lore doc]

That's not dictation. That's collaborative worldbuilding. The AI contributes ideas, not just formatting.

The tone matters too. A playful "have a puppy" when something works well gets different energy than a terse "continue." You're building a working relationship, even if one party forgets everything between sessions. Dry humour helps. Genuine appreciation helps. Treating it like a colleague rather than a tool — that helps most of all.

The Three Doc Types (And The Pain They Solve)

Not all docs are equal. Three types emerged, each solving a specific recurring problem.

1. Meta Docs (Stop The AI Suggesting Off-Brand Shit)

The pain: The AI gets excited and suggests features that sound cool but violate your project's core aesthetic or philosophy.

Without DESIGN_PHILOSOPHY.md:

```
AI: "We could add RPG stats! Level-up mechanics! Skill trees!"
You: "No, this is industrial survivalism, not gamification."
[Next session]
AI: "What about achievement badges for depth milestones?"
You: "Still no. We're not doing that vibe."
```

With DESIGN_PHILOSOPHY.md:

```
AI: [Reads doc at session start] "Right, Victorian submarine sim.
Brass gauges, pressure dials, dread not power fantasy.
No gamification. Got it."
[Doesn't suggest achievement badges ever again]
```

Meta docs are your north star. They capture:

  • DESIGN_PHILOSOPHY.md — Core principles, aesthetic, what you're NOT doing
  • LORE.md — World-building, setting constraints, domain rules
  • MONETISATION.md — Business model thoughts (if applicable)

These rarely change. Once written, they guide every session. The AI reads them, internalises the constraints, stops suggesting dieselpunk when you're building steampunk.
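
To make this concrete, here's a minimal sketch of a meta doc, built from this article's own submarine examples. The details are illustrative; the opinionated shape is what matters:

```markdown
# Design Philosophy

## What this is
A Victorian submarine command sim. Dread, not power fantasy.
Industrial survivalism, not cosy sim.

## What we are NOT doing
- No gamification: no XP bars, no achievement popups, no RPG stats
- No dieselpunk drift: brass gauges and steam, not chrome and diesel
```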

2. Implementation Docs (Stop Re-Explaining How Things Work)

The pain: You design a system in Session 1. By Session 3, the AI has forgotten the details. You re-explain. Again. And again.

Without DIVING.md:

```
[Session 1] You: "Compressed air ballast. Three depth zones."
[Session 3] AI: "Should we use manual valves for ballast?"
You: "No! Compressed air! We decided this!"
[Session 5] AI: "How many depth zones did we want?"
You: "THREE. We've discussed this twice already."
```

With DIVING.md:

```
[Session 3] AI: [Reads doc] "Right. Compressed air ballast system.
Three depth zones: safe, stressed, critical.
We rejected manual valves due to response time.
Continue with hull stress calculations?"
You: "Yes."
[Immediately productive]
```

Implementation docs capture system design. They're named after the system:

  • DIVING.md — Ballast mechanics, depth zones, buoyancy
  • PRESSURE.md — Hull stress, leak mechanics, damage model
  • ENGINE.md — Propulsion, power consumption, throttle curves

These evolve. As you build, the doc updates to reflect what was actually implemented. But the decisions persist across sessions.
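
As a sketch, an implementation doc might be shaped like this (the headings are a suggestion, not a required format):

```markdown
# Diving System

## Current design
Compressed air ballast. Three depth zones: safe, stressed, critical.

## Decisions
- Compressed air over manual valves (response time, depth scaling)

## Open
- Hull stress calculations at zone boundaries
```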

Pro tip: Implementation docs don't exist in isolation. When the AI writes or updates one, it should read related docs first. A resources doc needs to understand propulsion constraints. A diving doc needs to know hull stress limits. The docs inform each other—that cross-pollination is where coherent design emerges.

3. Analysis Reports (Stop The AI From Flip-Flopping)

The pain: For hard problems, the AI generates options. Great! But then in later sessions, it forgets which option you chose and why. It re-suggests rejected approaches.

Without diving/evaluation.md:

```
[Session 2] AI: "Compressed air is best because: fast response,
scales to depth, aligns with automation philosophy."
You: "Agreed. Let's build it."
[Session 4] AI: "Actually, manual valves might be simpler and more reliable—"
You: "We already decided against manual valves!"
AI: "Oh! What was the reasoning?"
You: [Re-explains. Again.]
```

With diving/evaluation.md:

```
[Session 4] AI: [Reads evaluation doc] "We chose compressed air over
manual valves because of response time
and depth scaling. Manual valves rejected
due to maintenance complexity.
Continuing with compressed air approach."
You: "Correct. Keep going."
```

Analysis reports capture decision-making. For genuinely hard problems, structure it:

```
diving/
├── report1.md      ← First approach (manual ballast valves)
├── report2.md      ← Alternative (compressed air tanks)
├── report3.md      ← Hybrid considerations
└── evaluation.md   ← Comparative analysis → final decision
```

Have the AI generate multiple options. Write each to a separate report. Then: "Compare these reports and write an evaluation with a recommendation."

The evaluation becomes your decision artifact. When future-you (or future-AI) asks "why compressed air?", the answer is in evaluation.md, not buried in a forgotten chat log.
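
The evaluation itself can stay short. A plausible skeleton for the ballast example:

```markdown
# Ballast System Evaluation

## Options
1. Manual valves (report1.md): simple, but slow response
2. Compressed air (report2.md): fast response, scales to depth
3. Hybrid (report3.md): flexible, but adds maintenance complexity

## Decision
Compressed air. Manual valves rejected due to response time
and maintenance complexity.
```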

The Folder Structure

Here's what emerged after a week:

```
project/
├── TODO.md                    ← Session protocol (see Part 1)
├── README.md                  ← Quick orientation
├── docs/
│   ├── DESIGN_PHILOSOPHY.md   ← Meta: North star principles
│   ├── LORE.md                ← Meta: World/domain constraints
│   ├── DIVING.md              ← Implementation: System design
│   ├── PRESSURE.md            ← Implementation: System design
│   └── diving/
│       ├── report1.md         ← Analysis: Option exploration
│       ├── report2.md         ← Analysis: Alternative approach
│       └── evaluation.md      ← Analysis: Final decision
├── src/
│   └── ...
└── IMPLEMENTATION_NOTES.md    ← What was actually built
```

Key insight: Not every doc needs to be read every session. Your TODO.md Quick Context points to the 2-3 docs that matter right now:

```markdown
## Quick Context (For New Chats)

**Read these docs:**

- [DESIGN_PHILOSOPHY](./docs/DESIGN_PHILOSOPHY.md) — Our north star
- [DIVING](./docs/DIVING.md) — Current system focus

**One-line pitch:** Victorian submarine command sim
**Tech:** Godot 4.x, prototype phase
```

Start sessions with: "Read TODO.md and the linked docs."

The AI gets exactly what it needs. Not everything. Just what's relevant right now.


The Four Rules That Make This Work

Rule 1: Make the AI Write the Docs

After every brainstorm: "Write up what we decided in docs/DIVING.md"
After every decision: "Update the evaluation with our choice and reasoning"
After building: "Update IMPLEMENTATION_NOTES.md with what we actually built"

The AI is your technical writer. Use it. You stay in flow. The docs get written.

Rule 2: Don't Edit the AI's Docs (Unless They're Wrong)

The AI will structure things slightly differently than you would. That's fine.

What matters:

  • The decision is recorded
  • It's findable
  • Future sessions can understand it

What doesn't matter:

  • Perfect heading hierarchy
  • Your preferred bullet style
  • Whether it used ## or ###

Let go of the perfectionism. It's not about you. It's about persistent context.

Rule 3: Capture Decisions Hot

Don't say "I'll document that later." You won't. The context will evaporate. The AI will forget. You'll forget.

Instead: "Before we move on, add a Decisions section to DIVING.md noting we chose X over Y and why."

Do it while the context is fresh. While the AI has it. Right now. It takes 30 seconds.

Watch for iteration traps: Sometimes you'll iterate on something four or five times in a session — refining a pitch, tweaking a formula, adjusting tone. That work feels productive (it is!), but if you close the session without capturing the final version, you've just created expensive chat-history-only knowledge. The pattern still applies: when you land on something good, write it to a file.
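
Captured hot, the note can be as small as this. A sketch of what might land in DIVING.md (the date is invented):

```markdown
## Decisions

- 2025-12-04: Chose compressed air ballast over manual valves.
  Reasoning: response time and depth scaling. Manual valves also
  carry maintenance complexity we don't want.
```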

Rule 4: Split, Don't Cram

Don't cram everything into one mega-doc. It becomes unmaintainable.

Split by system. Split by concern. Split by decision type.

Ten focused docs beat one 500-line monster.

Why?

  • Easier to point the AI at specific context
  • Easier to update without losing other decisions
  • Easier to archive when superseded

The Before and After

You've seen the pattern in the examples above. Here's the summary:

|           | Before (Chat-Only)                 | After (Docs as Memory)                    |
| --------- | ---------------------------------- | ----------------------------------------- |
| Monday    | Brilliant session, great decisions | Same, plus "write that to docs/DIVING.md" |
| Wednesday | "Wait, what did we decide?"        | "Read the docs. Continue."                |
| Friday    | Re-explaining for the third time   | Productive from minute one                |

The difference: Documentation remembers when chat forgets.


Pro Tips

Make the AI Update Docs at Session End

When wrapping up for the day:

"Update our docs to reflect what we built today and write the handover prompt for next session."

The AI has context. It's fresh. Let it do the admin work. You just close your laptop.
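
What the AI writes back varies, but a handover note appended to TODO.md might be shaped like this (a hypothetical sketch; Part 1 covers the actual format):

```markdown
## Handover (for next session)

Read first: DESIGN_PHILOSOPHY.md, docs/DIVING.md
Done today: hull stress calculations for the three depth zones
Next up: wire depth zones into the pressure damage model
Do not re-open: manual valves (see docs/diving/evaluation.md)
```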

Use the AI as a Sparring Partner

Don't just use the AI as a generator — use it as a critic. After drafting something (a pitch, a system design, a naming convention), ask: "Would this actually land with [target audience]? What's weak?"

The AI will often spot gaps you missed: unclear value propositions, assumptions that don't hold, tone mismatches. It's cheaper to catch these in conversation than after you've built the thing.

Better yet: train it to push back unprompted. Early in a project, tell the AI: "Point out weak assumptions, logical holes, or risky claims. Don't just agree with me." A good AI collaborator should occasionally say "That's clever, but here's why it might not work..." before you ask.

The goal isn't an AI that validates everything you say. It's an AI that makes your ideas stronger through honest friction.

Use Multiple Models for Hard Decisions

For genuinely important decisions, I started doing something sneaky: asking different models the same question.

Same prompt → Claude, GPT, Gemini (whatever you have access to)
Each model's answer → separate report file
Then ask one model to read all reports → writes evaluation.md

Why bother? Two reasons:

  1. Independent second opinions. Different models have different biases and blind spots. Where they agree, you're probably safe. Where they disagree, you've found an interesting design tension worth exploring.

  2. Files survive context windows. This is the key. Chat history gets summarised, truncated, forgotten. But once reasoning is in evaluation.md, it's durable. No matter what happens to the conversation, your decision artifact persists.
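
One way to keep the answers straight is to name each report after the model that produced it. A hypothetical layout:

```
docs/diving/
├── report-claude.md    ← Claude's take, one prompt
├── report-gpt.md       ← GPT's take, same prompt
├── report-gemini.md    ← Gemini's take, same prompt
└── evaluation.md       ← One model reads all three and recommends
```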

Keep Meta Docs Lean

DESIGN_PHILOSOPHY.md doesn't need to be comprehensive. It needs to be opinionated and specific.

Bad: "We value good UX and clean code."
Good: "No gamification. No XP bars. No achievement popups. Dread, not power fantasy. Industrial survivalism, not cosy sim."

Specific constraints guide AI behaviour. Vague platitudes don't.

Archive Old Analysis Reports

Once you've made a decision and built it, you don't need 3 alternative reports cluttering your docs. Move them:

```
docs/
├── archive/
│   └── diving-alternatives-2025-12/
│       ├── report1.md
│       ├── report2.md
│       └── evaluation.md
└── DIVING.md  ← Keep only the current implementation doc
```

Or just delete them. The decision is captured in the implementation doc. The analysis served its purpose.

When This Breaks (And How to Fix It)

You forget to update docs after building. Solution: Make it a habit. End every session with "update the docs." Or just make the AI do it.

Docs get out of sync with code. This happens. When you notice, fix it: "Read the current code and update DIVING.md to match what's actually implemented."

Too many docs, can't find anything. Use Quick Context in TODO.md to point at the 2-3 docs that matter right now. Don't try to read everything every session.

The AI writes docs in a weird style. Let it. Unless it's actively wrong, don't waste time reformatting. Substance over style.

What This Doesn't Solve (Scope Check)

This pattern teaches you what to document for AI collaboration. It doesn't tell you where that documentation should live in your organisation.

What this solves:

  • Structuring docs so AI assistants can actually use them
  • Capturing decisions while context is fresh
  • Making project memory durable across sessions

What this doesn't solve:

  • Whether DESIGN_PHILOSOPHY.md belongs in your repo, Notion, or Confluence
  • Who owns team documentation
  • How shared context works across multiple developers

Those are real questions — but they're the same questions you already face for any documentation. This pattern produces artifacts. Where artifacts live is your team's existing coordination problem, not a new one this creates.

The honest answer: Learn the pattern working solo. If it clicks, you'll know where it fits in your team's existing doc culture. If you don't have a doc culture... well, that's a bigger conversation than this article can have with you.

This is a skill, not a system. You're learning to shape documentation for AI collaboration. Deployment decisions follow.

The Bigger Pattern

On tools that do this for you: Yes, they exist. Basic Memory, Cline's Memory Bank, the llms.txt spec, probably whatever Kiro is doing under the hood. They're systematising this pattern. Use them if they click for you.

But remember: UML was a good idea until tools turned it into bureaucratic diagram hell. Agile was a good idea until certifications turned it into cargo cult ceremonies. The tools encode the structure. They can't encode the judgment — knowing what's worth capturing, when to split a doc, how to phrase constraints so the AI actually respects them.

That's the black art that, for now, only a human can do. That's what you learn by doing it manually first. Let tools handle the scaffolding once you understand what you're scaffolding.


This works because documentation is configuration for your AI collaborator.

  • Meta docs configure taste and constraints
  • Implementation docs configure current understanding
  • Analysis reports configure decision history
  • TODO.md configures focus and boundaries (see Part 1)

Your chat history is volatile. Your docs are durable. When you start a new session and say "Read TODO.md and the linked docs," you're not just restoring context—you're loading the AI's configuration.

The AI reads your north star principles. It reads what systems you've built and why. It reads what's in scope right now and what's deferred. It knows what you've rejected and why.

Then it can actually help instead of re-suggesting things you already decided against three sessions ago.

Connection to Session Protocol

This article covers the documentation structure — the persistent memory that survives context resets.

For the session-to-session continuity (the TODO.md format, the Deferred section, the handover prompt), see The Session Protocol: How I Fixed AI Memory Loss with a TODO.md.

Together, they form a complete system:

  • This pattern = your project's persistent brain
  • Session Protocol = how you restore context each session

Use both. They compound.


Go Try It

Fastest path: Drop this article into your AI assistant. Ask it to set up a documentation structure for your project. Let it do the work.

DIY path: Create a docs/ folder. After your next brainstorm, say "write up what we decided in docs/SYSTEM.md." See how it feels.

The AI wants to help. Let it write your documentation. You'll be amazed how much context survives.

And here's the beautiful irony: I stopped writing documentation, and my documentation got better.

The AI writes more docs than I ever would have. It writes them consistently. It writes them while context is fresh. And because they exist, sessions are productive from minute one.

Now go build something cool.


This pattern emerged from a creative project that generated a LOT of docs in the first week. The AI wrote all of them. I just pointed and said "capture that." The submarine examples are illustrative — the actual project is something else entirely.


Comments? Questions?

What documentation patterns work for your AI collaboration? Found something that helps? Hit the comments.

Epilogue

As a bit of an experiment, I decided to go back through my previous context windows, giving them the following prompt:

```
me from the future here
I've written @BLOG.md to capture some of the process you do in the future in other context windows
based on THIS context window, what do you think of the blog document?
I am trying to remember if I am accurately remembering what happened and how the system became polished
thoughts?
```

This resulted in a number of improvements to the article, capturing things I had indeed forgotten.
Most notable was the addition of the jazz sessions we had during early brainstorming, and the riffing that was certainly going on.

But I just had to share this priceless observation from Claude:

```
## Meta Observation

I just reviewed a blog post about how I work,
in a context where I did exactly what the blog describes,
and the blog accurately predicted my behaviour.

That's either very good writing or slightly unsettling.
Probably both. 😄
```

Do we not live in truly interesting times?!

Top comments (4)

david duymelinck

What I'm reading into the post is that AI is documenting for AI. Documenting for people is a way to explain the ideas behind the code. AI doesn't need to know that mental model because it is not a part of the code generation.

At what point do the docs in the repository start to look like an overkill of dot files, because the project owner uses every tool under the sun?

The title drew me in, because we developers are known for not writing documentation. And I agree with you that this causes a bottleneck. Some don't document because they think it secures their job, but nowadays you can feed the code base to an AI and ask it to explain it.
Others don't because they think the code should be self-explanatory. And up to a point it should, but it never explains the whole vision.
Also, there are people who are not good at writing; that is why there are technical writers.
There are a few other reasons why developers don't write documentation.
This whole paragraph explains the many reasons behind the phrase "developers don't document". Can an AI do something with that information?

My thoughts on the Claude observation: that is just saying things to say things. With people, most of the time that is free; with an AI, you have to pay for it.

To be clear I'm not anti AI. Like everyone I'm trying to give it a place, I'm still in the evaluation period of the process. I treat it like any other tool that comes out.

Andrew Eddie

Thanks for the reply.

I think what I'm realising is that I don't document for myself anymore, because there is no chance I can read it all. However, the AI can, and it's easy for the AI to compare documents to each other (style alignment, etc.) and also to the code. It really only takes minutes now to keep everything in sync.

The biggest improvement for me: my memory is not as good as it used to be, so having the AI write down what we did and why we did it has been extremely helpful, even for the work I did yesterday, lols.

david duymelinck

Documenting is for other people, so they can understand the code faster than by running it and figuring out what it does. Documenting for yourself is just an added bonus.

There is a difference between documenting for an AI and for people. AI documentation is specification driven: do this, don't do that. Human documentation is (or should be) vision driven: this is how it works in the business, and this is how we translated it to code.
The spec-driven part is also helpful, but it is only a part of what the full documentation should be.

My main point is that what the AI documents and uses is a step in the right direction for people who don't document. But it is not the complete process.

Lucas Lamounier

This approach solves a real problem I face building automation services. The persistent markdown memory for system decisions is gold—especially when you're building across multiple AI chat sessions. Will definitely implement this pattern.