Originally published on Hashnode. Cross-posted for the DEV.to community.
Two years ago I inherited a project from an engineer who had left the company. The codebase was clean. The test coverage was reasonable. The architecture was defensible. The documentation was a single README that said "TODO: write docs." There were 200 commits, three deployment environments, a set of cron jobs, and a database schema with 47 tables. None of it was documented.
I spent six weeks figuring out how the system worked before I felt comfortable making changes. Six weeks. The original engineer had probably written the whole thing in three months. I lost nearly half of his entire build time to the absence of a document he could have written in an afternoon.
That experience changed how I think about documentation. Documentation is not a nice-to-have that you write when you have time. Documentation is a force multiplier for everyone who comes after you, and the math on whether it is worth writing is almost always overwhelmingly in its favor. The reason most teams ship without documentation is not that the math is bad. It is that writing documentation is tedious, and the people who would benefit from it are not in the room when the decision is made.
Claude Code changed this for me. Documentation that used to take an afternoon now takes 15 minutes. Documentation that I would have skipped because the cost was too high now gets written because the cost is trivial. Here is the workflow.
Why Documentation Goes Unwritten
Most engineers do not skip documentation because they think it is unimportant. They skip it because the cost feels disproportionate to the benefit at the moment they would have to write it. You just shipped a feature. You are tired. The next feature is already lined up. The documentation is for some hypothetical future engineer who probably will not need it. You skip it.
Six months later, you are that engineer. You stare at the code you wrote and try to remember why a particular decision was made. You cannot. You spend an hour reverse engineering your own thinking. The cost was real. It was just deferred.
The second reason documentation goes unwritten is that the kind of documentation engineers can write quickly is the kind of documentation that nobody reads. Inline comments are easy and largely useless. JSDoc blocks that restate the function signature are easy and largely useless. The documentation that actually helps people is the documentation that captures intent, context, and tradeoffs. That kind of documentation is hard to write because it requires you to step out of implementation mode and think about what someone else would need to know.
The third reason documentation goes unwritten is that there is no obvious place to put it. Should it go in the code as comments? In a docs folder as markdown? In a wiki? In a knowledge base? Each option has tradeoffs and most teams pick one and then regret it later. The friction of figuring out where the documentation belongs is enough to make people skip writing it.
The cost of documentation feels high in the moment of writing it and low when reading it. The cost of missing documentation feels low in the moment of skipping it and high every time someone has to reverse engineer the missing context.
Claude Code does not change the math on whether documentation is worth writing. That math was always favorable. Claude Code just makes the writing fast enough that the in-the-moment cost stops being a barrier.
The Module Documentation Skill
When I finish a module, I run the module documentation skill. The skill takes the module source code and produces a markdown document with the following sections.
The first section is what this module does, written in two to four sentences. Not what each function does. What the module as a whole accomplishes. This is the section that future engineers read first to decide whether this module is the one they need to be looking at.
The second section is the public interface. What can callers do with this module? What are the inputs and outputs? What are the error conditions? This section is what Claude Code generates well from code, because the public interface is mostly mechanical.
The third section is the design choices. Why was this module structured this way? What alternatives were considered? What tradeoffs were made? This section is the one that requires actual thought, and it is the one Claude Code does not generate automatically. I write this section as a prompt for Claude Code to fill in based on context I provide. Sometimes I dictate a paragraph and ask Claude Code to clean it up. Sometimes I ask Claude Code to read the code and propose what the design choices probably were, which I then correct.
The fourth section is the gotchas. What surprised me about this module? What is non-obvious? What edge cases caused bugs that I had to fix? This section is the most valuable for future maintenance and the easiest to forget to write, because the gotchas seem obvious to me right after I have just dealt with them.
The fifth section is the change history. Major versions, the reasons for them, and links to the PRs. This is what tells future engineers whether the current behavior is the original intent or a deliberate departure from it.
The skill produces a draft of all five sections. I review the draft, fix the parts Claude Code got wrong, fill in the parts Claude Code could not infer, and commit the file alongside the module. The whole process takes 15 minutes for a module that took me a day to write. The ratio is right.
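Concretely, the skeleton the skill fills in looks roughly like this. The module name and wording are placeholders, not the skill's literal output:

```markdown
# orders-sync module

## What this module does
Two to four sentences on what the module as a whole accomplishes.

## Public interface
What callers can do: inputs, outputs, error conditions.

## Design choices
Why this structure, which alternatives were considered, which tradeoffs were accepted.

## Gotchas
Non-obvious behaviors, surprising edge cases, bugs that were fixed here.

## Change history
Major versions, the reasons for them, links to the PRs.
```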
The README Skill
Every repository should have a README that someone unfamiliar with the project can read in five minutes and walk away with a working mental model. Most repositories do not have this README. They have either a stub README that says "this is the [project name] repository" or a sprawling README that tries to be comprehensive and ends up being unreadable.
The README skill takes the repository structure, the package configuration, the recent commit history, and any existing documentation, and produces a draft README with these sections.
A one-paragraph description of what the project is and who it is for. The audience matters more than the description. A README that does not tell me whether I am the intended audience is a README I will skim and forget.
A quick start guide that walks through the most common setup path. Not every possible setup path. The one that 80 percent of new contributors will use. The other paths can have their own dedicated documentation pages.
A high-level architecture overview. Three to five sentences about the major components and how they fit together. This is the section that helps somebody figure out where to look when they want to make a change.
A pointer to the deeper documentation. The README is a starting point, not a comprehensive guide. It should make it easy to find the deeper material when the reader needs it.
A contribution guide. How are issues tracked? What is the PR process? What conventions does the team follow? This section is what makes the difference between a repo that strangers can contribute to and a repo where strangers bounce off without contributing.
The skill produces a complete first draft. I edit it, sometimes substantially, and commit. The README that used to take a half day to write now takes 30 minutes including my edits. More importantly, the README actually exists, which is a meaningful improvement over the previous baseline.
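For reference, the section skeleton the draft follows looks something like this. The project name and wording are placeholders:

```markdown
# project-name

One paragraph: what this is and who it is for.

## Quick start
The single setup path that most new contributors will use.

## Architecture
Three to five sentences on the major components and how they fit together.

## Further reading
Pointers to module docs, ADRs, and tutorials.

## Contributing
Issue tracking, PR process, team conventions.
```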
The Claude Code memory files workflow is what makes the README skill produce useful output instead of generic boilerplate. Claude Code reading the project context once and remembering it across documentation tasks is what changes the output quality.
The Architecture Decision Record Skill
Some decisions deserve a permanent written record. Not every decision. The decisions where future engineers might wonder "why did we do it this way" and where the answer is non-obvious. Architecture Decision Records (ADRs) are the standard format for this kind of documentation, and they are profoundly underutilized.
The ADR skill takes a brief description of a decision, the context that led to it, the alternatives considered, and the tradeoffs accepted, and produces a properly formatted ADR. Each ADR has a number, a title, a status, a date, the context, the decision, the consequences, and the alternatives.
The reason ADRs are underutilized is that the format feels heavyweight relative to the value of any individual decision. Engineers think "this decision is not big enough to deserve an ADR" and so the ADR does not get written. Six months later the decision turns out to have been bigger than they thought, and now there is no record.
The skill changes this calculus. Writing an ADR no longer takes 30 minutes. It takes five. The threshold for "big enough to deserve an ADR" can drop accordingly. I now write ADRs for decisions I would have left undocumented two years ago, and the ADRs are paying off in conversations where I can point to the document instead of trying to reconstruct the reasoning.
The format I use:
```markdown
# ADR 042: Use cursor pagination for the orders API

Status: Accepted
Date: 2026-04-15

## Context
The orders API returns lists of orders to mobile clients. Order
volume is high enough that offset pagination causes issues at
high page numbers (slow queries, inconsistent results across
pages when new orders are inserted).

## Decision
Use opaque cursor-based pagination. Cursors are base64-encoded
JSON containing the last-seen order id and timestamp.

## Consequences
- Clients cannot jump to arbitrary pages, only navigate forward
- Cursors are stable across data changes
- Cursor format is not part of the public contract and may change
- Migration from offset pagination requires a deprecation window

## Alternatives considered
- Offset pagination: rejected due to performance and consistency
- Keyset pagination with exposed keys: rejected due to leaking internal id format to clients
- Time-based pagination: rejected because orders within the same millisecond can collide
```
This format is short enough that writing it does not feel like a chore. It is structured enough that future readers can find the parts they care about quickly. The skill produces drafts in this format from a brief verbal description of the decision.
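The cursor scheme the ADR describes is simple enough to sketch in a few lines. This is a minimal illustration of the idea, not a production implementation, and the payload field names are assumptions:

```python
import base64
import json

def encode_cursor(last_id: int, last_ts: str) -> str:
    """Pack the last-seen order id and timestamp into an opaque token."""
    payload = json.dumps({"id": last_id, "ts": last_ts})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_cursor(cursor: str) -> dict:
    """Unpack a cursor back into its pagination fields."""
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

# Round trip: the client treats the token as opaque; only the server
# ever looks inside it, which is what keeps the format out of the
# public contract.
token = encode_cursor(10452, "2026-04-15T09:30:00Z")
assert decode_cursor(token) == {"id": 10452, "ts": "2026-04-15T09:30:00Z"}
```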
The API Documentation Skill
API documentation is its own discipline. Module documentation tells you how a piece of code works internally. API documentation tells you how to call a piece of code from outside it. The two have different audiences and different requirements.
I covered API documentation in detail in my Claude Code for API design article. The short version is that API documentation should be generated from specifications, not from code, and the specifications should be written before the code. Claude Code makes both halves of that workflow practical.
The relevant skill for this article is the one that takes existing code that does not have specifications and reverse-engineers documentation from it. This is what you do when you inherit an undocumented API and need to bootstrap documentation without rewriting everything from scratch.
The skill reads the route handlers, the request validation, the response shapes, and the tests, and produces a draft specification document for each endpoint. The draft is incomplete because the code does not always tell you the full story. Authentication requirements might be enforced by middleware that is not visible in the route handler. Idempotency behavior might be implicit in the database constraints. Error responses might depend on conditions the code only handles indirectly.
I review the drafts and fill in the gaps. The drafts get me 70 percent of the way there. Closing the last 30 percent is the part that requires my judgment. But starting from a 70 percent draft is dramatically faster than starting from nothing.
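To make the mechanical first pass concrete, here is a toy sketch of the kind of scan involved, assuming Flask-style route decorators (the decorator pattern and stub wording are my assumptions; the actual skill also reads validation and tests, and things like auth middleware still need human review):

```python
import re

# Match Flask-style decorators: @app.route("/path", methods=["GET", ...])
ROUTE_RE = re.compile(
    r'@app\.route\(\s*"(?P<path>[^"]+)"'                    # the URL path
    r'(?:\s*,\s*methods\s*=\s*\[(?P<methods>[^\]]*)\])?'    # optional methods list
)

def endpoint_stubs(source: str) -> list[str]:
    """Scan route decorators in source text and emit a draft doc stub per endpoint."""
    stubs = []
    for match in ROUTE_RE.finditer(source):
        methods = match.group("methods") or '"GET"'  # Flask defaults to GET
        methods = ", ".join(m.strip().strip('"\'') for m in methods.split(","))
        stubs.append(
            f"### {methods} {match.group('path')}\n"
            "TODO: describe request, response, and error shapes."
        )
    return stubs
```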
The Tutorial Skill
Reference documentation tells you what is possible. Tutorials tell you how to actually do something useful. Most projects have reference documentation and no tutorials, which is why most projects have a steep onboarding curve.
The tutorial skill takes a goal ("connect this service to a Postgres database with TLS," "set up authentication with custom JWT claims," "deploy this service behind a load balancer") and produces a step-by-step tutorial with code examples, explanations, and troubleshooting tips for the common failure modes.
The tutorials are not autogenerated content with empty filler. They are actual narratives that walk a reader from a starting state to a completed setup, with the reasoning visible at each step. The skill produces these narratives by reading the code, the existing documentation, and the issue tracker (where troubleshooting tips often live as resolved tickets).
I edit the tutorials before publishing. Sometimes I add screenshots. Sometimes I correct steps that Claude Code got slightly wrong because the documentation was outdated. But the structure is sound and the content is mostly correct, which is what matters. Tutorials I would not have written because the cost was too high now exist because the cost is trivial.
If you are starting a new project and want documentation built into the workflow from day one, the CLAUDE.md context file pattern is how you make Claude Code understand your project well enough to produce documentation that does not feel generic.
The Inline Comment Skill
Inline comments are a paradox. Most inline comments are noise. Comments that restate what the code already says are worse than no comments because they take up space and rot when the code changes. But the inline comments that explain non-obvious decisions are gold. The trick is writing the second kind without writing the first.
The inline comment skill reads code and proposes inline comments only for the lines where context is genuinely missing. Hidden constraints. Subtle invariants. Workarounds for specific bugs. Behaviors that would surprise a reader. Things that an engineer reading the code six months from now would wonder about.
The skill is conservative by design. If it is not sure that a comment adds value, it does not propose one. The proposed comments are short, factual, and focused on the why rather than the what.
I review the proposals and accept the ones that make sense. Usually I accept three or four out of every ten proposed. The rest I either reject (the comment was redundant) or modify (the comment had the wrong emphasis). The result is that the code has comments where comments are useful and is comment-free where comments would be noise.
This is the kind of detail work that I would never have time to do manually but that meaningfully improves the readability of code I revisit months later.
The Changelog Skill
Changelogs are documentation that nobody writes and everybody wants. Users want to know what changed in the version they just upgraded to. Maintainers want to remember why they made certain changes when they look back at the version history. Both groups are usually disappointed.
The changelog skill takes the commit history between two release tags and produces a human-readable changelog with sections for new features, improvements, bug fixes, breaking changes, and deprecations. The classification is based on the commit messages and, when those are inadequate, the actual code changes.
The skill is not magic. It cannot tell you which changes are exciting and which are boring. But it can produce a complete first draft that captures the structural changes accurately. I edit the draft to add commentary, group related changes, and highlight the things users actually care about. The whole process takes 20 minutes per release. Without the skill, it would take two hours, which is why I used to skip it.
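The structural half of that classification can be sketched in plain code. This toy version assumes conventional-commit subjects and a hypothetical prefix-to-section mapping; the actual skill leans on Claude Code to classify commits whose messages are inadequate, which no regex can do:

```python
import re
from collections import defaultdict

# Prefix-to-section mapping is an assumption; adjust to your team's convention.
SECTIONS = {
    "feat": "New features",
    "fix": "Bug fixes",
    "perf": "Improvements",
}

def draft_changelog(subjects: list[str]) -> str:
    """Group commit subjects into changelog sections; a '!' marker means
    breaking change, and unrecognized subjects land in 'Other'."""
    grouped = defaultdict(list)
    for subject in subjects:
        match = re.match(r"(\w+)(\(.+\))?(!)?:\s*(.+)", subject)
        if match and match.group(1) in SECTIONS:
            section = "Breaking changes" if match.group(3) else SECTIONS[match.group(1)]
            grouped[section].append(match.group(4))
        else:
            grouped["Other"].append(subject)
    lines = []
    for section, entries in grouped.items():
        lines.append(f"## {section}")
        lines.extend(f"- {entry}" for entry in entries)
    return "\n".join(lines)
```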
The Cost of This Workflow
The total time investment to set up the module documentation, README, ADR, API documentation, tutorial, inline comment, and changelog skills was about two days. Most of that was iterating on the prompts to produce output I trusted. The ongoing cost is essentially zero. The skills run as part of my normal development flow.
The benefit is that the projects I work on now have documentation. Not perfect documentation. Not comprehensive documentation. But the kind of documentation that makes a difference for the next engineer who has to work on the codebase. The README explains what the project is. The module documentation explains how the modules work. The ADRs capture the major decisions. The tutorials cover the common workflows. The changelog tracks the releases. The inline comments illuminate the non-obvious lines.
The six weeks of context recovery I lost on that inherited project would not happen under this workflow. The original engineer would have run the skills as part of finishing the project, the documentation would have been comprehensive enough for me to onboard in days rather than weeks, and the company would have gotten back five weeks of my time that it instead spent on me reading code.
The Bottom Line
Documentation is a leverage activity that most engineers skip because the in-the-moment cost feels too high. The cost was always lower than the benefit. Claude Code makes the cost actually low, which removes the last excuse for skipping it.
If you have ever inherited an undocumented codebase, you know how much time gets lost to the absence of context. The engineers who came before you were not lazy or careless. They were busy and the documentation was the thing they could safely skip. Claude Code removes "safely skip" as an option by making documentation cheap enough that there is no longer a reason to skip it.
If this resonates and you want to build a documentation pipeline into your team's workflow, the Claude Code skills guide shows how to package these workflows so that every engineer on the team gets the leverage automatically. The hardest part of documentation is making it routine. Skills make it routine.
The codebases I am proudest of are the ones future engineers will actually be able to read. Claude Code is what makes that possible.