Every AI coding session starts the same way.
You open a new chat. You explain the project. You explain the stack. You explain the decisions you made last week and why. You spend 15 minutes giving context before writing a single line of code.
Then the session ends. The context disappears. Next time, you start over.
I got tired of this. So I built a system around a single file called CLAUDE.md — and it changed how I work with AI completely.
What CLAUDE.md is
It's a plain text file that lives at the root of every project I work on. Claude Code reads it automatically at the start of every session.
Not a README. Not documentation for other developers. This file is written specifically for the AI — it contains everything the model needs to pick up exactly where we left off without me having to re-explain anything.
The difference sounds small. It isn't.
What goes in it
The file has four sections that I've refined over months of daily use.
Current State is the entry point. It's the first thing Claude Code reads and it answers three questions: what's working, what's broken, and where to pick up next session.
```markdown
## Current State — last updated [date]
- What's working: Hub SSO, all agents, Log Center
- What's broken: standalone tool still on old version
- Next session: bump versions, publish release
```
This section gets updated at the end of every session. More on that later.
Architecture Decisions is where I explain the why, not the what. Not "we use JWT auth" but "we use JWT auth with in-memory sessions because adding a persistence layer would complicate the Docker setup for homelab users — revisit when user base grows." The reasoning, the trade-offs, the constraints that shaped the decision.
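A sketch of what an entry in that section might look like, built from the JWT example above (the wording is illustrative, not a real excerpt):

```markdown
## Architecture Decisions
- JWT auth with in-memory sessions. Why: a persistence layer would
  complicate the Docker setup for homelab users. Trade-off: sessions
  are lost on restart. Revisit when the user base grows.
```

The "Why" and "Trade-off" lines are the part that matters; the decision itself is usually visible in the code anyway.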
This is the section that saves me from re-litigating decisions. When Claude Code suggests something that contradicts a past decision, it sees the reasoning and understands why the alternative was rejected.
Conventions is a list of rules Claude Code must follow. Specific ones, not generic ones. Not "write clean code" — that's useless. Instead: things learned from actual debugging sessions that would take 30 minutes to rediscover. Exact patterns that must be followed. Edge cases that look fixable but aren't.
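As a sketch, with invented rules of the kind described (hard-won, specific, tied to a real debugging session):

```markdown
## Conventions
- Mount the Docker socket read-only in every agent; the writable
  mount caused the restart loop we debugged on [date].
- All frontends use the shared Hub design tokens; no local overrides.
- Do not "simplify" the agent reconnect backoff; it looks redundant
  but covers a race when the Hub restarts.
```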
Known Issues is a list of bugs and limitations that are known but not yet fixed. This prevents Claude Code from "fixing" something that's intentionally left as-is, or spending time diagnosing something I already understand.
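Continuing the sketch, using the broken item from the Current State example above:

```markdown
## Known Issues
- Standalone tool still ships the old agent version. Intentional
  until the next release; do not bump it as a "fix".
```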
The forcing function problem
The obvious weakness: if you don't update the file, it goes stale. And stale context is worse than no context — it confidently points the AI in the wrong direction.
I tried discipline. It doesn't work reliably. The sessions where you forget to update are exactly the sessions where something important happened — a late-night debugging run, an interrupted session, a decision made in conversation that never touched the codebase.
The solution I landed on: ask Claude Code to update the Current State block at the end of every session before closing.
The prompt is simple:
"Before we finish, update the Current State section in CLAUDE.md to reflect what we did today, what's working, and where to pick up next time."
It takes 30 seconds. It happens while the context is still fresh. And the model is better at summarizing what just happened than I am at remembering to write it down.
This catches maybe 80% of sessions that would otherwise leave stale state. The other 20% are interrupted sessions — closed terminal, crashed IDE, ran out of time. Those you can't fully solve. But 80% is enough to make the system work.
The problem it doesn't solve
There's a class of architectural knowledge that CLAUDE.md can't easily capture: the implicit decisions.
"We avoid Y because of the incident in March" is the most important kind of architectural knowledge and the hardest to write down. It's not a pattern — it's a scar. The context that makes it meaningful lives in someone's memory, or in a post-mortem that nobody links to the codebase.
A model reading CLAUDE.md can only match against what got written. If the decision was implicit — understood by everyone who was there, never documented because it seemed obvious at the time — the model has no surface to match against.
My partial fix: at the end of sessions I ask Claude Code:
"Is there anything we decided today that we'd regret not documenting in six months?"
It catches some of it. Not all. But it asks in the right direction — not "what did we do" but "what would we wish we'd written down."
What it looks like in practice
I'm building a self-hosted Docker management platform — 13 tools, each with its own frontend, backend, agent, and central Hub integration. The kind of project where losing context between sessions would be catastrophic.
With CLAUDE.md, a new session starts like this: Claude Code reads the file, understands the current state of all 13 tools, knows the conventions for auth patterns and Docker socket connections and visual standards, and picks up exactly where we left off. No re-explanation. No re-litigating past decisions.
Today's session: 7 new tools built from scratch, all integrated into the ecosystem, all following the same design system and agent patterns. Claude Code maintained consistency across all of them because the standards were written down, not carried in my head.
Without CLAUDE.md, that consistency would have required constant correction. With it, the model enforces the standards itself.
The one-line version
CLAUDE.md is not a README. It's not documentation. It's the answer to the question the AI needs to ask at the start of every session:
"What do I need to know to be useful right now?"
Write it for that question. Update it every session. The compounding effect over months of development is hard to overstate.
Building an open source self-hosted Docker ecosystem. If you're interested in the project or in how I use Claude Code to build it, follow along.