Six months ago I shipped a project that I was proud of. Claude wrote about 80% of the code. It worked great on launch day.
Three weeks later I couldn't add a single feature without something else breaking.
This wasn't a prompt-quality problem. I was writing detailed prompts. I was being specific about what I wanted. The issue was something subtler, and once I understood it, I completely changed how I work.
Here's what I figured out.
## The real problem isn't the AI. It's the handoff.
When you work with Claude on a project, there's an invisible handoff happening at every step: you're transferring the responsibility for design decisions to the model, and the model is making those decisions based on what's locally coherent — not what's globally maintainable.
Every session, Claude starts fresh. It doesn't know why your auth module was structured a certain way last Tuesday. It doesn't know that you made a specific tradeoff in your database layer because of a constraint you discovered three sessions ago. It fills in those gaps with plausible guesses.
Individually, those guesses are fine. Accumulated over 30 sessions? You have a codebase that works but that nobody — including you — fully understands.
That's not AI failure. That's a workflow problem.
## What I changed: session hygiene before anything else
The thing that helped most wasn't better prompts. It was treating each Claude session as a handoff between two contractors who don't share memory.
Before I start any session that touches existing code, I now write a brief context block. Not a massive doc. Just 5–10 lines covering:
- What module I'm working on
- What it's allowed to do (and what it's explicitly not responsible for)
- Any constraints that aren't visible from the code itself
- What "done" looks like for this session
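For a concrete sense of scale, here's what one of these context blocks might look like. Everything here is invented for illustration (a hypothetical payments module); the point is the shape, not the content:

```markdown
## Session context: payments module
- Working on: `payments/refunds.py` (refund issuing only)
- Allowed to: create refund records, call the payment gateway client
- NOT responsible for: notification emails, ledger reconciliation
- Constraint not visible in code: the gateway rejects refunds older than 90 days
- Done when: partial refunds work and existing full-refund tests still pass
```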
This sounds bureaucratic. In practice it takes two minutes and it changes the quality of what Claude produces significantly. Not because the model suddenly became smarter — but because it stopped having to guess at things I should have told it.
## The confidence trap
Here's the part I wish someone had told me earlier: Claude is confident by default.
When it doesn't know something, it doesn't usually say "I'm not sure how this fits into your architecture." It builds something that fits a reasonable interpretation of what you asked. That output looks right. It passes your quick review. You move on.
The cost doesn't show up until you're three features deeper and the whole thing starts fighting itself.
The fix is building in a verification step that isn't "does this look okay to me right now." It's closer to: "does this follow the structural contract this module was supposed to have?" That's a different question, and it requires you to have that contract written down somewhere before you generate the code.
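One way to make part of that contract checkable rather than vibes-based is a tiny script that enforces a structural rule, e.g. "the data layer must never import from the API layer." This is a minimal sketch; the `myapp.api` and `myapp/data` paths are invented placeholders for whatever boundary your own contract names:

```python
"""Tiny import-boundary check: list every file in one layer that
imports from a forbidden layer. Paths below are hypothetical."""
import ast
import pathlib

FORBIDDEN_PREFIX = "myapp.api"        # hypothetical: the layer nothing here may depend on
DATA_LAYER = pathlib.Path("myapp/data")  # hypothetical: the layer being checked

def boundary_violations(root: pathlib.Path) -> list[str]:
    violations = []
    for path in root.rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            # Collect imported module names from both import styles.
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            for name in names:
                if name.startswith(FORBIDDEN_PREFIX):
                    violations.append(f"{path}: imports {name}")
    return violations

# usage sketch: print("\n".join(boundary_violations(DATA_LAYER)))
```

Run it at the end of a session and an empty list means that one structural rule, at least, survived the generation step.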
## Scope containment is a superpower
The other shift that helped: I stopped asking Claude to "implement feature X" and started asking it to "implement the data layer for feature X, nothing else."
Smaller scope, tighter output, easier review. When each session has a single clearly bounded job, the integration work becomes predictable. When sessions are open-ended, you get code that assumes things about adjacent parts of your system — assumptions you won't discover until something breaks in production.
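The difference is mostly in how the request is phrased. A before/after sketch, with an invented feature:

```text
Too open-ended:
  "Implement user bookmarks."

Scoped to one bounded job:
  "Implement only the data layer for bookmarks: the table schema,
   the repository functions, and their tests. Do not touch the API
   handlers or the UI; assume they don't exist yet."
```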
This isn't about distrusting the model. It's about recognizing that you're the one who holds the system-level view, and acting like it.
## One practical place to start
If any of this resonates, the most useful thing you can do today isn't switching tools or rewriting prompts. It's starting a PROJECT_CONTEXT.md file in your repo.
In it: write what your project does, what each major module is responsible for, and what the rules of the road are (error handling conventions, naming patterns, what should never happen in this layer).
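A minimal skeleton of that file might look like this. All the module names and rules are placeholders to replace with your own:

```markdown
# PROJECT_CONTEXT

## What this project does
One-paragraph summary of the product and who it's for.

## Modules
- `api/`: HTTP handlers only; no business logic
- `core/`: business rules; must not import from `api/`
- `storage/`: all database access lives here

## Rules of the road
- Errors: raise typed exceptions in `core/`; translate to HTTP codes in `api/`
- Naming: snake_case functions, one module per domain concept
- Never: `api/` talking to the database directly
```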
Then, at the start of every Claude session that touches existing code, paste the relevant section.
You'll notice the difference within one or two sessions.
I put together a free pack around this kind of workflow structure — starter templates, a shipping checklist, and some prompt scaffolding for first-time Claude builders. It's free, no email required, no upsell. If you're working through this problem on a solo project, it might save you a few frustrating weeks: Ship With Claude — Starter Pack
If you've figured out other ways to handle the continuity problem with Claude sessions, I'd genuinely like to hear them. This is one of those things where everyone seems to be figuring it out independently and not sharing much.