decker

Claude Code Compaction Keeps Destroying My Work — Here's My Fix

Last Tuesday I spent 45 minutes walking Claude Code through a nasty race condition in our WebSocket handler. We'd traced it through three files, identified the root cause, built a mental model of the fix together. I was right there.

Then I saw it: [auto-compacting conversation...]

When it came back, Claude had no idea what a WebSocket was. Not in our context, anyway. The entire debugging chain — gone. I had to start from scratch, re-explaining the architecture, re-tracing the bug, re-building all that shared understanding.

This has happened to me more times than I can count.

The Compaction Problem Nobody Talks About

Claude Code has a context window. When your conversation gets long enough, it runs compaction — either automatically or when you type /compact. The idea is simple: summarize the old stuff, keep the important bits, free up space.

In practice? It's a lossy compression algorithm running on your entire debugging session. And it has no idea what matters to you.
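The mechanism, in principle, looks something like this. (A rough sketch of the general idea, not Claude Code's actual algorithm: older messages get collapsed into a summary to fit a token budget, while recent ones survive verbatim.)

```python
def compact(messages, budget, keep_recent=2):
    """Collapse older messages into a short 'summary' so the session
    fits a character budget. A stand-in for the real thing, where the
    summary is produced by an LLM call -- but the failure mode is the
    same: detail in the old messages is gone for good."""
    total = sum(len(m) for m in messages)
    if total <= budget:
        return messages  # still fits; nothing to do
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Hard truncation here, summarization in the real system -- either
    # way, the reasoning chain in the old messages doesn't survive.
    summary = "[summary] " + " / ".join(m[:20] for m in old)
    return [summary] + recent

session = [
    "traced race condition through the WebSocket handler",
    "approach A failed: double-close on the socket",
    "decided on approach B: single owner for the handle",
    "now patching the reconnect logic",
]
print(compact(session, budget=80))
```

Run it and you keep the last two messages intact, but "why approach A failed" is reduced to a 20-character stub. That's the compaction problem in miniature.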

Here's what compaction consistently throws away in my experience:

  • The reasoning chain, not just the conclusion. Claude remembers "we decided to use approach B" but forgets why approach A failed. So when approach B hits a snag, you're back to square one.
  • Architecture decisions made mid-session. You spent 20 minutes discussing the tradeoffs of your data model. After compaction, Claude suggests the exact pattern you both rejected.
  • File relationships and context. Claude knew that auth.ts calls session.ts which triggers events.ts. Post-compaction, it treats each file like an island.

The worst part is the timing. Compaction kicks in when your conversation is long — which means you're deep into something complex. You lose context exactly when context matters most.

My Breaking Point

The WebSocket incident was bad. But the one that actually made me go looking for a solution was worse.

I was refactoring our payment processing pipeline. Claude and I had mapped out the entire flow across 8 files, identified which functions were safe to change and which had hidden side effects, and we'd already successfully refactored 3 of the 8 files.

Compaction hit. Claude lost the map. It started suggesting changes to functions we'd explicitly marked as dangerous. I caught it before anything broke, but only because I happened to remember the conversation from 30 minutes ago.

What if I hadn't caught it? What if I'd trusted the AI that had just told me those functions were safe to touch, not realizing it no longer remembered telling me the opposite?

That's the real danger. It's not just lost productivity. It's that compaction creates a confident AI that has forgotten its own warnings.

What Actually Fixed This For Me

I found Mantra, and it solved the problem in a way I didn't expect.

Mantra takes automatic snapshots of your Claude Code sessions. Not summaries — actual snapshots of the full conversation state. When compaction nukes your context, you can restore a previous state and pick up where you left off.

Install is one line:

curl -fsSL https://mantra.gonewx.com/install.sh | bash

What changed for me:

  • Before starting a complex debugging session, I know Mantra is already capturing snapshots in the background
  • When compaction hits and Claude forgets something important, I restore from a snapshot taken before the compaction
  • I can go back to any point in a session, not just the most recent state

The payment pipeline refactoring I mentioned? I recreated a similar scenario after installing Mantra. Compaction still happened, but I just rolled back to my pre-compaction snapshot. All the file mappings, all the "don't touch this function" warnings, everything was intact.
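The core idea is simple enough to sketch. This is a minimal illustration of snapshot-and-restore in general, not Mantra's actual implementation or CLI: each snapshot is a full copy of the session state, so a restore is lossless, unlike compaction's summaries.

```python
import copy

class SessionSnapshots:
    """Toy model of periodic full-state snapshots with point-in-time
    restore. Names and structure here are illustrative only."""

    def __init__(self):
        self._snapshots = []  # full copies, oldest first

    def capture(self, session_state):
        # Deep-copy so later mutations can't corrupt old snapshots.
        self._snapshots.append(copy.deepcopy(session_state))

    def restore(self, index=-1):
        # Any point in history is restorable, not just the latest.
        return copy.deepcopy(self._snapshots[index])

snaps = SessionSnapshots()
state = {"messages": ["mapped payment flow", "chargeHook is unsafe to touch"]}
snaps.capture(state)

state["messages"] = ["[auto-compacted summary]"]  # compaction wipes the detail
restored = snaps.restore()
print(restored["messages"])  # the pre-compaction warnings are back
```

The tradeoff is storage for fidelity: snapshots cost disk space, but disk is cheap and a lost debugging session isn't.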

The Uncomfortable Truth

Claude Code is genuinely good at what it does. But context window limits are a hard constraint, and compaction is a blunt tool for managing it. Until context windows are large enough that compaction becomes irrelevant (and we're not there yet), you need a safety net.

I've wasted hours, probably days in total, re-explaining things to an AI that had already understood them. That's the kind of problem that's easy to dismiss as "just part of the workflow" until you experience what it's like to have it actually solved.

Mantra is open source. Releases are on GitHub. If you've ever watched compaction eat your debugging session, give it a shot.


Have you run into compaction wiping out important context? I'm curious what workarounds other people have found. Drop a comment.
