Claude Code sessions accumulate noise. Not errors, not bugs — noise. Old tool results. Retracted approaches. Plans that were superseded three decisions ago but never explicitly closed. A file path that was correct at step 4 but was refactored at step 11.
The model doesn't forget any of it. It just weights everything more or less equally, which means when you're at step 20, the agent is still reasoning from the full history of steps 1 through 19 — including the dead ends.
This is cascading context drift. It's not a bug in Claude. It's a property of how long-context language models work. And it compounds: a slightly wrong interpretation at step 10 produces a slightly wrong output at step 11, which produces a more wrong interpretation at step 12. By step 20, the agent is solving a subtly different problem than you started with.
We've been running a 13-agent autonomous system for six weeks. Context drift is the failure mode we hit most often — more than bugs, more than API errors, more than rate limits. Here's what we do about it.
What drift looks like in practice
Concrete example: an agent is refactoring an authentication module. It reads auth.ts, plans the changes, starts implementing. Midway through, it discovers a dependency in middleware.ts it didn't account for. It reads that file, makes a note in its output, continues.
By the time it's writing the final implementation, the context contains:
- The original file reads (some of which are now stale — the agent has been modifying the files)
- The original plan (partially superseded)
- The middleware discovery (floating as a note in the conversation, not anchored to any decision)
- Several tool call results from exploratory work that wasn't relevant
The agent writes the final implementation. It's 90% correct. The 10% that's wrong traces back to either stale state ("the file was like this when I read it earlier") or an undocumented decision that got buried ("I decided to handle the middleware separately, but never recorded why or where").
The failure isn't dramatic. It's subtly wrong in a way that only a human reading the output carefully will catch.
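The stale-read variety can be caught mechanically. As a hypothetical sketch (not anything built into Claude Code): record a hash of each file when it's read, and check before the final write whether any of those files have since changed.

```python
import hashlib
from pathlib import Path

# Hypothetical staleness tracker: snapshot a content hash when a file
# is read, then flag any file that changed before the final step.
read_hashes: dict[str, str] = {}

def record_read(path: str) -> str:
    """Read a file and remember a hash of its contents."""
    data = Path(path).read_bytes()
    read_hashes[path] = hashlib.sha256(data).hexdigest()
    return data.decode()

def stale_reads() -> list[str]:
    """Return files whose contents no longer match the recorded hash."""
    return [
        path for path, digest in read_hashes.items()
        if hashlib.sha256(Path(path).read_bytes()).hexdigest() != digest
    ]
```

Any path returned by `stale_reads()` is a file the agent is reasoning about from memory rather than from its current contents, which is exactly the "the file was like this when I read it earlier" failure.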
The fix: a working reference file
The solution is to drop a working reference — a compact, accurate snapshot of what is true right now — at any point where the session is getting long, you're switching tasks, or you're about to hand off to another agent.
We call this a context anchor. The format:
```markdown
# Context Anchor — 2026-04-26T14:30:00Z

## What's true right now
- JWT middleware added to app.ts:23, replacing the session-based approach
- Decision: JWT because no server-state requirement; sessions would need Redis
- Tried: session-based middleware first — abandoned because no Redis in stack

## The working reference
> Auth refactor is complete at app.ts:23. Remaining work: update the three
> route handlers in routes/api.ts that still import the old session middleware.

## Next action
- [ ] Update routes/api.ts:47, 89, 134 — remove old session middleware import
```
Three sections. What's true, the single-sentence working reference, the exact next step.
The discipline of writing this forces two things that free-form context doesn't: explicit decisions with their reasons, and an honest accounting of what's actually true now versus what was true ten steps ago.
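The same three-section format is easy to render programmatically. A minimal sketch (the `write_anchor` helper is hypothetical; the file path and section names come from the article):

```python
from datetime import datetime, timezone
from pathlib import Path

def write_anchor(truths: list[str], reference: str, next_action: str,
                 path: str = ".claude/anchor.md") -> str:
    """Render the three-section anchor format and write it to disk."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    body = "\n".join([
        f"# Context Anchor — {stamp}",
        "",
        "## What's true right now",
        *[f"- {t}" for t in truths],
        "",
        "## The working reference",
        f"> {reference}",
        "",
        "## Next action",
        f"- [ ] {next_action}",
        "",
    ])
    out = Path(path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(body)
    return body
```

The point of the helper isn't automation for its own sake; it's that the function signature forces the same discipline: you can't call it without naming what's true, the one-sentence reference, and the next action.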
When to write one
Before switching tasks. If you pause the auth refactor to check on something else and plan to come back, drop an anchor first. When you return, the anchor is the re-entry point — the agent reads it before anything else and gets the correct current state rather than re-deriving it from a long conversation.
Before handing off to another agent. This is the multi-agent equivalent. The receiving agent reads the anchor as its working context. Without it, the handoff context is the full conversation — expensive to process and full of noise. With it, the receiver has a 10-line accurate state summary.
When you notice re-reading earlier messages. This is the tell. If you or your agent are scrolling back through the conversation to remember what was decided, that's drift. The anchor replaces the scroll.
At session boundaries. When you start a new Claude Code session, the previous session's context is gone. An anchor at the end of each session is what survives — it's the thread that connects sessions.
How to use it in Claude Code
The easiest implementation: add a /anchor skill to your CLAUDE.md that triggers this behavior:
```markdown
## /anchor

When you run /anchor:
1. Scan the current session for: what was built, decisions made with reasons,
   what was tried and ruled out, the next concrete action.
2. Write to .claude/anchor.md in this format:
   - ## What's true right now (bullet list)
   - ## The working reference (one sentence)
   - ## Next action (exact file:line if applicable)
3. Show me the anchor before writing it.
```
The agent produces the anchor. You verify it's accurate (this step matters — the anchor is only useful if it's correct). It gets written to .claude/anchor.md, which you check into version control.
On the next session, the first step is always: read .claude/anchor.md before anything else.
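That re-entry step can also be scripted. A minimal sketch (hypothetical helper, not a Claude Code built-in) that surfaces the anchor at session start, or says plainly that none exists:

```python
from pathlib import Path

ANCHOR_PATH = Path(".claude/anchor.md")

def load_anchor() -> str:
    """Return the anchor for the session's first message, or an
    explicit note that no anchor exists yet."""
    if ANCHOR_PATH.exists():
        return ANCHOR_PATH.read_text()
    return "No anchor found; derive state from the conversation and write one."
```

The explicit "no anchor" branch matters: silently starting without one reproduces exactly the implicit-state problem the anchor is meant to fix.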
The deeper principle
Context anchors work because they force explicit state management. Most agent sessions don't have this — state is implicit, distributed across the conversation, and increasingly stale as the session progresses.
The anchor externalizes state. It makes what's true now explicit and writable, separating it from the history of how you got here.
This is the same principle behind good commit messages, good PR descriptions, and good technical documentation: the history is in git, but the current truth should be stated explicitly where the reader starts.
Your agent doesn't need more context. It needs better context — accurate, compact, and up to date.
Context drift is a compound problem — it doesn't announce itself, it just makes each successive step slightly less reliable than the last. The anchor is cheap to write and gets cheaper as you build the habit. The sessions where you skip it are usually the ones you end up re-running.
If context drift is slowing you down: the context-anchor skill that fixes this is free — grab it at whoffagents.com/free-skill. One email, no spam.
If you're rebuilding auth, Stripe webhooks, or CI config from scratch on every project, the Ship Fast Skill Pack packages 11 battle-tested Claude Code skills for $49 one-time — including context-anchor plus auth-patterns, stripe-payments, testing-suite, deploy-config, and more.