Arpit
Fixing Claude Code's Amnesia

i_dont_remember

Picture this: You're deep in a refactor. Claude just helped you understand the token validation in `auth.ts`.

You're vibing. You feel productive.

Then you ask: "Cool, now how does this connect to `cache_service.ts`?"

And Claude… re-reads `auth.ts`. From line 1 again! Like it's never seen it before.

You realize you're not having a conversation. You're giving tours to a very intelligent tourist with amnesia.

Every message is a fresh start. Every file read is from scratch. Every time you context-switch, Claude forgets what you were just talking about. It's brilliant at reasoning, terrible at remembering.

And here's the kicker: This isn't a bug. It's just how LLMs work. Stateless by design. Each API call is independent. No memory. No continuity. And no "yo, we looked at that file 5 minutes ago."

Enter: Super Claude Kit 🎭

This frustration first led me to build Orpheus CLI, an experimental coding agent, to understand how these tools work under the hood.

Super Claude Kit is what happened when I got annoyed enough to fix the amnesia problem myself.

It's a lightweight orchestration system that gives Claude Code:

  • Persistent memory across messages and sessions
  • Dependency intelligence for your entire codebase
  • Smart context management that doesn't waste tokens
  • Zero-config operation (it just works 🥷🏻)

snowball_dropping

Built entirely with bash hooks (thank you, Anthropic 🙏) and a couple of lightweight Go binaries, it turns Claude from a stateless assistant into a workspace-aware development partner.

Here's the best part: It's all local. No databases, no API calls, no cloud services.

Anthropic's Claude Code and that beautiful hook system

Let me give credit where it's due: Claude Code's hook system is genius. Anthropic could've built a closed system. Instead, they gave us lifecycle hooks.

claude_code_with_hooks_triggered

Super Claude Kit is proof that Claude Code isn't just an AI assistant, it's a platform. The hooks let me inject context, track state, route intentions, and build memory without disturbing Claude Code's internals.
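To make that concrete, here's a minimal sketch of what a SessionStart-style hook can look like. It assumes the stdout-injection behavior described above, and the function name and capsule path are my own inventions, not the kit's actual code:

```shell
#!/usr/bin/env bash
# session-start.sh - a hypothetical SessionStart hook (illustration only).
# Claude Code feeds a hook's stdout back into the session context, so
# "restoring memory" can be as simple as printing the saved state.
session_start_context() {
  local capsule=$1   # e.g. .claude/capsule_persist.json (assumed path)
  if [[ -f "$capsule" ]]; then
    echo "RESTORING FROM PREVIOUS SESSION"
  else
    echo "Fresh session - no capsule to restore"
  fi
}

session_start_context ".claude/capsule_persist.json"
```

The hook never touches Claude Code's internals; it just prints, and the lifecycle system does the rest.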

How it actually works (the fun part)

I ran the same task in vanilla Claude Code vs. Claude Code with Super Claude Kit.

**Task:** "Check for circular dependencies and identify dead code"

**Vanilla Claude Code (left):**

  • Spawned 2 Explore sub-agents
  • 60 tool uses scanning files
  • **55,400 tokens consumed**
  • **2 minutes 21 seconds**
  • Still processing when I stopped it

**Super Claude Kit (right):**

  • Used specialized tools: find-circular, find-dead-code
  • 2 direct tool calls
  • **<1,000 tokens**
  • **5 seconds**
  • Complete, accurate results

**That's 99% token reduction and 30x faster.**

comparison_with_super_claude_kit
This is the same query, same codebase, same computer, same model (Opus 4.5).

Zero redundant reads. Instant dependency analysis. Cross-message awareness.

It just… remembers. Like a teammate would.

Super Claude Kit works like a tiny orchestration system inside Claude Code.

The difference? Super Claude Kit guides Claude to use the right tools automatically.

Component Breakdown

component_diagram

Each layer responds to hooks in milliseconds. The whole system adds <100ms overhead. You don't even notice it running.

1. Persistent Memory: Because Forgetting Sucks
Every session gets a human-readable journal:

```
## Session: 2025-11-15 14:30 UTC

[pattern] Auth uses JWT with Redis-backed sessions
[architecture] Microservices talk via gRPC
[bug] Race condition in token refresh (auth.ts:142)

Files Accessed:
• src/auth/auth.ts (read, edit)
• src/database/redis.ts (read)
```

When you start a new session, if it's been less than 24 hours, this state auto-restores. You pick up exactly where you left off.
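The 24-hour check comes down to comparing a file's modification time against the clock. A sketch, with the capsule path assumed and both GNU and BSD `stat` flags tried:

```shell
#!/usr/bin/env bash
# Is a saved capsule fresh enough to restore? (24h window)
capsule_state() {
  local capsule=$1 max_age=$((24 * 60 * 60))
  [[ -f "$capsule" ]] || { echo "none"; return; }
  local saved now
  saved=$(stat -c %Y "$capsule" 2>/dev/null || stat -f %m "$capsule")  # mtime (GNU or BSD)
  now=$(date +%s)
  if (( now - saved < max_age )); then echo "restore"; else echo "expired"; fi
}
```

A hook can branch on `restore` / `expired` / `none` to decide whether to inject the previous session's state.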

2. TOON Format: Because JSON is Bloated
While building this, I stumbled on TOON (Token-Oriented Object Notation), a format designed specifically for LLMs.
JSON (everyone's default): 95 tokens of mostly syntax noise:

```json
{
  "git": {"branch": "epic/labs", "head": "38f6f42e", "dirty": 22},
  "files": [{"path": "CLAUDE.md", "action": "read", "age": 120}]
}
```

TOON: 45 tokens (52% smaller!):

```
GIT{branch,head,dirty}:
 epic/labs,38f6f42e,22
FILES{path,action,age}:
 CLAUDE.md,read,120
```

That's not just clever, it's practical. Every token saved is more room for actual code context.
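Producing TOON needs no library at all; a minimal sketch that emits the GIT block above straight from shell variables:

```shell
#!/usr/bin/env bash
# Emit one TOON block: header row with field names, then a data row.
toon_git() {
  printf 'GIT{branch,head,dirty}:\n %s,%s,%s\n' "$1" "$2" "$3"
}

toon_git "epic/labs" "38f6f42e" 22
# prints:
# GIT{branch,head,dirty}:
#  epic/labs,38f6f42e,22
```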

The Context Capsule holds everything Claude needs:

  • Git status (branch, dirty files, HEAD commit)
  • Recently accessed files
  • Current tasks
  • Discoveries
  • Session metadata

A hook updates this before each prompt, but only if something actually changed (hash-based diffing). That optimization alone cuts updates by 40–50%.

3. Dependency Graph: Know Your Codebase

On session start, a Go binary (using tree-sitter for parsing) builds a complete dependency graph.
Currently supported languages: TypeScript, JavaScript, Python, and Go.

What it does:

  1. Scans your entire codebase
  2. Extracts imports/exports from every file
  3. Builds a directed graph of dependencies
  4. Detects all strongly connected components, not just individual cycles (using Tarjan's algorithm)
  5. Identifies dead code (files nobody imports)
  6. Saves for future reference in the session
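Step 5 can even be approximated without tree-sitter. A crude grep-based sketch (the function name and the `from './<name>'` import pattern are my own simplifications, not what the Go scanner does):

```shell
#!/usr/bin/env bash
# Crude dead-code check: a .ts file is "dead" if no file in the tree imports it.
# The real scanner parses imports with tree-sitter; this only matches
# relative imports of the form `from './<name>'`.
find_dead() {
  local dir=$1 f base
  for f in "$dir"/*.ts; do
    [[ -e "$f" ]] || continue
    base=$(basename "$f" .ts)
    if ! grep -rq "from './${base}'" --include='*.ts' "$dir"; then
      echo "dead: $f"
    fi
  done
}
```

Entry points will show up as "dead" too, since nothing imports them; that's inherent to the "files nobody imports" definition.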

Now Claude can answer:

  • Who imports this file? 
  • What breaks if I change this? 
  • Are there circular dependencies?
  • What files are unused?

dependency_tree

Before refactoring, Claude knows what we're about to break. No surprises.

**Why Tarjan's algorithm?**
Because it's what npm, cargo, and every serious dependency analyzer uses. It finds all circular dependencies in a single O(V + E) pass.
A simple DFS would need multiple passes; Floyd-Warshall would be O(V³).
Tarjan's is just… the right tool.

working_algorithm

4. Progressive Reader: Semantic Chunking for Large Files
Large files (>50KB) kill context windows. Traditional line-based reading breaks functions mid-definition.
 
Progressive Reader uses tree-sitter to chunk files at semantic boundaries:

```typescript
// auth.service.ts (300 lines)

// Chunk 0: Imports + AuthService class (lines 1-85)
import { User } from './user';
export class AuthService {
  // Full class definition
}

// Chunk 1: TokenService class (lines 86-150)
export class TokenService {
  // Complete class
}

// Chunk 2: SessionService class (lines 151-220)
export class SessionService {
  // Complete class
}
```

You read chunk 0. That's enough? Done. Need more? Read chunk 1. It's like pagination, but smart.

Benefits:

  • Read only what you need (60–80% context savings)
  • Preserve semantic coherence (no broken functions)
  • Continuation tokens for stateful reading
  • Tree-sitter guarantees correct boundaries
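The pagination mechanics can be sketched with plain line ranges, using a hardcoded boundary table where the real reader would consult tree-sitter (the table mirrors the auth.service.ts example above):

```shell
#!/usr/bin/env bash
# read_chunk FILE N - print chunk N of FILE from a precomputed boundary table.
# The real reader derives boundaries semantically; these are hardcoded.
read_chunk() {
  local file=$1 n=$2
  local -a starts=(1 86 151) ends=(85 150 220)
  sed -n "${starts[$n]},${ends[$n]}p" "$file"
}

# usage: read_chunk auth.service.ts 0   # imports + AuthService only
```

A continuation token is then just "which chunk index to hand back next".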

5. The "Boss Layer": Making Claude Use Its Brain
Even with memory and context, Claude sometimes ignored the capsule.
So I added forcing functions:

boss_layer

Smart Refresh (Don't Update If Nothing Changed)

```bash
current_hash=$(compute_state_hash)
if [[ "$current_hash" == "$last_hash" ]] && (( minutes < 5 )); then
  exit 0  # Skip - nothing changed
fi
```

Result: 60–70% fewer capsule updates.
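The `compute_state_hash` call above does the heavy lifting; one plausible implementation is just hashing everything that should trigger a refresh (the recent-files path is an assumption for illustration):

```shell
#!/usr/bin/env bash
# One possible compute_state_hash: pipe everything that should trigger a
# capsule refresh through a single hash.
compute_state_hash() {
  {
    git status --porcelain 2>/dev/null    # dirty files / branch state
    cat .claude/recent_files 2>/dev/null  # recently accessed files (assumed path)
  } | sha256sum | cut -d' ' -f1
}
```

If neither input changed, the hash is identical and the hook exits without touching the capsule.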

Intent Router (Delegate to Specialists)
Hook analyzes prompts and routes to specialized sub-agents:

```
"Explore database architecture" → database-navigator
"Debug this MCP integration"   → agent-developer
"Create a GitHub issue"        → github-issue-tracker
```

Claude stops trying to do everything and delegates intelligently.

Decision Matrix (SKILL.md)

A rule table that enforces best practices:
  • BEFORE refactoring → Run impact-analysis
  • BEFORE deleting → Run query-deps
  • WHEN imports fail → Run find-circular

Think of it as a "boss" that reminds Claude to use its tools.

| Metric                            | Before   | After    | Improvement           |
|-----------------------------------|----------|----------|-----------------------|
| **Tokens per context update**     | ~8,500   | ~4,900   | **42% reduction**     |
| **Capsule refreshes per session** | ~12      | ~4       | **67% fewer**         |
| **Files re-read unnecessarily**   | ~15      | ~5       | **67% reduction**     |
| **Context loss between sessions** | 100%     | 0%       | **Total persistence** |
| **Dependency analysis time**      | 5-10 min | <5 sec   | **~100x faster**      |
| **Disk footprint**                | N/A      | 10-30 KB | Negligible            |
| **Hook overhead**                 | N/A      | <50 ms   | Imperceptible         |

A Complete Session (How It All Connects)

architectural_flow

```
1. $ claude
2. SessionStart hook fires
   ├─ Check for previous session (<24h ago)
   ├─ Restore state from capsule_persist.json
   ├─ Build dependency graph (1,247 files, 4.2s)
   └─ Save to .claude/dep-graph.toon
3. Claude receives context injection:
   "RESTORING FROM PREVIOUS SESSION
    Last: 28m ago | Task: Auth refactor → Redis"
4. You: "Continue the refactor"
5. UserPromptSubmit hook fires
   ├─ Hash current state
   ├─ Compare to last refresh
   ├─ No changes → Skip update (saves ~4,500 tokens)
   └─ Proceed
6. Claude: "From the capsule, I see we're migrating auth.ts
   to Redis. Last session we updated token validation.
   Let me now update the login endpoint…"
7. You: "What breaks if I change token format?"
8. Claude runs: impact-analysis.sh auth.ts
   Output: "15 dependents. HIGH RISK."
9. You: "Create a GitHub issue for this"
10. Intent router: → github-issue-tracker sub-agent
    Issue created with full context from capsule
11. You: exit
12. SessionEnd hook fires
    ├─ Save state → capsule_persist.json
    ├─ Append discoveries → exploration journal
    └─ Set 24h expiry
```

Next session: Everything restores automatically. You pick up where you left off.

AI coding assistants are incredible. But they're tourists in your codebase. They parachute in, help with a problem, then forget everything.

Super Claude Kit turns tourists into residents.

It's not about changing the model. It's about giving the model a home, a persistent workspace where context lives, dependencies are mapped, and sessions connect.

Claude stops forgetting. You stop repeating yourself. Development becomes a conversation, not isolated transactions.

If you've ever been frustrated explaining the same thing to Claude for the third time in an hour, this is for you.

Try It (It's Open Source)

Super Claude Kit works with Claude Code out of the box.

GitHub: arpitnath/super-claude-kit (MIT licensed)

Transform Claude Code from stateless to stateful. Persistent context memory system with cross-session persistence, token-efficient storage, and zero dependencies.

A persistence layer for Claude Code.
Files, tasks, discoveries — all restored instantly.
Quickstart

Installing Super Claude Kit

Run the one-line installer:

```bash
curl -fsSL https://raw.githubusercontent.com/arpitnath/super-claude-kit/master/install | bash
```

That's it! Restart Claude Code and you'll see the context capsule on every session.

Manual installation (advanced):

```bash
# Clone the repository
git clone https://github.com/arpitnath/super-claude-kit.git
cd super-claude-kit

# Run the installer
bash install
```

The installer will:

  • Install hooks to .claude/hooks/
  • Build Go tools (dependency-scanner, progressive-reader)
  • Configure ~/.claude/settings.local.json
  • Auto-install Go 1.23+ if not present

What you get immediately

Session Resume

After installation, Claude Code will:

  • 🧠 Remember files you've accessed (no re-reads)
  • 📦 Restore context between sessions (up to 24 hours)
  • Track tasks across restarts
  • 🔍 Log discoveries as you work
  • 🔗 Understand dependencies in your codebase

How it works

Super Claude Kit uses hooks (SessionStart, UserPromptSubmit) to:

  1. Capture



