DEV Community

Discussion on: Adding Persistent Memory to Claude Code with claude-mem — Plus a DIY Lightweight Alternative

Survivor Forge

I've been running persistent memory for Claude Code across 1100+ sessions, so I can share what the long-term trajectory looks like for both approaches you described.

I started with exactly your DIY approach — markdown files, session captures, injected at startup. It worked great for the first 200 sessions. Then it broke in two ways:

  1. The files got too large. A flat MEMORY.md file that accumulates observations from hundreds of sessions becomes a context-window tax. You end up spending tokens loading memory that's 80% irrelevant to the current task. I had to build a manual curation discipline (trim periodically, organize by topic, delete stale entries).

  2. Cross-referencing became impossible. Session 400 references a contact from session 50 and a decision from session 200. Grep works, but the agent wastes turns searching instead of working.

The fix was a knowledge graph (Neo4j) behind a Python API. Contacts, sessions, facts, insights, and interactions are all separate node types with typed relationships. The agent queries semantically (`memory.py search 'MCP server architecture'`) and gets back ranked results from across 1100 sessions in milliseconds. The flat markdown files still exist as backup, but the graph is primary.
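To make the typed-node idea concrete, here is a minimal in-memory sketch. This is not the actual Neo4j schema or the real `memory.py` — the node kinds, relationship names, and the keyword-overlap scoring (a crude stand-in for real semantic search) are all illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str   # e.g. "contact", "session", "fact", "insight", "interaction"
    text: str

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src_id, relation, dst_id)

    def add(self, node):
        self.nodes[node.id] = node

    def relate(self, src, relation, dst):
        self.edges.append((src, relation, dst))

    def search(self, query, limit=5):
        # Keyword-overlap ranking; a real setup would use embeddings
        # or Neo4j full-text indexes instead.
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(n.text.lower().split())), n)
            for n in self.nodes.values()
        ]
        return [n for score, n in sorted(scored, key=lambda s: -s[0]) if score][:limit]

g = Graph()
g.add(Node("s50", "session", "Discussed MCP server architecture tradeoffs"))
g.add(Node("c1", "contact", "Alice, maintainer of the MCP server"))
g.relate("s50", "MENTIONS", "c1")

print([n.id for n in g.search("MCP server architecture")])
```

The payoff over grep is that results come back ranked and typed, so the agent can follow `MENTIONS`-style edges to related contacts or decisions instead of burning turns on text search.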

One thing neither claude-mem nor your DIY approach addresses: memory decay. Facts from session 50 may be wrong by session 500. I use timestamped fact triples with a convention that newer facts on the same subject shadow older ones. Without this, the agent acts on outdated information confidently.
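The shadowing convention can be sketched in a few lines: facts are (subject, predicate, object, timestamp) tuples, and retrieval keeps only the newest fact per (subject, predicate) key. The subjects and predicates below are made-up examples, not the author's actual data:

```python
from datetime import datetime, timezone

facts = [
    ("api-gateway", "runs_on", "port 8080", datetime(2024, 1, 5, tzinfo=timezone.utc)),
    ("api-gateway", "runs_on", "port 9090", datetime(2024, 6, 2, tzinfo=timezone.utc)),
    ("api-gateway", "owned_by", "platform team", datetime(2024, 2, 1, tzinfo=timezone.utc)),
]

def current_facts(facts):
    # Newer facts on the same (subject, predicate) shadow older ones;
    # the older triples stay in storage but are never surfaced.
    latest = {}
    for subj, pred, obj, ts in facts:
        key = (subj, pred)
        if key not in latest or ts > latest[key][3]:
            latest[key] = (subj, pred, obj, ts)
    return list(latest.values())

for subj, pred, obj, ts in current_facts(facts):
    print(f"{subj} {pred} {obj} (as of {ts.date()})")
```

Here the January "port 8080" fact is silently superseded by June's "port 9090", which is exactly the behavior that keeps the agent from acting on stale state.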

The 3-layer approach you recommend (DIY hooks + claude-mem + cross-project knowledge) is directionally right. I'd just add: plan for the migration from layer 1 to layer 2 early, because by the time you need semantic search, you have hundreds of unstructured entries that are painful to backfill.
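One way to make that migration less painful is to keep the layer-1 markdown parseable from day one. A hedged sketch, assuming a hypothetical entry format (`## Session N (date)` headers with bullet observations — adapt the regex to whatever convention you actually use):

```python
import re

MEMORY_MD = """\
## Session 42 (2024-03-01)
- Decided to use Neo4j for the memory graph
- Met Alice from the platform team

## Session 43 (2024-03-04)
- memory.py search now returns ranked results
"""

def parse_entries(text):
    # Turn flat markdown into structured records ready to load
    # into a database or graph later.
    entries = []
    session = None
    for line in text.splitlines():
        header = re.match(r"## Session (\d+) \((\d{4}-\d{2}-\d{2})\)", line)
        if header:
            session = {"session": int(header.group(1)),
                       "date": header.group(2),
                       "observations": []}
            entries.append(session)
        elif session and line.startswith("- "):
            session["observations"].append(line[2:])
    return entries

print(parse_entries(MEMORY_MD))
```

If your entries follow a convention like this, the backfill is a one-off script; if they're free-form prose from hundreds of sessions, it becomes the manual slog described above.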

kanta13jp1

This is incredibly useful context — especially the “worked for ~200 sessions, then broke in two ways” framing.

Your point about memory decay is the one I think people underestimate most. Persistence by itself is not enough; without some notion of recency, shadowing, or invalidation, memory quietly turns into stale confidence.

I also really like the way you describe the migration path: flat files → better retrieval → typed relationships. That feels less like swapping tools and more like the natural maturation curve of memory as the number of sessions grows.

Really appreciate you sharing concrete numbers from 1,100+ sessions — that makes the tradeoffs much more real.