Claude Code forgets everything between sessions. Your architecture decisions, naming conventions, why you chose Postgres over SQLite — all gone. Every session starts from zero.
This isn't a Claude limitation. It's how every coding agent works: session ends, context window clears, amnesia.
Here's how to fix it with one MCP server. Two minutes.
## Why CLAUDE.md Isn't Enough
Claude Code has CLAUDE.md files that persist between sessions. Useful for static rules. But they fail for dynamic knowledge:
- No scoring. Every line carries equal weight; Claude can't tell which facts still matter and which are stale.
- No decay. Facts accumulate forever. After a month, you're feeding 2,000 lines into every request, eating tokens.
- No cross-session retrieval. Repo A doesn't know what you decided in Repo B.
- No structure. Flat text file. Can't query "what did I decide about auth?"
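To make the contrast concrete, here's the kind of structured record a memory server can keep per fact. This is a hypothetical shape for illustration; the field names are assumptions, not engram-mcp's actual schema:

```typescript
// Hypothetical fact record — field names are illustrative,
// not engram-mcp's real schema.
interface MemoryFact {
  text: string;         // e.g. "Decided: Postgres over SQLite for JSONB queries"
  topic: string;        // queryable tag, e.g. "auth" or "database"
  score: number;        // 0-1 relevance; decays when the fact goes unused
  lastAccessed: Date;   // drives recency scoring
  accessCount: number;  // drives frequency scoring
}
```

A flat CLAUDE.md can't answer "what did I decide about auth?"; a store of records like this can filter by `topic` and rank by `score`.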
## The Setup: One Command
Add to your MCP config (`~/.claude/claude_desktop_config.json` or `.mcp.json`):

```json
{
  "mcpServers": {
    "engram": {
      "command": "npx",
      "args": ["-y", "engram-mcp"],
      "env": {
        "ENGRAM_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
Get your API key (free, no credit card):

```shell
curl -X POST https://engram.cipherbuilds.ai/api/signup \
  -H "Content-Type: application/json" \
  -d '{ "email": "you@example.com" }'
# Returns: { "apiKey": "eng_...", "agentId": "agent_..." }
```
Restart Claude Code. You now have `store_memory`, `retrieve_memory`, and `search_memory` tools.
## What Changes
Before:

```
// Monday: You explain your architecture
> "We're using a modular service pattern..."

// Tuesday: Claude asks again
> "Could you describe the project architecture?"
```

After:

```
// Monday: Claude stores automatically
[store_memory] "Architecture: modular service pattern..."

// Tuesday: Claude retrieves before responding
[retrieve_memory] query="project architecture"
→ Scored facts about your service pattern, ranked by relevance
```
## How Scoring Works
Every fact gets a relevance score (0-1) influenced by:
- Recency: Recently accessed facts score higher
- Frequency: Facts retrieved often score higher
- Specificity: Better query matches score higher
Memory is self-curating. Irrelevant facts decay. No pruning needed.
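Engram's actual formula isn't published, but a score like this is easy to picture. Here's a minimal sketch of how the three signals could combine; the weights, half-life, and field names are illustrative assumptions, not the server's real implementation:

```typescript
// Hypothetical scoring sketch — weights and half-life are assumptions,
// not engram-mcp's actual values.
interface Fact {
  text: string;
  lastAccessed: number; // epoch ms of last retrieval
  accessCount: number;  // how often this fact has been retrieved
}

const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000; // recency weight halves weekly

function score(fact: Fact, similarity: number, now: number): number {
  // Recency: exponential decay since last access
  const recency = Math.pow(0.5, (now - fact.lastAccessed) / HALF_LIFE_MS);
  // Frequency: saturating boost, so often-used facts rank higher
  // without dominating forever
  const frequency = 1 - 1 / (1 + fact.accessCount);
  // Specificity: how well the fact matches the query (0-1, from the retriever)
  return 0.4 * recency + 0.2 * frequency + 0.4 * similarity;
}
```

Run two facts with identical match quality through this and the one touched yesterday outscores the one untouched for a month; that gap is what "decay" buys you over a flat file.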
## Real Example: Debug Pattern Recognition
```
// Session 1: Fix a race condition
> "Bug in handleMessage — await inside forEach. Fixed with for...of."

// Session 4: Similar bug elsewhere
[retrieve_memory] query="async bug intermittent failures"
→ "handleMessage race condition from await inside forEach. Fix: for...of" (score: 0.87)

// Without memory: 20 min debugging from scratch
// With memory: 2 min — pattern recognized immediately
```
## MCP vs RAG vs Fine-Tuning
- RAG: Similarity-only retrieval. No usage-based scoring, no decay; every indexed fact carries equal weight forever.
- Fine-tuning: Overkill for knowledge that changes daily.
- MCP memory: Sits in the tool layer, so the agent stores and retrieves naturally. Scores decay over time. Self-curating.
Full post with code examples: cipherbuilds.ai/blog/claude-code-mcp-memory-setup
Free tier: 1 agent, 10,000 facts. Get your API key.