Every time I started a new Claude Code session I went through the same ritual.
What's my stack? What files matter? What approach did we decide against last week?
Claude had no idea. I'd spend 10-15 minutes just getting it back up to speed.
Every. Single. Session.
So I built Claude Recall.
## What it actually does
### Memory that stays accurate
Most memory tools go stale — you document once, code moves on. Claude Recall
runs a drift detector after every file edit. It compares live code against what
Claude has in memory and flags undocumented changes automatically.
First run on my project caught 21 undocumented functions Claude had no idea existed.
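To make the drift idea concrete, here's a minimal sketch of how a detector like this could work; this is illustrative, not the actual Claude Recall implementation, and the naive backtick-scanning convention for the memory file is my own assumption:

```python
import ast
import re

def functions_in_source(source: str) -> set[str]:
    """Collect function names from live Python source via the AST."""
    tree = ast.parse(source)
    return {node.name for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)}

def functions_in_memory(memory_md: str) -> set[str]:
    """Naively scan a memory markdown file for backticked names like `foo()`."""
    return set(re.findall(r"`(\w+)\(\)`", memory_md))

def drift(source: str, memory_md: str) -> set[str]:
    """Functions present in live code but missing from memory."""
    return functions_in_source(source) - functions_in_memory(memory_md)

code = "def send_email():\n    pass\n\ndef retry_job():\n    pass\n"
memory = "Documented: `send_email()` handles outbound mail."
print(sorted(drift(code, memory)))  # ['retry_job'] is undocumented
```

Running a check like this after every file edit is cheap, which is what makes "memory that stays accurate" feasible without a background service.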
### Skills that patch themselves
Every built-in skill scores itself — did it need correction? Run /evolve every
few sessions and it rewrites the failing steps using your actual failure data.
The same mistake becomes architecturally impossible after /evolve runs.
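The self-scoring loop can be sketched roughly like this; the class name, the correction-rate threshold, and the logging shape are all my assumptions, not Claude Recall's actual format:

```python
from collections import Counter

class SkillLog:
    """Record, per skill, how often a run needed a human correction."""

    def __init__(self):
        self.runs = Counter()
        self.corrections = Counter()

    def record(self, skill: str, corrected: bool) -> None:
        self.runs[skill] += 1
        if corrected:
            self.corrections[skill] += 1

    def failing(self, threshold: float = 0.3) -> list[str]:
        """Skills whose correction rate exceeds the threshold:
        these are the candidates /evolve would rewrite."""
        return [s for s in self.runs
                if self.corrections[s] / self.runs[s] > threshold]

log = SkillLog()
log.record("write-tests", corrected=True)
log.record("write-tests", corrected=True)
log.record("write-tests", corrected=False)
log.record("commit", corrected=False)
print(log.failing())  # ['write-tests']
```

The point is that the rewrite step is driven by real failure counts rather than by guessing which instructions were unclear.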
### Autonomous operation
Skill chaining, self-healing on verify failure, session journal auto-captured
on every response. Claude works through multi-step tasks without human
checkpoints between each step.
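A chain-with-self-healing loop of this kind can be sketched in a few lines; the retry limit, the `verify`/`heal` callbacks, and the demo step are hypothetical stand-ins for whatever the tool actually runs:

```python
def run_chain(steps, verify, heal, max_retries=2):
    """Run steps in order; on verify failure, attempt a heal and retry."""
    for step in steps:
        result = step()
        attempts = 0
        while not verify(result) and attempts < max_retries:
            result = heal(step, result)  # patch something, then re-run
            attempts += 1
        if not verify(result):
            raise RuntimeError(f"step {step.__name__} failed after heals")
    return "done"

# Demo: a step that fails until a heal fixes its environment.
state = {"fixed": False}
def flaky_step():
    return "ok" if state["fixed"] else "bad"
def verify(result):
    return result == "ok"
def heal(step, result):
    state["fixed"] = True
    return step()

print(run_chain([flaky_step], verify, heal))  # done
```

The verify-then-heal loop is what removes the need for a human checkpoint between steps: the chain only stops when a step stays broken after its retry budget.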
## Real numbers
Tested across 119 real sessions on a production codebase — legacy Java backend,
100+ functions, scheduler, email system. Not a demo project.
- Sessions crashed mid-task — Start Session recovered every time
- Skills patched themselves — the same failure never happened twice
- 21 undocumented functions caught on first drift detection run
- velocity.md reached 30+ entries — estimates now reflect real track record
No API keys. No database. No background service.
Plain markdown files + Python scripts. MIT license.
GitHub: github.com/YehudaFrankel/claude-recall