Most AI memory systems treat every fact equally, forever. That felt wrong to me.
If you tell Claude you use React, then six weeks later say you switched to Vue, both facts exist in memory with the same weight. The system has no way to know which one is current: you either manually delete the old fact, or the model gets confused.
That's the problem I was trying to solve.
## What I built
YourMemory is an MCP memory server that applies the Ebbinghaus forgetting curve to retrieval. Memories decay over time based on importance and how often they're recalled. Frequently accessed memories stay strong. Memories you never revisit fade out and get pruned automatically.
The retrieval score is:
```
score    = cosine_similarity × Ebbinghaus_strength
strength = importance × e^(−λ_eff × days) × (1 + recall_count × 0.2)
λ_eff    = 0.16 × (1 − importance × 0.8)
```
So results rank by both relevance and recency, not just one.
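As a minimal sketch of the scoring above (function and parameter names are my own, not the server's actual API):

```python
import math

def ebbinghaus_strength(importance, days, recall_count):
    # Effective decay rate: higher importance slows forgetting.
    lam = 0.16 * (1 - importance * 0.8)
    # Exponential decay, reinforced by how often the memory has been recalled.
    return importance * math.exp(-lam * days) * (1 + recall_count * 0.2)

def retrieval_score(cosine_similarity, importance, days, recall_count):
    # Final rank combines semantic relevance with memory strength.
    return cosine_similarity * ebbinghaus_strength(importance, days, recall_count)
```

With these numbers, a highly similar but 60-day-old memory can still rank below a moderately similar memory stored yesterday.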
## Benchmark results
I ran it against Mem0 on the LoCoMo dataset from Snap Research: 200 QA pairs across 10 multi-month conversation samples.
| Metric | YourMemory | Mem0 |
|---|---|---|
| LoCoMo Recall@5 | 34% | 18% |
| Stale memory precision | 100% | 0% |
The stale-memory result is the one I keep thinking about. Both systems had the same importance scores; the only difference was time. Decay handled it automatically, with no manual deletion needed.
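To make the stale-memory case concrete, here is the React/Vue example from the intro run through the decay formula (importance and ages are illustrative, not benchmark values):

```python
import math

def strength(importance, days, recall_count=0):
    lam = 0.16 * (1 - importance * 0.8)
    return importance * math.exp(-lam * days) * (1 + recall_count * 0.2)

# Same importance for both facts; only the age differs (six weeks vs. one day).
react = strength(importance=0.5, days=42)  # "I use React"
vue = strength(importance=0.5, days=1)     # "I switched to Vue"

print(react < vue)  # the newer fact outranks the stale one, no deletion needed
```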
## How it works in practice
Three MCP tools: `recall_memory`, `store_memory`, and `update_memory`. Add the server to your Claude `settings.json` and it persists context across sessions.
```json
{
  "mcpServers": {
    "yourmemory": {
      "command": "yourmemory"
    }
  }
}
```
Claude then follows a recall → store → update workflow on every task. Memories it surfaces frequently get reinforced. Memories it never touches decay.
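A sketch of that reinforcement loop (the data model and update rule here are assumptions for illustration, not the server's actual code):

```python
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float
    days_old: float
    recall_count: int = 0

def strength(m: Memory) -> float:
    lam = 0.16 * (1 - m.importance * 0.8)
    return m.importance * math.exp(-lam * m.days_old) * (1 + m.recall_count * 0.2)

def recall(m: Memory) -> None:
    # Each successful recall bumps the counter, boosting future strength.
    m.recall_count += 1

m = Memory("user prefers Vue", importance=0.7, days_old=10)
before = strength(m)
recall(m)
after = strength(m)
# after > before: recalled memories decay more slowly in ranking terms
```

Memories that never get recalled keep `recall_count = 0` and simply decay toward the prune threshold.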
## Stack
- PostgreSQL + pgvector for vector storage
- Ollama for local embeddings (nomic-embed-text), so no API costs
- FastAPI for the REST layer
- APScheduler for the automatic 24-hour decay job
## Where it's at
Early stage. The core decay model is solid, but setup is still a bit manual (Docker + Ollama required). I'd love feedback on the decay parameters and on whether this approach holds up at scale.
GitHub: https://github.com/sachitrafa/cognitive-ai-memory
Full benchmark methodology: https://github.com/sachitrafa/cognitive-ai-memory/blob/main/BENCHMARKS.md