AI agents forget everything between conversations. Elo Memory fixes that.
It implements EM-LLM (ICLR 2025): automatic event detection, surprise-based encoding, and human-like memory consolidation.
- Fast: ~5 ms retrieval; never scans the full memory store
- Smart: Bayesian surprise decides what to store
- Works with: Claude Code, OpenClaw, Codex, and any other MCP agent
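To make the surprise-based storage decision concrete, here is a minimal sketch in the spirit of EM-LLM: a token's surprise is its negative log-probability under the model, and an event boundary is placed wherever surprise spikes above a mean-plus-gamma-standard-deviations threshold. The function names and threshold rule are illustrative assumptions, not Elo Memory's actual API.

```python
import math

def surprise(prob: float) -> float:
    """Surprise proxy: -log p(token | context). Low-probability
    (unexpected) tokens yield high surprise."""
    return -math.log(prob)

def segment_events(token_probs: list[float], gamma: float = 1.0) -> list[list[int]]:
    """Split token indices into episodic events, starting a new event
    wherever surprise exceeds mean + gamma * std (illustrative rule,
    not the library's exact one)."""
    s = [surprise(p) for p in token_probs]
    mean = sum(s) / len(s)
    std = (sum((x - mean) ** 2 for x in s) / len(s)) ** 0.5
    threshold = mean + gamma * std

    events, current = [], [0]
    for i in range(1, len(s)):
        if s[i] > threshold:  # surprising token -> new event begins here
            events.append(current)
            current = []
        current.append(i)
    events.append(current)
    return events

# Mostly routine tokens (high probability) with two surprising ones:
probs = [0.9, 0.8, 0.85, 0.05, 0.9, 0.88, 0.02, 0.9]
print(segment_events(probs))  # → [[0, 1, 2], [3, 4, 5], [6, 7]]
```

Because boundaries only fall at surprise spikes, routine interactions get folded into existing events while unexpected ones start fresh, searchable episodes, which is why retrieval never needs to scan every stored memory.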
GitHub: https://github.com/server-elo/elo-memory
pip install elo-memory
The missing memory layer for AI agents.
Top comments (1)
The Bayesian surprise-based encoding is a clever approach; it mirrors how human memory actually works (we remember unexpected events, not routine ones). The ~5 ms retrieval time is impressive for episodic memory. Curious about the consolidation mechanism: does it handle conflicting memories gracefully? For example, if an agent learns a user prefers approach A early on but later switches to approach B, does the surprise signal help deprecate the stale preference? That's the main failure mode I've seen with simpler vector-store memory approaches in long-running agents.