I'm Zackery, a solo dev. I got frustrated with the current state of LLM memory (mostly just dumping embeddings into a vector DB and doing a top-K semantic search). It feels like a filing cabinet, not a brain.
I built Mnemosyne as a local, associative memory backend that plugs directly into Claude Desktop, Cursor, and Windsurf via the Model Context Protocol (MCP).
Instead of standard RAG, it uses a SQLite-backed graph with spreading activation, Hebbian strengthening, and time-based decay.
How it works:
- It uses SQLite FTS5 for the initial retrieval (BM25).
- It then performs a Breadth-First Search (BFS) across a localized subgraph of edges to spread activation energy to related memories.
- Memories that are frequently co-retrieved form stronger edges (long-term potentiation, LTP).
- Edges to rarely retrieved memories weaken over time, so unused trivia naturally decays.
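The loop above can be sketched in a few dozen lines. This is a minimal Python illustration of the idea, not the actual C# implementation: the `edges` dict stands in for Mnemosyne's SQLite edge table, the seed scores stand in for FTS5/BM25 results, and all names and parameters (`decay`, `threshold`, `max_hops`, `ltp`) are hypothetical.

```python
from collections import deque

# Hypothetical in-memory stand-in for the SQLite edge table:
# edges[node] -> list of (neighbor, weight in [0, 1]).
edges = {
    "sqlite":     [("fts5", 0.9), ("wal", 0.4)],
    "fts5":       [("bm25", 0.8), ("sqlite", 0.9)],
    "bm25":       [("ranking", 0.6)],
    "wal":        [("checkpoint", 0.5)],
    "ranking":    [],
    "checkpoint": [],
}

def spread_activation(seeds, decay=0.5, threshold=0.1, max_hops=2):
    """BFS from BM25-seeded nodes. Each hop attenuates energy by
    `decay` times the edge weight; energy below `threshold` stops
    spreading, which keeps the search localized."""
    activation = dict(seeds)  # node -> energy, seeded from BM25 scores
    frontier = deque((node, energy, 0) for node, energy in seeds.items())
    while frontier:
        node, energy, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for neighbor, weight in edges.get(node, []):
            passed = energy * weight * decay
            if passed < threshold:
                continue
            if passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                frontier.append((neighbor, passed, hops + 1))
    return activation

def hebbian_update(retrieved, ltp=0.1, decay=0.02):
    """After retrieval, strengthen edges between co-retrieved nodes
    (LTP) and slightly weaken every other edge, so unused
    associations fade over time."""
    hits = set(retrieved)
    for node, neighbors in edges.items():
        for i, (neighbor, weight) in enumerate(neighbors):
            if node in hits and neighbor in hits:
                weight = min(1.0, weight + ltp)    # co-retrieved: potentiate
            else:
                weight = max(0.0, weight - decay)  # unused: decay
            neighbors[i] = (neighbor, weight)

act = spread_activation({"sqlite": 1.0})
# "fts5" receives 1.0 * 0.9 * 0.5 = 0.45; "bm25" receives
# 0.45 * 0.8 * 0.5 = 0.18; "checkpoint" falls below the threshold.
hebbian_update(["sqlite", "fts5"])
```

The threshold doubles as the "localization" knob: it bounds how far energy can travel, so retrieval cost stays proportional to the activated neighborhood rather than the whole graph.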
It's a single, standalone binary (C#/.NET 8 AOT compiled) for Windows and Linux that runs entirely locally. Zero cloud dependencies. Your data never leaves your machine.
I'm charging a one-time $29 for early access to fund further development (I want to add direct Git repo ingestion next).
Would love to hear your thoughts on Hebbian memory models vs standard vector search, or any feedback on the implementation!
Happy to answer any questions about the architecture.