DEV Community

Hussein Mohamed

Why Every AI Tool You Use Forgets You (And How I Fixed It)

Open Claude Desktop. Explain your preferences. Switch to Cursor. Explain them again. Open a new chat tomorrow. Start from zero. Again.

Every AI tool today treats you like a stranger. Not because the models can't remember - because nothing connects them. There's no shared memory, no persistent identity, no accumulated context. Each session is a blank slate.

I kept running into this while using AI coding assistants and desktop clients across my workflow. The tools are powerful individually, but collectively they have amnesia. The same questions, the same corrections, the same context-setting ritual, every single time.

So I built GroundMemory.

What it is

GroundMemory is a persistent memory layer for AI agents. It runs as an MCP server (or Python library) that any MCP-compatible tool can connect to - Claude Desktop, Cursor, Cline, Windsurf, Claude Code, Open WebUI, n8n, and more.

Once connected, your agent has:

  • Long-term memory (MEMORY.md) - facts, preferences, decisions that survive forever
  • A user profile (USER.md) - who you are, how you work, what you care about
  • Agent instructions (AGENTS.md) - how the AI should behave with you
  • An entity relationship graph (RELATIONS.md) - people, projects, teams, and how they connect
  • Daily logs (daily/YYYY-MM-DD.md) - session-by-session context

At every session start, all of this is injected into the agent's context. During a session, the agent can search across everything using hybrid BM25 + vector search.
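I won't reproduce GroundMemory's exact scoring here, but a common way to combine a keyword ranking with a semantic one is reciprocal rank fusion (RRF). A minimal sketch, with made-up document IDs:

```python
# Sketch of hybrid search via reciprocal rank fusion (RRF).
# Inputs are two ranked lists of chunk IDs: one from BM25 (keyword)
# search, one from vector (semantic) search. IDs here are illustrative.

def rrf_merge(bm25_ids, vector_ids, k=60):
    """Merge two ranked ID lists; k dampens the weight of top ranks."""
    scores = {}
    for ranking in (bm25_ids, vector_ids):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["MEMORY.md#12", "daily/2024-05-01.md#3", "USER.md#2"]
vec = ["USER.md#2", "MEMORY.md#12", "RELATIONS.md#7"]
print(rrf_merge(bm25, vec))
```

Documents that rank well in both lists float to the top, even if neither search alone put them first.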

The key insight: one identity, every tool

The part that changed things for me wasn't just "memory for one assistant." It was connecting multiple tools to the same GroundMemory server.

Claude Desktop knows my communication preferences. Cursor knows my tech stack and architectural decisions. Cline knows the tasks I'm working on this week. And they all share that knowledge because they all read from and write to the same workspace.

You stop being a stranger every time you open a different tool.

Zero-setup, local-first

GroundMemory runs entirely on SQLite with FTS5 in its default mode. No API key, no GPU, no cloud service, no external dependencies. Install and run:

```shell
pip install groundmemory && groundmemory-mcp
```

Connect any MCP client to http://127.0.0.1:4242/mcp and you're done.
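The BM25 side of the default mode is just SQLite's FTS5 extension, which ships with most Python builds. Here's a standalone sketch of ranked keyword search over memory chunks; the schema and contents are mine for illustration, not GroundMemory's actual internals:

```python
import sqlite3

# Standalone FTS5 demo. Assumes your SQLite build includes FTS5,
# which is true for most Python distributions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(path, body)")
conn.executemany(
    "INSERT INTO chunks VALUES (?, ?)",
    [
        ("MEMORY.md", "User prefers concise answers and typed Python."),
        ("USER.md", "Backend engineer, works mostly in Python and Go."),
        ("AGENTS.md", "Never rewrite files without asking first."),
    ],
)
# bm25() returns lower scores for better matches, so ORDER BY
# ascending puts the most relevant chunk first.
rows = conn.execute(
    "SELECT path FROM chunks WHERE chunks MATCH ? ORDER BY bm25(chunks)",
    ("python",),
).fetchall()
print(rows)  # the two Python-mentioning files
```

No server, no index build step, no API key: the same properties the zero-setup mode leans on.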

Want vector search? Plug in any OpenAI-compatible embedding endpoint (OpenAI, Ollama, LM Studio) or use local sentence-transformers. The BM25-only mode always works as a fallback.
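"OpenAI-compatible" means the request shape is the familiar POST to /v1/embeddings, which is why Ollama and LM Studio slot in without code changes. A sketch of building that request; the base URL and model name are placeholders, not GroundMemory defaults:

```python
import json
import urllib.request

def embed_request(base_url, model, texts, api_key=None):
    """Build a POST for an OpenAI-compatible /v1/embeddings endpoint."""
    payload = json.dumps({"model": model, "input": texts}).encode()
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers like Ollama typically need no key
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/embeddings",
        data=payload, headers=headers, method="POST",
    )

# e.g. pointing at a local Ollama instance (hypothetical model name):
req = embed_request("http://localhost:11434", "nomic-embed-text", ["hello"])
print(req.full_url)
```

Swapping providers is just swapping the base URL and model string, which is what makes the fallback story clean.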

Everything is Markdown

All memory is stored as human-readable Markdown files in a local directory. You can browse them, edit them manually, version them with Git. Nothing is opaque. If the agent writes something wrong, you open the file and fix it.
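Because it's all plain files, "fix a wrong memory" really is just text editing, and scripting against the workspace needs nothing special. A sketch using a throwaway directory; the real workspace location and file contents will differ:

```python
import tempfile
from datetime import date
from pathlib import Path

# Throwaway directory mirroring the daily/YYYY-MM-DD.md layout above;
# GroundMemory's actual default workspace path may differ.
workspace = Path(tempfile.mkdtemp())
daily = workspace / "daily" / f"{date.today():%Y-%m-%d}.md"
daily.parent.mkdir(parents=True)

# Suppose the agent recorded something wrong...
daily.write_text("- User prefers tabs\n")

# ...correcting it is an ordinary file edit, no API required.
daily.write_text(daily.read_text().replace("tabs", "spaces"))
print(daily.read_text())
```

The same property is what makes Git versioning work: the memory is a directory of Markdown, so every tool you already have applies.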

How it compares

I looked at every major memory solution in this space before building this. Mem0, Letta, memsearch, Zep - they all solve parts of the problem, but none of them hit the combination I needed:

  • Zero-setup (no API key to get started)
  • Local-first and offline-capable
  • Human-readable Markdown storage
  • Structured memory tiers (not a flat semantic store)
  • MCP-native (not SDK-first with MCP bolted on)
  • Framework-agnostic (just infrastructure, not a framework)

There's a detailed comparison table in the repo README.

Try it

It's MIT licensed, has 449 tests, and the full docs cover everything from embedding providers to workspace isolation to the Python API.

I'd genuinely love feedback - especially from people using multiple MCP tools in their workflow. What would you want your AI to remember?
