DEV Community

Ruslan Manov

How a SQLite WAL Fix Grew into a 54-Tool MCP Memory Stack

sqlite-memory-mcp started as a safer replacement for JSONL-based MCP memory. At v3.4.0 it is a 54-tool SQLite stack with tasks, bridge sync, collaboration, public-knowledge workflows, and optional hybrid search.

sqlite-memory-mcp started with a narrow goal: stop local memory corruption when
multiple Claude Code sessions touch the same store.

The official memory-server pattern is simple, but a flat file becomes fragile as
soon as more than one process writes to it. I wanted the same local-first feel,
but with transactional storage, better search, and room to grow.

SQLite was the obvious starting point.

By v3.4.0, that starting point has grown into a broader stack:

  • 54 MCP tools
  • 7 focused servers plus an optional unified server
  • SQLite WAL for concurrent local access
  • FTS5 BM25 search, with optional semantic fusion through sqlite-vec
  • session recall and project search
  • structured task management
  • git-based bridge sync
  • collaborator and public-knowledge workflows
  • entity linking and context/intelligence tools
  • an optional PyQt6 task tray app

Why SQLite was the right base layer

For this kind of workflow, SQLite buys a lot:

  • one local database file
  • no daemon to run
  • no cloud dependency
  • ACID transactions
  • WAL mode for concurrent readers and writers
  • FTS5 in standard SQLite
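
The base layer can be sketched in a few lines with Python's standard `sqlite3` module. This is a generic illustration of WAL mode, not the project's actual code; the table and pragma choices here are assumptions.

```python
import sqlite3

def open_store(path: str) -> sqlite3.Connection:
    """Open a local store in WAL mode (illustrative sketch, not the project's code)."""
    conn = sqlite3.connect(path, timeout=5.0)
    # WAL lets concurrent readers proceed while one writer appends to the log,
    # which is exactly the multi-session problem the project set out to fix.
    conn.execute("PRAGMA journal_mode=WAL")
    # NORMAL is the usual WAL pairing: durable enough for local tooling, faster than FULL.
    conn.execute("PRAGMA synchronous=NORMAL")
    return conn

conn = open_store("memory.db")
conn.execute("CREATE TABLE IF NOT EXISTS memories (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO memories (body) VALUES (?)", ("hello",))
conn.commit()
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # wal
```

Note that WAL is a property of the database file, so every process opening the same file gets the same concurrency behavior.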

The point was never "SQLite beats every database".

The point was: for a local MCP memory stack that lives next to Claude Code,
SQLite gives you reliability, search, and portability without introducing more
infrastructure than the problem needs.

The release progression in plain English

Here is the shortest accurate summary of how the project evolved.

v0.1.0: replace JSONL with SQLite WAL

The first release shipped 12 tools in one server:

  • the 9 core memory tools from the official MCP server
  • session_save
  • session_recall
  • search_by_project

This made the project useful immediately: same core memory workflow, but backed
by SQLite with WAL and FTS5.
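The search side of that workflow can be sketched with plain FTS5 and its built-in `bm25()` ranking. The table and column names below are illustrative, not the project's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table; bm25() returns lower values for better matches.
conn.execute("CREATE VIRTUAL TABLE memory_fts USING fts5(project, body)")
conn.executemany(
    "INSERT INTO memory_fts (project, body) VALUES (?, ?)",
    [
        ("mcp", "fixed WAL checkpoint starvation under concurrent writers"),
        ("mcp", "session recall now groups observations by project"),
        ("web", "unrelated note about CSS grid"),
    ],
)
rows = conn.execute(
    "SELECT project, body FROM memory_fts "
    "WHERE memory_fts MATCH ? ORDER BY bm25(memory_fts) LIMIT 5",
    ("wal",),
).fetchall()
print(rows[0][0])  # mcp
```

A project-scoped search like `search_by_project` is then just an extra `MATCH` clause or `WHERE` filter on the same table.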

v0.2.0: move memory between machines with git

The next release added bridge sync:

  • bridge_push
  • bridge_pull
  • bridge_status

That was the first step from "single-machine memory" toward "local-first memory
that can travel".
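Conceptually, a git bridge is "stage a snapshot, commit, push". The sketch below is a hedged illustration of that idea using `subprocess`; the snapshot filename, function names, and export format are hypothetical, and the real tools' conflict handling is not shown.

```python
import subprocess
from pathlib import Path

def stage_snapshot(export_path: Path, repo: Path) -> Path:
    """Copy an exported memory snapshot into the bridge repo's working tree."""
    target = repo / "memory-export.jsonl"  # hypothetical filename
    target.write_text(export_path.read_text())
    return target

def bridge_push(export_path: Path, repo: Path, message: str = "memory sync") -> None:
    """Stage, commit, and push the snapshot (requires a configured git remote)."""
    target = stage_snapshot(export_path, repo)
    subprocess.run(["git", "-C", str(repo), "add", target.name], check=True)
    subprocess.run(["git", "-C", str(repo), "commit", "-m", message], check=True)
    subprocess.run(["git", "-C", str(repo), "push"], check=True)
```

`bridge_pull` would be the mirror image: `git pull`, then import the snapshot back into the local SQLite store under whatever merge policy the project defines.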

v0.3.0 and v0.4.0: tasks and desktop workflow

v0.3.0 added task management and HTML kanban reporting.

v0.4.0 added the PyQt6 task tray app and utility scripts around the same SQLite
database.

At that point the project was no longer just a memory backend. It became a
practical daily workflow tool.

v0.6.0 through v0.9.0: collaboration and public knowledge

The next wave added:

  • collaborator management
  • queued knowledge sharing
  • review flows for imported knowledge
  • public-knowledge publishing requests
  • ratings and verification metadata

One detail worth stating carefully: these workflows are review-oriented. The
useful part is not "viral sharing". The useful part is that shared knowledge can
be staged, inspected, and accepted deliberately.

v3.0.0: intelligence-layer expansion

v3.0.0 was the large historical expansion point.

It introduced the context/intelligence layer on top of the existing memory,
tasks, bridge, and collaboration features:

  • task/entity linking
  • context assessment and resume flows
  • candidate-claim extraction and promotion
  • context-pack building
  • impact explanation

Important footnote: v3.0.0 shipped 49 tools in one monolithic server. The later
54-tool split-server layout came after that.

v3.1.x to v3.4.0: split architecture, hybrid search, and hardening

The current line added or stabilized several things:

  • split into focused MCP servers to make tool exposure more manageable
  • optional unified server for people who want one process
  • optional hybrid search with sqlite-vec
  • recurring task support and more task/context integration
  • security and hardening fixes across bridge, collaboration, schema, and search
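
The "semantic fusion" idea can be illustrated independently of sqlite-vec: take a BM25-ranked result list and a vector-similarity-ranked list and merge them with reciprocal rank fusion (RRF). This is a generic sketch of the technique, not the project's actual scoring code.

```python
def rrf_fuse(keyword_ranked, vector_ranked, k=60):
    """Reciprocal rank fusion: score each doc by the sum of 1/(k + rank) per list."""
    scores = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# A doc ranked well in both lists beats one that tops only a single list.
fused = rrf_fuse(["a", "b", "c"], ["b", "d", "a"])
print(fused[0])  # b
```

RRF is attractive here because it needs no score normalization between BM25 values and cosine similarities; only the ranks matter.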

What the current architecture actually looks like

At v3.4.0 the project exposes 54 tools across these servers:

  • sqlite_memory — core 9 memory tools
  • sqlite_tasks — task CRUD and task workflow tools
  • sqlite_session — session recall and context-health tools
  • sqlite_bridge — bridge sync and shared-task flows
  • sqlite_collab — collaborator and public-knowledge tools
  • sqlite_entity — task/entity linking and entity-maintenance helpers
  • sqlite_intel — context and intelligence tools
  • sqlite_unified — optional all-in-one server
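
Wiring the split servers into a client typically means one entry per server in the MCP configuration. The module paths below are placeholders, not the project's documented install commands; check the repo's README for the real ones.

```json
{
  "mcpServers": {
    "sqlite_memory": {
      "command": "python",
      "args": ["-m", "sqlite_memory_mcp.memory_server"]
    },
    "sqlite_tasks": {
      "command": "python",
      "args": ["-m", "sqlite_memory_mcp.tasks_server"]
    }
  }
}
```

The practical benefit of the split is that you only expose the tool surface you actually use, instead of 54 tools at once.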

That split matters because the project outgrew the original one-file server.

What changed in the latest hardening cycle

The recent v3.3.x line is not about flashy new marketing bullets. It is about
making the stack safer and more predictable.

Those tags include fixes for:

  • an FTS5 injection issue
  • path traversal risks in bridge/runtime paths
  • collaborator trust-boundary hardening
  • additional schema indexes
  • read_graph performance issues
  • bridge logging and TaskDB SQL cleanup

That is the right kind of work for a project at this stage.

What I think is actually interesting here

The interesting part is not the raw tool count.

The interesting part is that a local-first SQLite database can sit underneath a
surprisingly broad MCP workflow without giving up the properties that made it
useful in the first place:

  • easy backup
  • easy inspection
  • no service orchestration
  • no mandatory cloud hop
  • direct ownership of the data

The project is bigger now, but the center of gravity is the same:

a local SQLite file that Claude Code can use safely across repeated sessions.

If you want to try it

Current repo: https://github.com/RMANOV/sqlite-memory-mcp

Latest stable tag in the repo right now: v3.4.0

If you only want drop-in memory compatibility, start with the core server.

If you want the full stack, add the companion servers or use the unified server.

That was the original goal and it is still the point of the project: keep memory
local, durable, searchable, and useful enough to support real daily work.
