MnemeCache is an open-source distributed in-memory cache written from scratch in Rust. It is not a Redis wrapper or drop-in replacement — it's a ground-up rethink of how a modern cache should be built: separation of hot memory and persistence, mTLS security by default, and Raft-based HA without the complexity tax.
🐙 GitHub: github.com/mneme-labs/mneme
🐳 Docker Hub: hub.docker.com/r/mnemelabs
## Why not just use Redis?
Redis is a single-process C daemon carrying more than 15 years of accumulated complexity. Persistence is bolted on (RDB snapshots or AOF logging), TLS is optional and cumbersome to configure in clusters, and HA requires Sentinel, a separate fleet of processes with its own failure modes.
MnemeCache is designed so that persistence, security, and HA are architectural defaults, not add-ons.
## What makes it different
- WAL + Keeper nodes — Core never touches disk. Writes stream over mTLS to dedicated Keeper processes that own all disk I/O and run snapshot compaction independently.
- Raft consensus — 3-node HA cluster with automatic leader election and sub-second failover, no Sentinel needed.
- Read replicas — horizontal read scaling with eventual consistency on a dedicated port.
- mTLS by default — Core generates a CA on first boot and shares it with the cluster automatically. Zero manual certificate management for Docker deployments.
- Built in Rust — no GC pauses, memory-safe, async I/O via Tokio.
- Prometheus metrics on every node, Grafana dashboard included.
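To make the first bullet concrete, here is a minimal sketch of the write-path shape it describes. All names here are hypothetical, not MnemeCache's real API, and an in-process channel stands in for the mTLS stream: Core applies each write to its in-memory map only, while a "Keeper" task drains the record stream.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Hypothetical WAL record; field names are illustrative only.
#[allow(dead_code)]
struct WalRecord {
    offset: u64,
    key: String,
    value: String,
}

// Apply writes in memory and stream them out; returns the hot store and the
// number of records the "Keeper" drained.
fn write_path(writes: &[(&str, &str)]) -> (HashMap<String, String>, usize) {
    let (tx, rx) = mpsc::channel::<WalRecord>();

    // "Keeper": the only party that would ever touch disk in the real design.
    let keeper = thread::spawn(move || rx.iter().count());

    let mut hot = HashMap::new();
    for (i, (k, v)) in writes.iter().enumerate() {
        hot.insert(k.to_string(), v.to_string()); // hot path: memory only
        tx.send(WalRecord {
            offset: i as u64 + 1,
            key: k.to_string(),
            value: v.to_string(),
        })
        .unwrap();
    }
    drop(tx); // close the stream so the Keeper drains and exits

    (hot, keeper.join().unwrap())
}

fn main() {
    let (hot, drained) = write_path(&[("hello", "world"), ("answer", "42")]);
    assert_eq!(hot.get("hello").map(String::as_str), Some("world"));
    assert_eq!(drained, 2);
    println!("streamed {drained} records");
}
```

The point of the shape: the in-memory insert never waits on disk, because everything durable happens on the other side of the stream.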
## Four node types, four Docker images
MnemeCache separates concerns into distinct roles. Each ships as its own image:
| Image tag | Role |
|---|---|
| `mnemelabs/core` | Cluster primary: hot store, WAL, Raft |
| `mnemelabs/keeper` | Persistence layer: WAL drain, snapshots |
| `mnemelabs/cli` | Command-line client |
## Up in 60 seconds
```shell
docker run -d \
  -p 6379:6379 -p 9090:9090 \
  -e MNEME_ADMIN_PASSWORD=secret \
  -v mneme-data:/var/lib/mneme \
  mnemelabs/core:latest
```

```shell
mneme-cli --insecure -u admin -p secret ping
# PONG
mneme-cli --insecure -u admin -p secret set hello world
mneme-cli --insecure -u admin -p secret get hello
# world
```
## Full cluster with Docker Compose
```shell
git clone https://github.com/mneme-labs/mneme.git
cd mneme

# Core + 2 Keepers
docker compose --profile cluster up -d

# Core + 3 Keepers + 2 Replicas + Prometheus + Grafana
docker compose --profile full up -d

# 3-node Raft HA cluster
docker compose --profile ha up -d
```
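These topologies work because Docker Compose only starts services whose `profiles` list matches the `--profile` flag. A minimal sketch of how such a file might gate services — the service names and layout are illustrative, not the repo's actual compose file:

```yaml
services:
  core:
    image: mnemelabs/core:latest
    profiles: ["cluster", "full", "ha"]  # started by any of the three profiles
  keeper-1:
    image: mnemelabs/keeper:latest
    profiles: ["cluster", "full"]
  grafana:
    image: grafana/grafana:latest
    profiles: ["full"]                   # monitoring only with --profile full
```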
## How the WAL pipeline works
Core holds everything in memory and streams every committed write to Keeper nodes over mTLS. Keepers are the only processes that ever touch disk — they drain WAL segments, write them to cold storage, and compact them into snapshots on a configurable schedule.
When Core restarts, it asks a Keeper for its last committed offset, loads the latest snapshot, replays only the delta, and opens for connections. Recovery time is bounded by the snapshot interval, not by dataset size.
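The recovery step can be sketched as follows. This is illustrative only — the types and function names are hypothetical, not MnemeCache's API — but it shows why replay cost tracks the snapshot interval: only entries past the snapshot's offset are reapplied.

```rust
use std::collections::HashMap;

// Hypothetical WAL entry: an offset plus a key/value write.
#[derive(Clone)]
struct WalEntry {
    offset: u64,
    key: String,
    value: String,
}

// Start from the snapshot, then replay only the delta past its offset.
fn recover(
    snapshot: HashMap<String, String>,
    snapshot_offset: u64,
    wal: &[WalEntry],
) -> HashMap<String, String> {
    let mut store = snapshot;
    // Entries at or below the snapshot offset are already folded into the
    // snapshot, so only newer entries need replaying.
    for e in wal.iter().filter(|e| e.offset > snapshot_offset) {
        store.insert(e.key.clone(), e.value.clone());
    }
    store
}

fn main() {
    let snapshot = HashMap::from([("hello".to_string(), "world".to_string())]);
    let wal = vec![
        WalEntry { offset: 1, key: "hello".into(), value: "world".into() },
        WalEntry { offset: 2, key: "answer".into(), value: "42".into() },
    ];
    let store = recover(snapshot, 1, &wal);
    assert_eq!(store.len(), 2);
    assert_eq!(store.get("answer").map(String::as_str), Some("42"));
}
```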
This separation means you can scale persistence independently, run multiple Keepers for redundancy, and keep Core's write path free of disk latency.
MnemeCache is still early and under active development; contributions and feedback are very welcome.