<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: VUSAL RAHIMOV</title>
    <description>The latest articles on DEV Community by VUSAL RAHIMOV (@vusalrahimov).</description>
    <link>https://dev.to/vusalrahimov</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3837705%2Fc762280e-b93e-4f95-b3ab-69c6ca9c5f5a.jpg</url>
      <title>DEV Community: VUSAL RAHIMOV</title>
      <link>https://dev.to/vusalrahimov</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vusalrahimov"/>
    <language>en</language>
    <item>
      <title>I built a Redis-alternative distributed cache in Rust — with WAL persistence, mTLS, and Raft consensus</title>
      <dc:creator>VUSAL RAHIMOV</dc:creator>
      <pubDate>Thu, 26 Mar 2026 21:14:13 +0000</pubDate>
      <link>https://dev.to/vusalrahimov/i-built-a-redis-alternative-distributed-cache-in-rust-with-wal-persistence-mtls-and-raft-11a6</link>
      <guid>https://dev.to/vusalrahimov/i-built-a-redis-alternative-distributed-cache-in-rust-with-wal-persistence-mtls-and-raft-11a6</guid>
      <description>&lt;p&gt;&lt;strong&gt;MnemeCache&lt;/strong&gt; is an open-source distributed in-memory cache written from scratch in Rust. It is not a Redis wrapper or drop-in replacement — it's a ground-up rethink of how a modern cache should be built: separation of hot memory and persistence, mTLS security by default, and Raft-based HA without the complexity tax.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🐙 &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/mneme-labs/mneme" rel="noopener noreferrer"&gt;github.com/mneme-labs/mneme&lt;/a&gt;&lt;br&gt;
🐳 &lt;strong&gt;Docker Hub:&lt;/strong&gt; &lt;a href="https://hub.docker.com/r/mnemelabs" rel="noopener noreferrer"&gt;hub.docker.com/r/mnemelabs&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;Why not just use Redis?&lt;/h2&gt;

&lt;p&gt;Redis is a single-process C daemon carrying over 15 years of accumulated complexity. Persistence is bolted on (RDB snapshots or AOF logging). TLS is optional and cumbersome to configure in clusters. And HA means running Sentinel, a separate fleet of processes with its own failure modes.&lt;/p&gt;

&lt;p&gt;MnemeCache is designed so that &lt;strong&gt;persistence, security, and HA are architectural defaults&lt;/strong&gt;, not add-ons.&lt;/p&gt;




&lt;h2&gt;What makes it different&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;WAL + Keeper nodes&lt;/strong&gt; — Core never touches disk. Writes stream over mTLS to dedicated Keeper processes that own all disk I/O and run snapshot compaction independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Raft consensus&lt;/strong&gt; — 3-node HA cluster with automatic leader election and sub-second failover, no Sentinel needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read replicas&lt;/strong&gt; — horizontal read scaling with eventual consistency on a dedicated port.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;mTLS by default&lt;/strong&gt; — Core generates a CA on first boot and shares it with the cluster automatically. Zero manual certificate management for Docker deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built in Rust&lt;/strong&gt; — no GC pauses, memory-safe, async I/O via Tokio.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prometheus metrics&lt;/strong&gt; on every node, Grafana dashboard included.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;Four node types, four Docker images&lt;/h2&gt;

&lt;p&gt;MnemeCache separates concerns into distinct roles. Each ships as its own image:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Image tag&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mnemelabs/core&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Cluster primary — hot store, WAL, Raft&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mnemelabs/keeper&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Persistence layer — WAL drain, snapshots&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mnemelabs/cli&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CLI&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
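
&lt;p&gt;As a rough illustration of how the roles compose, a minimal two-service setup might look like this. The service layout, the Keeper's volume, and everything except &lt;code&gt;MNEME_ADMIN_PASSWORD&lt;/code&gt; and the image tags are my own sketch; the repo's &lt;code&gt;docker-compose.yml&lt;/code&gt; profiles are the real reference.&lt;/p&gt;

```yaml
# Hypothetical sketch: one Core (hot store) plus one Keeper (persistence).
# Only MNEME_ADMIN_PASSWORD, the image tags, and the ports come from the
# article; the rest is assumed for illustration.
services:
  core:
    image: mnemelabs/core:latest
    ports:
      - "6379:6379"   # client port (from the quickstart)
      - "9090:9090"   # metrics port (from the quickstart)
    environment:
      MNEME_ADMIN_PASSWORD: secret
  keeper:
    image: mnemelabs/keeper:latest
    volumes:
      - keeper-data:/var/lib/mneme   # Keepers own all disk I/O
volumes:
  keeper-data:
```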




&lt;h2&gt;Up in 60 seconds&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 6379:6379 &lt;span class="nt"&gt;-p&lt;/span&gt; 9090:9090 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;MNEME_ADMIN_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; mneme-data:/var/lib/mneme &lt;span class="se"&gt;\&lt;/span&gt;
  mnemelabs/core:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mneme-cli &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; admin &lt;span class="nt"&gt;-p&lt;/span&gt; secret ping
&lt;span class="c"&gt;# PONG&lt;/span&gt;

mneme-cli &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; admin &lt;span class="nt"&gt;-p&lt;/span&gt; secret &lt;span class="nb"&gt;set &lt;/span&gt;hello world
mneme-cli &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; admin &lt;span class="nt"&gt;-p&lt;/span&gt; secret get hello
&lt;span class="c"&gt;# world&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;Full cluster with Docker Compose&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/mneme-labs/mneme.git
&lt;span class="nb"&gt;cd &lt;/span&gt;mneme

&lt;span class="c"&gt;# Core + 2 Keepers&lt;/span&gt;
docker compose &lt;span class="nt"&gt;--profile&lt;/span&gt; cluster up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# Core + 3 Keepers + 2 Replicas + Prometheus + Grafana&lt;/span&gt;
docker compose &lt;span class="nt"&gt;--profile&lt;/span&gt; full up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# 3-node Raft HA cluster&lt;/span&gt;
docker compose &lt;span class="nt"&gt;--profile&lt;/span&gt; ha up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;How the WAL pipeline works&lt;/h2&gt;

&lt;p&gt;Core holds everything in memory and streams every committed write to Keeper nodes over mTLS. Keepers are the only processes that ever touch disk — they drain WAL segments, write them to cold storage, and compact them into snapshots on a configurable schedule.&lt;/p&gt;

&lt;p&gt;When Core restarts, it asks a Keeper &lt;em&gt;"what's my last committed offset?"&lt;/em&gt;, loads the latest snapshot, replays only the delta, and starts accepting connections. Recovery time is bounded by the snapshot interval, not by dataset size.&lt;/p&gt;

&lt;p&gt;This separation means you can scale persistence independently, run multiple Keepers for redundancy, and keep Core's write path free of disk latency.&lt;/p&gt;
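
&lt;p&gt;The recovery path above can be sketched in a few lines. This is illustrative pseudologic, not MnemeCache's actual API: the snapshot and entry shapes, and the assumption that a Keeper streams back only the post-snapshot delta, are mine.&lt;/p&gt;

```python
# Illustrative sketch of Core's restart path: start from the latest
# snapshot, then apply only the WAL delta the Keeper streams back.
# Data shapes (dicts with "data"/"offset" and "op"/"key"/"value") are
# assumed for illustration; they are not MnemeCache's real formats.

def recover(snapshot, delta_entries):
    """Rebuild the hot store from a snapshot plus the WAL delta."""
    store = dict(snapshot["data"])      # state as of the snapshot
    for entry in delta_entries:         # entries committed after the snapshot
        if entry["op"] == "set":
            store[entry["key"]] = entry["value"]
        elif entry["op"] == "del":
            store.pop(entry["key"], None)
    return store                        # ready to accept connections

# Cost scales with the delta, which is why recovery time is bounded by
# the snapshot interval rather than by total dataset size.
```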




&lt;p&gt;Still early and under active development — contributions and feedback very welcome.&lt;/p&gt;

&lt;p&gt;⭐ &lt;a href="https://github.com/mneme-labs/mneme" rel="noopener noreferrer"&gt;github.com/mneme-labs/mneme&lt;/a&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>tokio</category>
      <category>redis</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>I Built a Redis Alternative in Rust — MnemeCache</title>
      <dc:creator>VUSAL RAHIMOV</dc:creator>
      <pubDate>Sat, 21 Mar 2026 22:45:49 +0000</pubDate>
      <link>https://dev.to/vusalrahimov/i-built-a-redis-alternative-in-rust-mnemecache-1dd2</link>
      <guid>https://dev.to/vusalrahimov/i-built-a-redis-alternative-in-rust-mnemecache-1dd2</guid>
      <description>&lt;p&gt;Redis is great. But it has problems I could not ignore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TLS is off by default&lt;/li&gt;
&lt;li&gt;No per-request consistency control&lt;/li&gt;
&lt;li&gt;Basic user permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I built &lt;strong&gt;MnemeCache&lt;/strong&gt;, named after Mnemosyne, the Greek goddess of memory.&lt;/p&gt;


&lt;h2&gt;How It Works&lt;/h2&gt;

&lt;p&gt;Two types of nodes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core (God Node)&lt;/strong&gt; — holds everything in RAM, serves all requests, never touches disk&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keepers&lt;/strong&gt; — save data to disk via WAL + snapshots, push data back when Core restarts&lt;/p&gt;


&lt;h2&gt;What Makes It Different&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TLS always on&lt;/strong&gt; — auto-generated, no configuration needed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-request consistency:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EVENTUAL  → fastest
QUORUM    → majority must confirm (default)  
ALL       → every node must confirm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
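
&lt;p&gt;The three levels reduce to one question: how many node confirmations must arrive before a write counts as committed? A toy model of that threshold (my own sketch, not MnemeCache's code; only the level names come from the article):&lt;/p&gt;

```python
# Toy model of the three consistency levels as ack thresholds.
# Level names are from the article; the functions are illustrative.

def required_acks(level, cluster_size):
    """How many node confirmations each level demands."""
    if level == "EVENTUAL":
        return 1                         # fastest: one node is enough
    if level == "QUORUM":
        return cluster_size // 2 + 1     # majority (the default)
    if level == "ALL":
        return cluster_size              # every node must confirm
    raise ValueError(f"unknown level: {level}")

def is_committed(level, acks, cluster_size):
    """True once acks reach the level's threshold."""
    need = required_acks(level, cluster_size)
    # min(acks, need) == need holds exactly when acks is at least need
    return min(acks, need) == need
```

On a 3-node cluster, QUORUM commits after 2 confirmations, while ALL waits for all 3.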



&lt;p&gt;&lt;strong&gt;Real RBAC&lt;/strong&gt; — admin, readwrite, readonly roles with per-database restrictions&lt;/p&gt;
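
&lt;p&gt;Conceptually, such a check combines a per-role command whitelist with a per-user database allowlist. A sketch of the idea (the three role names match the article; the command names and the model itself are assumptions):&lt;/p&gt;

```python
# Illustrative RBAC check: each role grants a command set, and each user
# is scoped to specific databases. Not MnemeCache's real permission model.
ROLE_COMMANDS = {
    "admin":     {"get", "set", "del", "config"},
    "readwrite": {"get", "set", "del"},
    "readonly":  {"get"},
}

def allowed(role, command, db, allowed_dbs):
    """Permit a command only if the role grants it and db is in scope."""
    return command in ROLE_COMMANDS.get(role, set()) and db in allowed_dbs
```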




&lt;h2&gt;Honest Status&lt;/h2&gt;

&lt;p&gt;Not production-ready yet. No published benchmarks. Linux only. It speaks a custom protocol, so existing Redis clients won't work.&lt;/p&gt;

&lt;p&gt;I am sharing this for feedback from people who use cache systems daily.&lt;/p&gt;




&lt;p&gt;GitHub → &lt;a href="https://github.com/vusalrahimov/mnemecache" rel="noopener noreferrer"&gt;github.com/vusalrahimov/mnemecache&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thoughts? Leave a comment below.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>redis</category>
      <category>distributedsystems</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
