<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vibhanshu Garg</title>
    <description>The latest articles on DEV Community by Vibhanshu Garg (@vibhanshu_garg_01741359bc).</description>
    <link>https://dev.to/vibhanshu_garg_01741359bc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1676528%2F71f9c006-2b92-4cfb-8178-f14bf4d9bb69.jpg</url>
      <title>DEV Community: Vibhanshu Garg</title>
      <link>https://dev.to/vibhanshu_garg_01741359bc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vibhanshu_garg_01741359bc"/>
    <language>en</language>
    <item>
      <title>Why Pooling Local RAM Beats Buying Bigger Machines</title>
      <dc:creator>Vibhanshu Garg</dc:creator>
      <pubDate>Fri, 19 Dec 2025 16:36:26 +0000</pubDate>
      <link>https://dev.to/vibhanshu_garg_01741359bc/why-pooling-local-ram-beats-buying-bigger-machines-4612</link>
      <guid>https://dev.to/vibhanshu_garg_01741359bc/why-pooling-local-ram-beats-buying-bigger-machines-4612</guid>
      <description>&lt;p&gt;We've all been there.&lt;/p&gt;

&lt;p&gt;You’re running a heavy build, training a model, or processing a massive dataset. Suddenly, everything grinds to a halt. You check &lt;code&gt;htop&lt;/code&gt; and see the red bar of death: &lt;strong&gt;Swap&lt;/strong&gt;. Your 32GB MacBook is gasping for air.&lt;/p&gt;

&lt;p&gt;Meanwhile, your coworker’s laptop is sitting idle on the desk next to you. The office server is humming along at 5% utilization.&lt;/p&gt;

&lt;p&gt;In that moment, the typical engineer’s instinct (including mine) is: &lt;em&gt;"I need a bigger machine."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We instinctively reach for the credit card to upgrade to 64GB or 128GB. But lately, I’ve realized that this instinct isn’t just expensive—it’s &lt;strong&gt;technically backwards&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Bigger is Better" Trap
&lt;/h2&gt;

&lt;p&gt;The conventional wisdom goes like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;More RAM on one machine = better performance&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It feels true because local memory is usually the fastest thing we have. But there’s a catch that I learned the hard way while building distributed systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As you scale up a single machine, you hit a wall.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you buy a massive workstation or a high-memory cloud instance, you aren't just getting more RAM; you're getting more headaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Bandwidth bottlenecks&lt;/strong&gt;: A single memory bus can only push so much data.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;NUMA penalties&lt;/strong&gt;: On big multi-socket servers, accessing RAM on the "other" CPU plays havoc with latency.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;The Blast Radius&lt;/strong&gt;: If that one expensive machine crashes, your entire workload dies with it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Compare that to the laptop or server sitting next to you. It has its own memory controller, its own bus, and its own CPU.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory bandwidth scales linearly when you go wide.&lt;/strong&gt; Two machines with 64GB RAM have roughly double the aggregate bandwidth of one machine with 128GB.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Don't Share
&lt;/h2&gt;

&lt;p&gt;So if "going wide" is better, why don't we do it for memory?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Because it's hard.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We have great tools for sharing CPU (Kubernetes) and storage (S3, network drives). But memory? Memory has always been trapped inside the box. It’s strictly "local."&lt;/p&gt;

&lt;p&gt;This leads to what I call &lt;strong&gt;Stranded RAM&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Right now, if you look around your office or data center, about &lt;strong&gt;60-80% of the total RAM is doing absolutely nothing&lt;/strong&gt;. It's provisioned, paid for, and powered on—but it's completely inaccessible to the one process that actually needs it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It's like having five cars in your driveway but being unable to drive to work because the one you're sitting in is out of gas.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Enter &lt;a href="https://memcloud.vercel.app/" rel="noopener noreferrer"&gt;MemCloud&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;I built MemCloud because I wanted to break this limitation. I wanted to treat the RAM across my local network—my laptop, my desktop, my Raspberry Pi cluster—as one single, giant pool of memory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MemCloud doesn't replace your local RAM.&lt;/strong&gt; That would be silly; network latency is real.&lt;/p&gt;

&lt;p&gt;Instead, it fits into the "warm" layer of the hierarchy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;CPU Cache&lt;/strong&gt; (Instant)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Local RAM&lt;/strong&gt; (~100 nanoseconds)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;MemCloud / Remote RAM&lt;/strong&gt; (~10-30 microseconds)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;NVMe SSD&lt;/strong&gt; (~100 microseconds)&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Disk&lt;/strong&gt; (Milliseconds)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remote RAM is still &lt;strong&gt;roughly 3–10x faster than an NVMe SSD&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For things like build caches, ML embeddings, temporary compiler artifacts, or analytics scratch space, it is the perfect middle ground. You get the speed of memory without the cost of a monster workstation.&lt;/p&gt;
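&lt;p&gt;To make that concrete, here is a minimal sketch of the tiered read path in Python. This is illustrative only: the &lt;code&gt;read&lt;/code&gt; helper and the plain-dict stand-ins for each tier are hypothetical, not MemCloud's actual API.&lt;/p&gt;

```python
# Minimal sketch of a tiered read path: local RAM first, then the
# pooled remote RAM, then NVMe. Hypothetical helper, not MemCloud's
# real API; plain dicts stand in for each storage tier.
def read(key, local_ram, remote_pool, nvme):
    # 1. Local RAM (~100 ns): fastest, always checked first.
    if key in local_ram:
        return local_ram[key]
    # 2. Pooled remote RAM (~10-30 us): the "warm" layer.
    value = remote_pool.get(key)
    if value is not None:
        local_ram[key] = value  # promote hot data back to local RAM
        return value
    # 3. NVMe (~100 us): cold fallback; returns None if the key is absent.
    return nvme.get(key)
```

&lt;p&gt;The point of the sketch is the ordering: the remote pool is consulted before touching disk, so "warm" data never pays NVMe latency, and anything read from a peer gets promoted back into local RAM.&lt;/p&gt;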

&lt;h2&gt;
  
  
  Real Numbers
&lt;/h2&gt;

&lt;p&gt;To prove to myself this wasn't just a fun theory, I benchmarked it.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Storage Type&lt;/th&gt;
&lt;th&gt;Latency&lt;/th&gt;
&lt;th&gt;What it feels like&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Local RAM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~0.1 µs&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Pooled RAM (LAN)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~10–30 µs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Extremely Snappy&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;NVMe SSD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~100 µs&lt;/td&gt;
&lt;td&gt;Fast I/O&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cloud Object Store&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~50,000 µs&lt;/td&gt;
&lt;td&gt;Waiting for a download&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When you offload a few gigabytes of "warm" data to a neighbor node, your local machine breathes a sigh of relief. The swap thrashing stops. The UI becomes responsive again.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vision: Infrastructure as a Commons
&lt;/h2&gt;

&lt;p&gt;There is a cost argument here—using what you already have is cheaper than buying new gear. But for me, the exciting part is the shift in mindset.&lt;/p&gt;

&lt;p&gt;When we view memory as a &lt;strong&gt;shared resource&lt;/strong&gt; rather than a private possession of a single kernel, amazing architectures become possible:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;CI pipelines&lt;/strong&gt; can borrow 100GB of RAM from office workstations at night.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Edge devices&lt;/strong&gt; can pool resources to run AI models they couldn't handle individually.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Teams&lt;/strong&gt; can share a massive in-memory dataset without everyone needing a copy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’m building &lt;a href="https://github.com/vibhanshu2001/memcloud" rel="noopener noreferrer"&gt;MemCloud&lt;/a&gt; in Rust because I believe this is where systems are heading. We're moving away from monolithic giants toward collaborative, peer-to-peer swarms.&lt;/p&gt;

&lt;p&gt;If you've ever stared at an "Out of Memory" crash while surrounded by idle computers, you know why this matters.&lt;/p&gt;




&lt;h3&gt;
  
  
  Give &lt;a href="https://memcloud.vercel.app/" rel="noopener noreferrer"&gt;MemCloud&lt;/a&gt; a spin
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  📖 &lt;strong&gt;Read the docs&lt;/strong&gt;: &lt;a href="https://memcloud.vercel.app/docs/cli" rel="noopener noreferrer"&gt;memcloud.vercel.app/docs/cli&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  💻 &lt;strong&gt;Browse the code&lt;/strong&gt;: &lt;a href="https://github.com/vibhanshu2001/memcloud" rel="noopener noreferrer"&gt;github.com/vibhanshu2001/memcloud&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;I'd love to hear if this solves a real headache for you. Let's discuss in the comments!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>memcloud</category>
      <category>ai</category>
      <category>performance</category>
      <category>distributedsystems</category>
    </item>
    <item>
      <title>🔒 Inside MemCloud’s Secure Peer Authentication: How Devices Safely Share RAM Over LAN</title>
      <dc:creator>Vibhanshu Garg</dc:creator>
      <pubDate>Thu, 11 Dec 2025 16:38:34 +0000</pubDate>
      <link>https://dev.to/vibhanshu_garg_01741359bc/inside-memclouds-secure-peer-authentication-how-devices-safely-share-ram-over-lan-f1a</link>
      <guid>https://dev.to/vibhanshu_garg_01741359bc/inside-memclouds-secure-peer-authentication-how-devices-safely-share-ram-over-lan-f1a</guid>
      <description>&lt;p&gt;When I released &lt;strong&gt;MemCloud&lt;/strong&gt; (a distributed RAM engine for macOS &amp;amp; Linux), the biggest question I got was:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Isn’t letting other devices store your RAM risky? How is it secured?”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So here’s a &lt;strong&gt;deep-dive into the authentication, encryption, and trust model&lt;/strong&gt; behind MemCloud — written for engineers who love protocols, threat models, and cryptographic design.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This post focuses purely on the Security &amp;amp; Authentication layer.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
(If you're new to MemCloud, see the intro blog — this assumes familiarity.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpvna8cl5191t82dn68m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpvna8cl5191t82dn68m.png" alt=" " width="628" height="881"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h1&gt;
  
  
  🧩 Threat Model
&lt;/h1&gt;

&lt;p&gt;Before designing the protocol, I outlined real-world LAN threats:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Threat&lt;/th&gt;
&lt;th&gt;Example&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Impersonation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Rogue device pretends to be a trusted peer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MITM Attack&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Attacker intercepts or mutates handshake traffic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Replay Attack&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Reusing old handshake messages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Unauthorized Memory Access&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Device silently joins the cluster&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Session Hijacking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Stealing or predicting traffic keys&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;MemCloud’s protocol addresses all of them.&lt;/p&gt;


&lt;h1&gt;
  
  
  🛂 1. Persistent Identity Keys (Ed25519)
&lt;/h1&gt;

&lt;p&gt;Every device has a long-term identity keypair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.memcloud/identity_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Ed25519 (fast + secure)
&lt;/li&gt;
&lt;li&gt;Only used to sign handshake transcripts
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never&lt;/strong&gt; used for encryption
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This behaves like a device certificate without the complexity of PKI.&lt;/p&gt;




&lt;h1&gt;
  
  
  🔐 2. Noise-Style Handshake (XX Pattern)
&lt;/h1&gt;

&lt;p&gt;MemCloud uses an authentication flow inspired by the &lt;strong&gt;Noise Protocol Framework&lt;/strong&gt; — specifically &lt;strong&gt;Noise_XX&lt;/strong&gt;, chosen because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both sides begin &lt;em&gt;unauthenticated&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Supports Trust-On-First-Use (TOFU)&lt;/li&gt;
&lt;li&gt;Offers mutual authentication&lt;/li&gt;
&lt;li&gt;Ensures forward secrecy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simplified handshake:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;A → B : eA, nonceA
B → A : eB, nonceB
A → B : identity proof (encrypted)
B → A : identity proof (encrypted)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where &lt;code&gt;eA&lt;/code&gt; and &lt;code&gt;eB&lt;/code&gt; are ephemeral X25519 public keys.&lt;/p&gt;




&lt;h1&gt;
  
  
  📜 3. Transcript Hashing
&lt;/h1&gt;

&lt;p&gt;Every handshake message is hashed into a transcript:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H = Hash(H || message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
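&lt;p&gt;The rolling transcript update is easy to sketch with a standard hash. This is illustrative only: the hash function (SHA-256 here), the all-zero initial state, and the example messages are assumptions, not necessarily MemCloud's exact construction.&lt;/p&gt;

```python
import hashlib

def update_transcript(h, message):
    # H = Hash(H || message): fold each handshake message, in order,
    # into the running transcript state.
    return hashlib.sha256(h + message).digest()

# Both peers start from the same fixed state (all zeros here, an
# assumption) and must converge on identical transcripts before any
# keys are derived; a mismatch aborts the handshake.
h = bytes(32)
for msg in [b"eA||nonceA", b"eB||nonceB", b"identity proof A"]:
    h = update_transcript(h, msg)
```

&lt;p&gt;Because each message is folded in sequence, replaying an old message or reordering the flow produces a different transcript on one side, so the peers never agree on a state.&lt;/p&gt;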



&lt;p&gt;This prevents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replay attacks
&lt;/li&gt;
&lt;li&gt;Downgrade attacks
&lt;/li&gt;
&lt;li&gt;Cross-session confusion
&lt;/li&gt;
&lt;li&gt;MITM tampering
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The transcript is later fed into key derivation.&lt;/p&gt;




&lt;h1&gt;
  
  
  🧾 4. Encrypted Identity Proofs
&lt;/h1&gt;

&lt;p&gt;Once ephemeral keys are exchanged, both sides compute:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;shared_secret = DH(eA, eB)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each side then derives an encryption key from this shared secret and uses it to send:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;signature = Sign(TranscriptHash, IdentityKey)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This signature:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;is &lt;strong&gt;encrypted&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;is &lt;strong&gt;bound to the transcript&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;cannot be replayed&lt;/li&gt;
&lt;li&gt;proves identity with forward secrecy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If signature verification fails, the connection is closed immediately.&lt;/p&gt;




&lt;h1&gt;
  
  
  🔑 5. Session Key Derivation (HKDF)
&lt;/h1&gt;

&lt;p&gt;Final traffic keys are derived using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;session_key = HKDF(shared_secret + transcript_hash)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Properties:&lt;/p&gt;

&lt;p&gt;✔ Unique keys per session&lt;br&gt;&lt;br&gt;
✔ Handshake keys ≠ traffic keys&lt;br&gt;&lt;br&gt;
✔ Forward secrecy (ephemeral DH)&lt;br&gt;&lt;br&gt;
✔ Resistant to key compromise  &lt;/p&gt;
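&lt;p&gt;A minimal sketch of that derivation, using the extract-and-expand pattern from RFC 5869 over the shared secret and transcript hash. This is a simplification, not MemCloud's exact construction: the choice of the transcript hash as the HKDF salt and the omitted &lt;code&gt;info&lt;/code&gt; field are assumptions.&lt;/p&gt;

```python
import hashlib
import hmac

def derive_session_key(shared_secret, transcript_hash, length=32):
    # Extract: bind the ephemeral DH secret to the handshake transcript,
    # so tampering with any handshake message changes the traffic keys.
    # (Using the transcript hash as the HKDF salt is an assumption.)
    prk = hmac.new(transcript_hash, shared_secret, hashlib.sha256).digest()
    # Expand: stretch the pseudorandom key into 'length' bytes of
    # traffic key material, one HMAC block at a time (RFC 5869 shape,
    # with the optional 'info' field omitted for brevity).
    okm, block = b"", b""
    blocks = -(-length // hashlib.sha256().digest_size)  # ceiling division
    for counter in range(1, blocks + 1):
        block = hmac.new(prk, block + bytes([counter]), hashlib.sha256).digest()
        okm += block
    return okm[:length]
```

&lt;p&gt;Because the transcript hash feeds the derivation, two sessions that differ by even one handshake byte end up with unrelated traffic keys, and the ephemeral DH input is what provides forward secrecy.&lt;/p&gt;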

&lt;p&gt;Traffic encryption uses:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;ChaCha20-Poly1305 AEAD&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Perfect for high-speed LAN communication.&lt;/p&gt;




&lt;h1&gt;
  
  
  👁 6. Trust-On-First-Use (TOFU)
&lt;/h1&gt;

&lt;p&gt;Requesting device: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffx95h53j85tyfiar4mel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffx95h53j85tyfiar4mel.png" alt="Requesting device flow of consent" width="638" height="666"&gt;&lt;/a&gt;&lt;br&gt;
On first contact, the user must explicitly approve the peer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3ess3wlub8yu488mbfj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3ess3wlub8yu488mbfj.jpeg" alt="Peer device authenticating the request access" width="787" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Trusted peers are stored in:&lt;br&gt;
&lt;code&gt;~/.memcloud/trusted_devices.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Future connections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still cryptographically authenticated
&lt;/li&gt;
&lt;li&gt;Skip interactive approval
&lt;/li&gt;
&lt;li&gt;Cannot be silently spoofed or replaced
&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  🚫 What Happens When Auth Fails?
&lt;/h1&gt;

&lt;p&gt;MemCloud rejects the session if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;identity signature fails
&lt;/li&gt;
&lt;li&gt;transcript mismatch occurs
&lt;/li&gt;
&lt;li&gt;handshake is malformed
&lt;/li&gt;
&lt;li&gt;device is untrusted
&lt;/li&gt;
&lt;li&gt;the consent layer blocks authorization
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rejected peers &lt;strong&gt;cannot&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read your RAM
&lt;/li&gt;
&lt;li&gt;receive block data
&lt;/li&gt;
&lt;li&gt;join the cluster
&lt;/li&gt;
&lt;li&gt;impersonate a previous peer
&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  🔍 Why Not TLS?
&lt;/h1&gt;

&lt;p&gt;TLS is powerful, but not ideal for a P2P LAN memory engine.&lt;/p&gt;

&lt;h3&gt;
  
  
  ❌ Requires PKI or certificate distribution
&lt;/h3&gt;

&lt;p&gt;MemCloud is designed to be zero-config.&lt;/p&gt;

&lt;h3&gt;
  
  
  ❌ Not optimized for TOFU
&lt;/h3&gt;

&lt;p&gt;Noise is.&lt;/p&gt;

&lt;h3&gt;
  
  
  ❌ More overhead for LAN P2P
&lt;/h3&gt;

&lt;p&gt;MemCloud aims for &lt;strong&gt;sub-10ms latency&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✔ Noise-style handshake advantages:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Mutual authentication&lt;/li&gt;
&lt;li&gt;Identity hiding&lt;/li&gt;
&lt;li&gt;Forward secrecy&lt;/li&gt;
&lt;li&gt;TOFU support&lt;/li&gt;
&lt;li&gt;Lightweight binary protocol&lt;/li&gt;
&lt;li&gt;Perfect for peer-to-peer systems&lt;/li&gt;
&lt;/ul&gt;




&lt;h1&gt;
  
  
  🧪 Fuzzing &amp;amp; Attack Testing
&lt;/h1&gt;

&lt;p&gt;I tested the handshake against:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attack Attempt&lt;/th&gt;
&lt;th&gt;Result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Replay&lt;/td&gt;
&lt;td&gt;Blocked via transcript mismatch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MITM&lt;/td&gt;
&lt;td&gt;Blocked (identity proof mismatch)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Impersonation&lt;/td&gt;
&lt;td&gt;Blocked (signature invalid)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Downgrade attempt&lt;/td&gt;
&lt;td&gt;Impossible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Payload tampering&lt;/td&gt;
&lt;td&gt;Blocked (MAC failure)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So far the protocol has held up extremely well in real-world LAN conditions.&lt;/p&gt;




&lt;h1&gt;
  
  
  🛠 Security Roadmap
&lt;/h1&gt;

&lt;p&gt;Planned improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔄 Trust revocation broadcasting
&lt;/li&gt;
&lt;li&gt;🖥 GUI trust manager
&lt;/li&gt;
&lt;li&gt;🛡 Optional hardware-backed identity keys
&lt;/li&gt;
&lt;li&gt;🔁 Session resumption
&lt;/li&gt;
&lt;li&gt;📦 Encrypted replication across peers
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Contributors welcome.&lt;/p&gt;




&lt;h1&gt;
  
  
  📦 Source Code
&lt;/h1&gt;

&lt;p&gt;👉 &lt;strong&gt;Authentication Code:&lt;/strong&gt; &lt;code&gt;/memnode/src/net/auth&lt;/code&gt;&lt;br&gt;&lt;br&gt;
👉 &lt;strong&gt;Main Repo:&lt;/strong&gt; &lt;a href="https://github.com/vibhanshu2001/memcloud" rel="noopener noreferrer"&gt;https://github.com/vibhanshu2001/memcloud&lt;/a&gt;&lt;br&gt;&lt;br&gt;
👉 &lt;strong&gt;Documentation:&lt;/strong&gt; &lt;a href="https://memcloud.vercel.app/docs/cli" rel="noopener noreferrer"&gt;https://memcloud.vercel.app/docs/cli&lt;/a&gt;&lt;br&gt;
👉 &lt;strong&gt;CLI Trust Manager:&lt;/strong&gt; &lt;code&gt;memcli trust&lt;/code&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  ❤️ Final Thoughts
&lt;/h1&gt;

&lt;p&gt;MemCloud may look simple on the surface:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Devices sharing RAM over LAN.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But underneath is a &lt;strong&gt;carefully engineered secure protocol&lt;/strong&gt; designed to make that actually safe.&lt;/p&gt;

&lt;p&gt;If you'd like a follow-up on any of these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;P2P binary protocol
&lt;/li&gt;
&lt;li&gt;memory isolation model
&lt;/li&gt;
&lt;li&gt;quota enforcement
&lt;/li&gt;
&lt;li&gt;zero-copy streaming design
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just let me know!&lt;/p&gt;

</description>
      <category>memcloud</category>
      <category>rampooling</category>
      <category>programming</category>
      <category>rust</category>
    </item>
    <item>
      <title>🚀 Introducing MemCloud — Pool Unused RAM Across Machines on Your LAN (Rust, Zero-Config)</title>
      <dc:creator>Vibhanshu Garg</dc:creator>
      <pubDate>Sun, 07 Dec 2025 20:05:35 +0000</pubDate>
      <link>https://dev.to/vibhanshu_garg_01741359bc/introducing-memcloud-pool-unused-ram-across-machines-on-your-lan-rust-zero-config-5epn</link>
      <guid>https://dev.to/vibhanshu_garg_01741359bc/introducing-memcloud-pool-unused-ram-across-machines-on-your-lan-rust-zero-config-5epn</guid>
      <description>&lt;p&gt;Hey DEV community! 👋  &lt;/p&gt;

&lt;p&gt;I’ve been working on a side project that turned into something surprisingly useful, fun, and very &lt;em&gt;Rust-y&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Meet &lt;a href="https://github.com/vibhanshu2001/memcloud" rel="noopener noreferrer"&gt;&lt;em&gt;MemCloud&lt;/em&gt;&lt;/a&gt; — a distributed in-memory data store that lets multiple machines on your LAN pool their RAM into a single ephemeral “memory cloud.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6c7zspqf0eh0zn0kcm4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv6c7zspqf0eh0zn0kcm4.png" alt="MemCloud Diagram" width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Use case:&lt;/strong&gt; Have a Mac, a Linux machine, and a spare mini-PC sitting around?&lt;br&gt;&lt;br&gt;
MemCloud turns them into &lt;em&gt;one big RAM cache&lt;/em&gt; — automatically.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  💡 Why I Built This
&lt;/h2&gt;

&lt;p&gt;I often run ML experiments, dev servers, and log processors that overflow RAM on one machine while another machine sits idle right next to it.&lt;/p&gt;

&lt;p&gt;I wanted a tool that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;works offline
&lt;/li&gt;
&lt;li&gt;runs locally
&lt;/li&gt;
&lt;li&gt;requires zero configuration
&lt;/li&gt;
&lt;li&gt;discovers peers automatically
&lt;/li&gt;
&lt;li&gt;lets me store/load data across devices in milliseconds
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I built &lt;strong&gt;MemCloud&lt;/strong&gt;, a tiny Rust daemon + CLI + SDKs that create a peer-to-peer RAM mesh on your LAN.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚡️ Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🕸️ P2P RAM Pooling
&lt;/h3&gt;

&lt;p&gt;Every memnode contributes its RAM to the cluster.&lt;br&gt;&lt;br&gt;
A write on Machine A can be read from Machine B in under &lt;strong&gt;10ms&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔍 Zero-Config Discovery (mDNS)
&lt;/h3&gt;

&lt;p&gt;Just start the daemon — peers auto-discover each other.&lt;br&gt;&lt;br&gt;
No IPs, no ports, no YAML files, no Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  🌐 Works Fully Offline
&lt;/h3&gt;

&lt;p&gt;No cloud. No accounts. No central server.&lt;/p&gt;

&lt;h3&gt;
  
  
  ⛓️ CLI + SDKs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;memcli&lt;/code&gt; for terminal workflows
&lt;/li&gt;
&lt;li&gt;Rust SDK for systems work
&lt;/li&gt;
&lt;li&gt;TypeScript SDK for JS/Node devs
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔑 Two Storage Modes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Block Store:&lt;/strong&gt; raw bytes &amp;amp; streams
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key-Value Store:&lt;/strong&gt; Redis-style &lt;code&gt;set&lt;/code&gt; / &lt;code&gt;get&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧱 Architecture
&lt;/h2&gt;

&lt;p&gt;You can view the architecture diagrams here:&lt;br&gt;&lt;br&gt;
➡️ &lt;a href="https://github.com/vibhanshu2001/memcloud/blob/main/ARCHITECTURE.md" rel="noopener noreferrer"&gt;https://github.com/vibhanshu2001/memcloud/blob/main/ARCHITECTURE.md&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Each node runs a small daemon (&lt;code&gt;memnode&lt;/code&gt;).&lt;br&gt;&lt;br&gt;
SDKs and CLI talk only to the local daemon.&lt;br&gt;&lt;br&gt;
The daemon handles routing and storage across peers.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  ⚙️ Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Quick Install (macOS &amp;amp; Linux)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://raw.githubusercontent.com/vibhanshu2001/memcloud/main/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🛠️ Build from Source
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/vibhanshu2001/memcloud.git
&lt;span class="nb"&gt;cd &lt;/span&gt;memcloud
cargo build &lt;span class="nt"&gt;--release&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  👨‍💻 Open Source
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/vibhanshu2001/memcloud" rel="noopener noreferrer"&gt;https://github.com/vibhanshu2001/memcloud&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Docs:&lt;/strong&gt; &lt;a href="https://memcloud.vercel.app/" rel="noopener noreferrer"&gt;https://memcloud.vercel.app/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;NPM:&lt;/strong&gt; &lt;a href="https://www.npmjs.com/package/memcloud" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/memcloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feurxldinfg9osn9izaur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feurxldinfg9osn9izaur.png" alt=" " width="800" height="902"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I’d love feedback on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;performance ideas
&lt;/li&gt;
&lt;li&gt;networking improvements
&lt;/li&gt;
&lt;li&gt;memory/eviction strategies
&lt;/li&gt;
&lt;li&gt;real-world use cases
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading!&lt;br&gt;&lt;br&gt;
&lt;strong&gt;— Vibhanshu&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>distributedsystems</category>
      <category>opensource</category>
      <category>networking</category>
      <category>memcloud</category>
    </item>
  </channel>
</rss>
