<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Haven Messenger</title>
    <description>The latest articles on DEV Community by Haven Messenger (@havenmessenger).</description>
    <link>https://dev.to/havenmessenger</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3891528%2F1736c7dd-a6a7-443a-863c-0abb7d56e358.png</url>
      <title>DEV Community: Haven Messenger</title>
      <link>https://dev.to/havenmessenger</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/havenmessenger"/>
    <language>en</language>
    <item>
      <title>The Signal Double Ratchet Algorithm, Explained</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Fri, 15 May 2026 08:20:45 +0000</pubDate>
      <link>https://dev.to/havenmessenger/the-signal-double-ratchet-algorithm-explained-43n9</link>
      <guid>https://dev.to/havenmessenger/the-signal-double-ratchet-algorithm-explained-43n9</guid>
      <description>&lt;p&gt;The Double Ratchet is the algorithm that powers Signal, WhatsApp, Matrix's Olm, and most modern 1:1 encrypted messaging. It does something unusual: it gives you forward secrecy and post-compromise security at the same time. Here is how it actually works.&lt;/p&gt;

&lt;p&gt;When most people hear "end-to-end encrypted," they imagine a single shared key that both parties use to encrypt and decrypt messages. That model exists — it is what PGP does for email — but it has a serious limitation: if the key is ever exposed, every message encrypted with it is exposed too, in both directions, forever.&lt;/p&gt;

&lt;p&gt;The Double Ratchet, designed by Trevor Perrin and Moxie Marlinspike around 2013, solves this by ensuring that &lt;strong&gt;every single message uses a different encryption key&lt;/strong&gt;, and that those keys cannot be reconstructed from each other in either direction. The result is a protocol where compromising a message key exposes at most that one message — and where even a deeper compromise heals automatically once both parties exchange anything new.&lt;/p&gt;

&lt;h2&gt;
  
  
  Forward Secrecy and Post-Compromise Security
&lt;/h2&gt;

&lt;p&gt;These two properties are easy to confuse. They protect against opposite directions on the timeline.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Forward secrecy&lt;/strong&gt; means a key compromise today cannot decrypt messages from yesterday. Past traffic stays safe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-compromise security&lt;/strong&gt; (sometimes called future secrecy or self-healing) means a key compromise today cannot decrypt messages from tomorrow, once new key material flows. Future traffic recovers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most protocols give you one. TLS 1.3 with ephemeral Diffie-Hellman gives forward secrecy for completed sessions, but an attacker who steals your long-term key can impersonate you in future sessions. PGP gives you neither — the same private key opens every message you have ever received.&lt;/p&gt;

&lt;p&gt;The Double Ratchet gives both. The trick is to use two interlocking ratchets running in parallel.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Symmetric Ratchet: New Key Every Message
&lt;/h2&gt;

&lt;p&gt;The simpler of the two ratchets is a chain of symmetric key derivations. Both parties start with a shared &lt;strong&gt;chain key&lt;/strong&gt;. To encrypt a message, the sender feeds the chain key into a KDF — typically HKDF — to produce two outputs: a fresh message key for this message, and a new chain key for the next.&lt;/p&gt;

&lt;p&gt;The crucial property is that the derivation is one-way: given a chain key, you can produce all future message keys, but given a message key, you cannot work backwards to recover the chain key. So if an attacker steals your message key for message #42, they can decrypt that one message and nothing else. Once a message key is used, your device deletes it.&lt;/p&gt;
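
&lt;p&gt;To make the shape concrete, here is a minimal sketch of one symmetric ratchet step in Python. The constants &lt;code&gt;0x01&lt;/code&gt; and &lt;code&gt;0x02&lt;/code&gt; follow the convention in the published Double Ratchet specification; the function name and the absence of state handling are simplifications for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hashlib
import hmac

def kdf_ck(chain_key: bytes) -&gt; tuple[bytes, bytes]:
    # One ratchet step: HMAC the chain key with two distinct constants
    # to get (a) the key for this message and (b) the next chain key.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because HMAC is one-way, holding &lt;code&gt;message_key&lt;/code&gt; reveals nothing about &lt;code&gt;chain_key&lt;/code&gt;, and deleting the old chain key after each step is what makes the history unrecoverable.&lt;/p&gt;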

&lt;blockquote&gt;
&lt;p&gt;The name "ratchet" comes from mechanics: a gear that turns one way and cannot turn back. Each symmetric derivation steps the chain forward irreversibly. The state moves; the history is unrecoverable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This alone gives forward secrecy: a key stolen at message #42 leaves messages #1 through #41 safe, because their keys were destroyed long ago.&lt;/p&gt;

&lt;p&gt;What it does &lt;em&gt;not&lt;/em&gt; give is post-compromise security. If an attacker steals the chain key itself, they can derive every future message key. The chain has no way to "heal."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Diffie-Hellman Ratchet: New Shared Secret Per Round Trip
&lt;/h2&gt;

&lt;p&gt;The second ratchet runs on top of the first and provides the healing. Every time a party sends a message, they include a fresh ephemeral Diffie-Hellman public key in the header. When the other party receives it, they combine it with their own latest DH private key to produce a fresh shared secret — and they mix that secret into their chain key.&lt;/p&gt;

&lt;p&gt;The next time they send a message, they generate a new DH key pair of their own, attach the public key, and the cycle repeats. Each round trip injects fresh randomness sampled on a different device into the key schedule.&lt;/p&gt;

&lt;p&gt;Consider what this means for an attacker who has stolen your current chain key. As soon as a new DH ratchet step happens — typically the next message in either direction — the chain key is replaced by a value derived from a fresh ephemeral private key the attacker does not have. Their access window closes. Subsequent messages are again unreadable to them.&lt;/p&gt;

&lt;p&gt;This is the post-compromise security guarantee. It does not require detection of the compromise or any action from the user. It happens automatically as a side effect of normal messaging.&lt;/p&gt;
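
&lt;p&gt;Here is a sketch of one DH ratchet step using the &lt;code&gt;cryptography&lt;/code&gt; library. Mixing the fresh shared secret into the root key via HKDF, with the old root key as salt, follows the published specification; the &lt;code&gt;info&lt;/code&gt; label below is an illustrative placeholder, not Signal's actual value:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def dh_ratchet_step(root_key: bytes, my_private: X25519PrivateKey,
                    their_public: X25519PublicKey) -&gt; tuple[bytes, bytes]:
    # Fresh DH output, mixed with the old root key, yields a new
    # root key and a new chain key (64 bytes of output, split in half).
    shared_secret = my_private.exchange(their_public)
    okm = HKDF(
        algorithm=hashes.SHA256(),
        length=64,
        salt=root_key,
        info=b"example-dh-ratchet",  # placeholder label
    ).derive(shared_secret)
    return okm[:32], okm[32:]  # (new root key, new chain key)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;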

&lt;h2&gt;
  
  
  Putting It Together: How a Single Message Flows
&lt;/h2&gt;

&lt;p&gt;To send a message in an established Double Ratchet session, the sender's code does roughly this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If this is the first message of a new "sending chain" (i.e. the previous message was inbound), generate a new DH key pair and derive a new root key + sending chain key from the latest DH shared secret.&lt;/li&gt;
&lt;li&gt;Derive a message key from the current sending chain key, advancing the chain.&lt;/li&gt;
&lt;li&gt;Encrypt the plaintext with the message key (in Signal's case, AES-256 in CBC mode with HMAC-SHA256 authentication; an AEAD mode such as AES-GCM is a common alternative).&lt;/li&gt;
&lt;li&gt;Send the ciphertext along with the current DH public key and a counter indicating message position in the chain.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The receiver does the inverse: if the incoming DH public key is new, run a DH ratchet step to derive a new receiving chain; then derive message keys forward in the chain until they reach the counter on the incoming message; decrypt; delete the used message key.&lt;/p&gt;
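
&lt;p&gt;Stitching the two sketches above together, the send path looks roughly like the following. The &lt;code&gt;state&lt;/code&gt; object and &lt;code&gt;encrypt()&lt;/code&gt; are hypothetical stand-ins for a real session store and an authenticated-encryption call:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def send_message(state, plaintext: bytes):
    # Step 1: starting a new sending chain? Do a DH ratchet step first.
    if state.sending_chain_key is None:
        state.my_dh = X25519PrivateKey.generate()
        state.root_key, state.sending_chain_key = dh_ratchet_step(
            state.root_key, state.my_dh, state.their_dh_public)
        state.send_counter = 0
    # Steps 2-3: advance the symmetric ratchet, encrypt with the fresh key.
    message_key, state.sending_chain_key = kdf_ck(state.sending_chain_key)
    ciphertext = encrypt(message_key, plaintext)  # hypothetical AEAD call
    # Step 4: the header carries our current DH public key and the counter.
    header = (state.my_dh.public_key(), state.send_counter)
    state.send_counter += 1
    return header, ciphertext
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;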

&lt;h2&gt;
  
  
  Out-of-Order and Lost Messages
&lt;/h2&gt;

&lt;p&gt;Real networks deliver messages out of order, drop them, or duplicate them. The Double Ratchet handles this with a small skipped-message keys cache. If message #42 arrives before message #41, the receiver derives keys for #41 and #42, decrypts #42, and stores #41's key in a bounded cache to wait for the delayed message.&lt;/p&gt;

&lt;p&gt;The cache has a hard size and time limit so that an attacker cannot force unbounded memory growth by simply never delivering certain messages. Signal's reference implementation uses a default of 1000 cached skipped keys, with old ones deleted as new ones arrive.&lt;/p&gt;
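
&lt;p&gt;A bounded cache of this kind is small enough to sketch directly. Keying by (ratchet public key, counter) follows the specification's skipped-keys dictionary; the oldest-first eviction here is one simple illustrative policy:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from collections import OrderedDict

MAX_SKIP = 1000  # cap on stored skipped-message keys

class SkippedKeyCache:
    def __init__(self, max_entries: int = MAX_SKIP):
        self.max_entries = max_entries
        self._keys: OrderedDict = OrderedDict()

    def put(self, ratchet_pub: bytes, counter: int, message_key: bytes) -&gt; None:
        # Evict the oldest entry once the cap is reached.
        if len(self._keys) &gt;= self.max_entries:
            self._keys.popitem(last=False)
        self._keys[(ratchet_pub, counter)] = message_key

    def take(self, ratchet_pub: bytes, counter: int):
        # Pop-on-use: the key is deleted the moment it decrypts a message.
        return self._keys.pop((ratchet_pub, counter), None)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;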

&lt;h2&gt;
  
  
  What the Double Ratchet Does Not Solve
&lt;/h2&gt;

&lt;p&gt;For all its elegance, the Double Ratchet is only one piece of a complete messaging protocol. It assumes you already have a shared initial root key with the other party — typically established by Signal's X3DH key agreement, which uses long-term identity keys and pre-published one-time keys to bootstrap a session even if the recipient is offline.&lt;/p&gt;

&lt;p&gt;It also does not handle groups. Group chats need a different construction, because pairwise Double Ratchets between every member scale poorly and don't agree on a shared state. This is the problem MLS (RFC 9420) was designed to solve.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;PGP&lt;/th&gt;
&lt;th&gt;TLS 1.3&lt;/th&gt;
&lt;th&gt;Double Ratchet&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Forward secrecy&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (per session)&lt;/td&gt;
&lt;td&gt;Yes (per message)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Post-compromise security&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Async (recipient offline)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes (with X3DH)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Group messaging&lt;/td&gt;
&lt;td&gt;Awkward&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Use MLS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Identity Verification Is Still Your Job
&lt;/h2&gt;

&lt;p&gt;The Double Ratchet protects you against passive eavesdroppers and against an attacker who steals key material at one point in time. It does not protect you against an active attacker who controls the channel from the beginning of the session and feeds each side a different identity key.&lt;/p&gt;

&lt;p&gt;This is why Signal, WhatsApp, and most modern messengers support safety number or fingerprint verification — comparing a hash of both parties' long-term identity keys through some out-of-band channel. TOFU (trust on first use) is the common default, but it only works if the first use was not already compromised.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Algorithm Won
&lt;/h2&gt;

&lt;p&gt;The Double Ratchet was not the first attempt to combine forward secrecy with key continuity. OTR (Off-the-Record) Messaging, from 2004, pioneered the idea of using ephemeral DH keys for forward secrecy in chat. But OTR was synchronous and didn't handle offline recipients well, which made it unsuitable for mobile messaging.&lt;/p&gt;

&lt;p&gt;The Double Ratchet's contribution was making the construction work asynchronously, with no requirement that both parties be online at once, and with bounded state. That made it practical for the mobile-first messaging era, which is why it ended up inside the most-used encrypted apps in the world.&lt;/p&gt;

&lt;p&gt;The specification is published openly. The Signal Protocol documentation contains the full algorithm with test vectors, and multiple independent implementations exist in Rust, Java, Swift, and C++. The combination of strong properties, public specification, and broad real-world deployment is a rare thing in cryptography. It is worth understanding even if you never implement it yourself, because the threat model it addresses is the one every modern messenger must claim to address.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/double-ratchet-algorithm-explained/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>HKDF: Turning One Secret Into Many, Correctly</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Wed, 13 May 2026 08:18:44 +0000</pubDate>
      <link>https://dev.to/havenmessenger/hkdf-turning-one-secret-into-many-correctly-dja</link>
      <guid>https://dev.to/havenmessenger/hkdf-turning-one-secret-into-many-correctly-dja</guid>
      <description>&lt;p&gt;A common task in applied cryptography looks deceptively simple: "I have a shared secret. I need two keys from it — one for encryption, one for authentication." The wrong way to solve this is to hash the secret and slice it in half. The right way is HKDF, and the reason it exists tells you something important about why amateur cryptography breaks.&lt;/p&gt;

&lt;p&gt;HKDF is the HMAC-based Key Derivation Function, specified in &lt;a href="https://datatracker.ietf.org/doc/html/rfc5869" rel="noopener noreferrer"&gt;RFC 5869&lt;/a&gt; by Hugo Krawczyk in 2010. It's the key derivation function used in TLS 1.3, in the Signal Protocol, and in Noise (IKEv2 uses a close precursor of the same extract-then-expand design), and it appears in roughly every modern protocol designed after about 2012. If you do anything with shared secrets in a cryptographic context, you almost certainly want HKDF.&lt;/p&gt;

&lt;p&gt;The function does one specific job: take some input keying material (which may have varying levels of entropy and structure), plus optional context information, and produce one or more independent-looking output keys of any requested length. Despite that being a narrow problem, the design choices in HKDF matter a lot.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem It Solves
&lt;/h2&gt;

&lt;p&gt;Suppose you've just completed a Diffie-Hellman key exchange. You have a shared secret — let's call it Z — that's the same on both ends. Z is 32 bytes long. Now you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A 32-byte AES key for encrypting messages&lt;/li&gt;
&lt;li&gt;A 32-byte HMAC key for authenticating messages&lt;/li&gt;
&lt;li&gt;A 16-byte IV seed for some specific cipher mode&lt;/li&gt;
&lt;li&gt;Possibly later: more keys for new sessions, key rotation, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What you don't want to do is use Z directly as your AES key. Why? Because Z is the output of a Diffie-Hellman operation, and DH outputs are &lt;em&gt;not uniformly random&lt;/em&gt;. The values live in a specific algebraic structure, and while they're computationally indistinguishable from random for adversaries who can't break DH, they may have statistical biases that real cryptographic operations rely on not existing.&lt;/p&gt;

&lt;p&gt;You also don't want to derive your two 32-byte keys by splitting &lt;code&gt;SHA256(Z)&lt;/code&gt; in half. That's the kind of thing that looks fine but is brittle in subtle ways — for instance, knowing one half doesn't directly reveal the other, but the construction has no formal security argument and only holds up if the hash is modeled as a random oracle.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The actual goal:&lt;/strong&gt; Given an input that may have any amount of usable entropy (concentrated or spread out), produce arbitrary numbers of independent-looking keys whose security is reducible to the input's underlying entropy. The reduction needs to be tight, and it needs to hold across many output keys.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Extract-Then-Expand
&lt;/h2&gt;

&lt;p&gt;HKDF achieves this in two phases. The separation is the conceptual core of the design.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Extract
&lt;/h3&gt;

&lt;p&gt;The Extract step compresses the input keying material into a fixed-size, uniformly-random-looking value called a &lt;strong&gt;pseudorandom key&lt;/strong&gt; (PRK). It uses HMAC for this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PRK = HMAC-Hash(salt, IKM)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where IKM is the input keying material (your DH output Z, for example) and &lt;code&gt;salt&lt;/code&gt; is a non-secret value that helps remove structural biases. The salt is optional in HKDF; if you don't supply one, an all-zeros string of hash-length is used.&lt;/p&gt;

&lt;p&gt;The cryptographic argument for Extract is what makes HKDF rigorous. It assumes HMAC behaves like a "computational extractor" — given an input with sufficient entropy, the output is computationally indistinguishable from a uniform random string of hash-output length. This is a weaker, more plausible, and better-studied assumption than treating SHA-256 as a full random oracle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: Expand
&lt;/h3&gt;

&lt;p&gt;Once you have a uniform-looking PRK, the Expand step generates output keys of any length you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;T(0) = empty string
T(1) = HMAC-Hash(PRK, T(0) | info | 0x01)
T(2) = HMAC-Hash(PRK, T(1) | info | 0x02)
T(3) = HMAC-Hash(PRK, T(2) | info | 0x03)
...
OKM  = T(1) | T(2) | T(3) | ... truncated to L bytes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where &lt;code&gt;info&lt;/code&gt; is an optional context string that binds the output to a specific purpose. The output length L can be up to 255 times the hash output length (8,160 bytes for SHA-256).&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;info&lt;/code&gt; parameter is how you derive multiple independent keys from the same shared secret. By using different info strings — for example, &lt;code&gt;"encrypt"&lt;/code&gt; for the AES key and &lt;code&gt;"auth"&lt;/code&gt; for the HMAC key — you get outputs that are computationally independent. An attacker who somehow learns one derived key gains no information about keys derived under different info strings.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Concrete Example
&lt;/h2&gt;

&lt;p&gt;Using HKDF-SHA256 in Python with the &lt;code&gt;cryptography&lt;/code&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cryptography.hazmat.primitives&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hashes&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;cryptography.hazmat.primitives.kdf.hkdf&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;HKDF&lt;/span&gt;

&lt;span class="n"&gt;shared_secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# 32 bytes from DH
&lt;/span&gt;&lt;span class="n"&gt;salt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;specific-session-salt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;encrypt_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HKDF&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;algorithm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;hashes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SHA256&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;salt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;haven-session-v1-encrypt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;derive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;shared_secret&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;auth_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;HKDF&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;algorithm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;hashes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;SHA256&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;salt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;haven-session-v1-auth&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;derive&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;shared_secret&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two HKDF calls with different &lt;code&gt;info&lt;/code&gt; strings produce two independent-looking keys. Note that this is conceptually equivalent to one Extract followed by two Expands; libraries often expose the combined API for convenience.&lt;/p&gt;

&lt;h2&gt;
  
  
  What HKDF Is Not
&lt;/h2&gt;

&lt;p&gt;HKDF is frequently confused with password-based key derivation functions like PBKDF2, scrypt, and Argon2. They look superficially similar but solve different problems.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;th&gt;Designed For&lt;/th&gt;
&lt;th&gt;Slow on Purpose?&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;HKDF&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deriving keys from &lt;em&gt;cryptographic&lt;/em&gt; input (DH outputs, hash outputs, other uniform-ish material)&lt;/td&gt;
&lt;td&gt;No — fast by design&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;PBKDF2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deriving keys from &lt;em&gt;passwords&lt;/em&gt; (low-entropy human input)&lt;/td&gt;
&lt;td&gt;Yes — iteration count slows attacks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;scrypt&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Same as PBKDF2, additionally memory-hard&lt;/td&gt;
&lt;td&gt;Yes — slow + memory-expensive&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Argon2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Modern password KDF — winner of 2015 Password Hashing Competition&lt;/td&gt;
&lt;td&gt;Yes — tunable time and memory cost&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The mistake to avoid: using HKDF to derive keys from passwords. HKDF is fast — which means an attacker who steals your stored "key derivation" output can brute-force the password at full speed. For passwords, you need a slow KDF (Argon2 or scrypt). For cryptographic key material that already has high entropy, you want a fast KDF (HKDF), because slowing it down provides no security benefit and adds latency to every operation.&lt;/p&gt;

&lt;p&gt;It's also legitimate to chain them: use Argon2 to convert a password into a high-entropy key, then use HKDF to derive multiple sub-keys from that result.&lt;/p&gt;
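
&lt;p&gt;A sketch of that chaining using the &lt;code&gt;argon2-cffi&lt;/code&gt; and &lt;code&gt;cryptography&lt;/code&gt; packages. The cost parameters and info labels are illustrative, not recommendations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

from argon2.low_level import Type, hash_secret_raw
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

password = b"correct horse battery staple"
salt = os.urandom(16)

# Slow, memory-hard step: password -&gt; 32 bytes of high-entropy key material.
master_key = hash_secret_raw(
    secret=password, salt=salt,
    time_cost=3, memory_cost=64 * 1024, parallelism=4,
    hash_len=32, type=Type.ID,
)

# Fast step: derive independent sub-keys from the Argon2 output.
def subkey(purpose: bytes) -&gt; bytes:
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=purpose,
    ).derive(master_key)

encrypt_key = subkey(b"example-v1-encrypt")
auth_key = subkey(b"example-v1-auth")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;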

&lt;h2&gt;
  
  
  Common Pitfalls
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Forgetting the info parameter
&lt;/h3&gt;

&lt;p&gt;The most common HKDF mistake is calling it with an empty &lt;code&gt;info&lt;/code&gt; string and deriving multiple keys by changing only the output length or the salt. This works but couples your security to operational discipline you may not have. Use distinct, structured info strings — something like &lt;code&gt;"protocol-name v1 purpose"&lt;/code&gt; — and the derived keys are computationally independent by construction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Salt confusion
&lt;/h3&gt;

&lt;p&gt;Salt in HKDF is not the same as salt in password hashing. In password hashing, the salt is critical for breaking precomputation attacks. In HKDF, the salt is for entropy extraction — it helps when the input keying material has structural biases. A constant salt is fine if your IKM is already high-entropy; a random per-session salt is appropriate if it isn't.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confusing IKM and the PRK
&lt;/h3&gt;

&lt;p&gt;Some APIs let you call Expand directly, skipping Extract. This is correct &lt;em&gt;only&lt;/em&gt; if your input is already a uniformly-random key (e.g., the output of a previous HKDF, or a value from a CSPRNG). It is wrong if your input is a DH output or other structured cryptographic material — in that case, you need Extract first.&lt;/p&gt;
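
&lt;p&gt;The &lt;code&gt;cryptography&lt;/code&gt; library exposes the Expand-only path as &lt;code&gt;HKDFExpand&lt;/code&gt;. A short sketch of when it is and isn't appropriate (the key material and labels are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDFExpand

# Fine: the input is already uniform (fresh CSPRNG output).
prk = os.urandom(32)
traffic_key = HKDFExpand(
    algorithm=hashes.SHA256(), length=32, info=b"example-traffic",
).derive(prk)

# Not fine: a raw DH output is structured, not uniform.
#   dh_output = private_key.exchange(peer_public_key)
#   HKDFExpand(...).derive(dh_output)   # skips Extract -- don't do this
# Run the full HKDF (Extract + Expand) on DH outputs instead.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;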

&lt;h3&gt;
  
  
  Using the wrong hash
&lt;/h3&gt;

&lt;p&gt;HKDF can use any HMAC-compatible hash. SHA-256 is the most common choice. SHA-384 and SHA-512 are appropriate if you need longer output. SHA-1 still works mathematically but signals that the rest of the system is also outdated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where HKDF Sits in Real Protocols
&lt;/h2&gt;

&lt;p&gt;In TLS 1.3, HKDF replaces the ad-hoc key derivation of TLS 1.2 with a clean, formally-analyzed structure. Every TLS 1.3 session derives a hierarchy of secrets via HKDF: master secrets, traffic secrets, exporter secrets, and so on. Each is derived with a specific &lt;code&gt;info&lt;/code&gt; label that documents its purpose.&lt;/p&gt;

&lt;p&gt;In the Signal Protocol's Double Ratchet, HKDF is used at every step: deriving new chain keys, deriving new message keys, deriving root keys from DH outputs. The protocol's forward secrecy properties depend on HKDF producing genuinely independent keys at each ratchet step.&lt;/p&gt;

&lt;p&gt;In the MLS protocol, HKDF underpins the entire tree-based key schedule. The cryptographic safety arguments for MLS group operations route through HKDF's properties.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The general lesson: any time you find yourself thinking "I have a secret, I want to turn it into other secrets," that's a key derivation function. There is exactly one correct answer to that problem for high-entropy input, and HKDF is its name.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/hkdf-key-derivation-explained/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>EU Chat Control: What Client-Side Scanning Actually Means for Encryption</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Tue, 12 May 2026 08:19:47 +0000</pubDate>
      <link>https://dev.to/havenmessenger/eu-chat-control-what-client-side-scanning-actually-means-for-encryption-1748</link>
      <guid>https://dev.to/havenmessenger/eu-chat-control-what-client-side-scanning-actually-means-for-encryption-1748</guid>
      <description>&lt;p&gt;The EU's proposed Chat Control regulation would require messaging providers to scan your messages for illegal content before encryption, on your device. Proponents say it doesn't break end-to-end encryption. Every cryptographer who has studied the proposal disagrees. Here's why, and what it would actually require in practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  What End-to-End Encryption Actually Guarantees
&lt;/h2&gt;

&lt;p&gt;End-to-end encryption means messages are encrypted on the sender's device and can only be decrypted by the intended recipient(s). No intermediate server can read the plaintext. The encryption and decryption happen only at the endpoints: your device and theirs.&lt;/p&gt;

&lt;p&gt;This guarantee depends on exactly one thing: the plaintext is only ever visible on devices that hold the private decryption key. The moment plaintext is made available to any additional process — even one running locally on your device — that guarantee is weakened, because that additional process can send its findings to a third party.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The Core Technical Claim&lt;/strong&gt;: Chat Control's proponents argue that scanning on the device, before encryption, doesn't compromise E2EE because the encrypted message in transit is still unreadable. Cryptographers respond: if your device is required to run surveillance software on your messages before sending them, it doesn't matter that the message is encrypted afterward. The plaintext was already inspected.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How Client-Side Scanning Works
&lt;/h2&gt;

&lt;p&gt;There are three main technical approaches to client-side scanning (CSS) for CSAM detection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Known-image hash matching (PhotoDNA-style).&lt;/strong&gt; A database of known CSAM images is hashed using a robust perceptual hash function designed to survive resizing and re-encoding. When you share an image, the client computes its hash and compares it against the database. This approach only detects known material; novel images are never flagged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Perceptual hashing (NeuralHash-style).&lt;/strong&gt; Apple announced and then withdrew a system called CSAM Detection in 2021 that used neural hash matching. Security researchers demonstrated collisions within days: non-CSAM images that produced the same hash as flagged content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine learning classifiers.&lt;/strong&gt; A neural network model classifies images or text as likely to contain illegal content. This can detect novel material but has significant false positive rates that become severe at internet scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  The False Positive Problem at Scale
&lt;/h2&gt;

&lt;p&gt;Consider a classifier with a false positive rate of just 0.1% — incorrectly flagging an innocent message only 1 time in 1,000. Applied to a platform with 500 million daily active users sending an average of 10 messages per day, that produces 5 million false reports per day. The human review pipeline that would need to process those reports does not exist and cannot realistically be built.&lt;/p&gt;

&lt;p&gt;Either false positives are forwarded to law enforcement — catastrophic for the millions of innocents falsely flagged — or they're filtered by automated systems before human review, which means the oversight is automated rather than human. Neither outcome is acceptable, and the tension cannot be resolved by building better classifiers: because illegal content is a tiny fraction of all traffic, even very low error rates generate false positives that swamp the true ones. It's a consequence of operating at internet scale.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A system that must scan all private communications to find the small fraction that are illegal will inevitably surveil the overwhelming majority that are not. The architecture of mass surveillance and the architecture of targeted CSAM detection are, technically, the same thing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The Scope Expansion Problem
&lt;/h2&gt;

&lt;p&gt;Once the infrastructure for mandatory client-side scanning exists, its scope is determined by legislative amendment, not technical constraints. A scanning system built to detect CSAM hashes can be retargeted to flag any content whose hash is on an updated list.&lt;/p&gt;

&lt;p&gt;The EU's Chat Control proposal includes CSAM detection and, in its extended provisions, scanning for "grooming" — text-based detection of communication patterns. Text scanning is necessarily more context-dependent and error-prone than image hashing, and the definition of which text patterns constitute grooming is inherently political.&lt;/p&gt;

&lt;p&gt;The technical architecture does not distinguish between scanning for child abuse material and scanning for political dissent, journalist sources, or labor organizing. The distinction exists only in the current legal text — and legal text changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apple's Retreat and What It Means
&lt;/h2&gt;

&lt;p&gt;Apple announced its CSAM Detection system in August 2021. Within days, researchers had demonstrated hash collision attacks. The system was never deployed and was formally abandoned in December 2022.&lt;/p&gt;

&lt;p&gt;This matters for the EU debate because Apple had access to excellent cryptographic engineering talent and a genuine incentive to make the technology work — and they couldn't build a system that withstood scrutiny. The EU regulation does not specify a technical approach; it mandates the outcome and leaves implementation to service providers. This is not a solvable engineering problem dressed up as a policy question; it is an unsolved engineering problem being legislated as if it were solved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Legislative Status and Industry Response
&lt;/h2&gt;

&lt;p&gt;As of early 2026, Chat Control 2.0 has been stalled in the EU Council. Germany, Austria, and several other member states have indicated they will not support a mandatory scanning provision applying to encrypted communications. The European Parliament's LIBE committee voted against the proposal in 2023.&lt;/p&gt;

&lt;p&gt;Signal's president Meredith Whittaker stated in 2024 that Signal would cease operations in any EU jurisdiction where Chat Control became law rather than implement client-side scanning. Threema issued a similar statement.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Users Now
&lt;/h2&gt;

&lt;p&gt;Chat Control has not passed. No messaging app is currently required to implement client-side scanning under EU law.&lt;/p&gt;

&lt;p&gt;The longer-term implications matter for evaluating messaging platforms for sensitive communications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Has the service made a public commitment about how they would respond to mandatory scanning requirements?&lt;/li&gt;
&lt;li&gt;Is the client software open source, allowing independent verification that scanning is not occurring?&lt;/li&gt;
&lt;li&gt;Where is the service incorporated, and what legal jurisdiction governs its obligations?&lt;/li&gt;
&lt;li&gt;Does the service's threat model documentation address government compulsion?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The EU Chat Control debate is not isolated. Similar proposals have been advanced in the UK (the Online Safety Act), the US (the EARN IT Act), and Australia. The argument that "responsible encryption" can accommodate lawful access without compromising security for everyone is the same in each context. The cryptographic response is also the same: a backdoor for law enforcement is a backdoor for anyone who discovers it. The math does not change depending on who is asking for the key.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/eu-chat-control/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>Code Signing and Sigstore: How Software Supply Chain Integrity Works</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Mon, 11 May 2026 08:17:44 +0000</pubDate>
      <link>https://dev.to/havenmessenger/code-signing-and-sigstore-how-software-supply-chain-integrity-works-4pm9</link>
      <guid>https://dev.to/havenmessenger/code-signing-and-sigstore-how-software-supply-chain-integrity-works-4pm9</guid>
      <description>&lt;p&gt;The SolarWinds attack compromised roughly 18,000 organizations by inserting malicious code into a software update that was then cryptographically signed by SolarWinds' own build system. The signature was valid. The software was malicious. This is the supply chain problem: code signing proves the software came from a particular key, but it doesn't prove the software is what users think it is. Sigstore is an attempt to fix the architecture, not just the key management.&lt;/p&gt;

&lt;p&gt;Code signing has been a feature of software distribution for decades. Apple requires signed apps for distribution through the App Store and enforces notarization for macOS software outside it. Windows displays SmartScreen warnings for unsigned executables. Linux distributions cryptographically sign packages and verify signatures at install time. The mechanism is well-understood: a developer signs a hash of the software artifact with their private key; users verify the signature with the corresponding public key; a valid signature proves the artifact hasn't been modified since it was signed.&lt;/p&gt;

&lt;p&gt;What code signing doesn't prove is &lt;em&gt;when&lt;/em&gt; the signing happened, &lt;em&gt;under what conditions&lt;/em&gt; the signing key was used, or whether the signed artifact is the one that was built from the source code users can read. These gaps are where supply chain attacks live.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Traditional Code Signing Model and Its Limits
&lt;/h2&gt;

&lt;p&gt;Classical code signing relies on a private key held by the developer or organization. Security properties are only as good as the key management:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Key compromise.&lt;/strong&gt; If an attacker gains access to the signing key, they can sign arbitrary malicious software that will appear authentic. This happened in several high-profile cases — the Flame malware forged a Microsoft certificate by exploiting an MD5 collision vulnerability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build system compromise.&lt;/strong&gt; SolarWinds is the canonical example. The signing key was legitimate; the attacker compromised the build process upstream of signing, so valid signatures were produced for tampered artifacts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No transparency.&lt;/strong&gt; Traditional signing leaves no public audit trail. An attacker who silently abuses a compromised signing key leaves no record that the abuse occurred — defenders have no way to detect unexpected signing events.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key rotation complexity.&lt;/strong&gt; When a signing key must be rotated (due to expiration, compromise, or algorithm change), establishing trust in the new key requires distributing it through some trusted channel — which becomes a new attack surface.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;A valid cryptographic signature means the software was signed with the claimed key. It says nothing about whether that key was used legitimately, whether the signed code matches the published source, or when the signing occurred. Attackers who compromise a build pipeline inherit valid signing authority.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What Sigstore Is
&lt;/h2&gt;

&lt;p&gt;Sigstore is an open-source project (now under the OpenSSF, the Open Source Security Foundation) that rethinks the signing infrastructure rather than just the algorithms. It has three core components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cosign&lt;/strong&gt; — a tool for signing and verifying container images and other software artifacts. Integrates with CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fulcio&lt;/strong&gt; — a certificate authority that issues short-lived signing certificates tied to OpenID Connect (OIDC) identities. Instead of managing long-lived signing keys, developers authenticate via their GitHub, Google, or Microsoft identity; Fulcio issues a certificate valid for minutes, not years.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rekor&lt;/strong&gt; — an immutable, append-only transparency log (similar in concept to Certificate Transparency for TLS certificates) that records all signing events. Every signature is logged publicly; unexpected signatures on a project's artifacts are detectable by anyone watching the log.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When a developer signs with Sigstore:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;They authenticate to Fulcio using their OIDC identity (e.g., their GitHub account).&lt;/li&gt;
&lt;li&gt;Fulcio issues a short-lived certificate embedding their identity and expiring in minutes.&lt;/li&gt;
&lt;li&gt;Cosign signs the artifact with the short-lived key and submits the signature and certificate to Rekor.&lt;/li&gt;
&lt;li&gt;Rekor returns a signed inclusion proof — a cryptographic record that this entry is in the log.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The result: there are no long-lived private keys to steal or compromise. Each signing event is tied to a specific human identity (the OIDC identity) and recorded in a tamper-evident public log. If an attacker compromises a CI system and signs malicious artifacts, those signatures appear in Rekor — and a project's monitoring can detect unexpected signing events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transparency Logs: Learning from Certificate Transparency
&lt;/h2&gt;

&lt;p&gt;The conceptual model for Rekor comes from Certificate Transparency (CT), which major browsers began requiring for newly issued publicly-trusted TLS certificates in 2018. Before CT, certificate authorities could issue certificates for any domain without public record. This enabled attacks: a rogue CA could issue a certificate for google.com and use it for man-in-the-middle attacks, with no way for Google to detect it.&lt;/p&gt;

&lt;p&gt;CT requires that all publicly-trusted TLS certificates be logged in append-only, publicly auditable logs before browsers will accept them. Google, Cloudflare, and others operate CT logs. The result: certificate misissuance is now detectable. Domain owners can monitor CT logs for certificates issued to their domains.&lt;/p&gt;

&lt;p&gt;Rekor applies the same architecture to software artifact signing. The log is a Merkle tree: each entry contains a hash of the artifact, the signature, and the signing certificate. The signed tree head commits to every entry, so modifying any historical entry changes the root hash, which anyone holding an earlier inclusion or consistency proof can detect. The transparency property is structural, not policy-based.&lt;/p&gt;
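
&lt;p&gt;The client-side check that makes this work is compact enough to show. Here is a sketch of Merkle inclusion-proof verification in the style of RFC 6962/9162, which CT uses and Rekor builds on; the &lt;code&gt;0x00&lt;/code&gt;/&lt;code&gt;0x01&lt;/code&gt; domain-separation prefixes for leaves and interior nodes come from that specification:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hashlib

def leaf_hash(data: bytes) -&gt; bytes:
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -&gt; bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, index: int, tree_size: int,
                     path: list, root: bytes) -&gt; bool:
    # Recompute the root from the leaf and its audit path
    # (following RFC 9162, Section 2.1.3.2).
    if index &gt;= tree_size:
        return False
    fn, sn = index, tree_size - 1
    h = leaf_hash(leaf)
    for sibling in path:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            h = node_hash(sibling, h)
            if fn % 2 == 0:
                while fn % 2 == 0 and fn != 0:
                    fn &gt;&gt;= 1
                    sn &gt;&gt;= 1
        else:
            h = node_hash(h, sibling)
        fn &gt;&gt;= 1
        sn &gt;&gt;= 1
    return sn == 0 and h == root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Anyone holding the signed tree head can run this check; confirming that an entry is really in the log requires no trust in the log operator.&lt;/p&gt;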

&lt;h2&gt;
  
  
  GPG Signing for Git Commits: Different Problem, Complementary Tool
&lt;/h2&gt;

&lt;p&gt;Sigstore addresses artifact signing in CI/CD and package distribution. Git commit signing addresses a different problem: proving that a commit attributed to a person was actually made by them.&lt;/p&gt;

&lt;p&gt;When you GPG-sign a Git commit, the signature covers the commit content and metadata. Anyone who has your public key can verify you made the commit. GitHub, GitLab, and Gitea display verified badges on signed commits. This matters for projects where the commit history is itself a security property — if attackers can forge commits attributed to maintainers, they can socially engineer merges of malicious code.&lt;/p&gt;

&lt;p&gt;The SSH signing support added to Git 2.34 simplified this: rather than managing a GPG keyring, you can sign commits with an SSH key. GitHub supports SSH signatures directly without requiring GPG.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Approach&lt;/th&gt;
&lt;th&gt;Protects&lt;/th&gt;
&lt;th&gt;Key Management&lt;/th&gt;
&lt;th&gt;Transparency&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Classical code signing&lt;/td&gt;
&lt;td&gt;Artifact integrity from known key&lt;/td&gt;
&lt;td&gt;Long-lived private key&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sigstore / Cosign&lt;/td&gt;
&lt;td&gt;Artifact integrity + signing identity + audit trail&lt;/td&gt;
&lt;td&gt;Short-lived, OIDC-backed; no persistent private key&lt;/td&gt;
&lt;td&gt;Public log (Rekor)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPG commit signing&lt;/td&gt;
&lt;td&gt;Commit authorship attribution&lt;/td&gt;
&lt;td&gt;Long-lived GPG key&lt;/td&gt;
&lt;td&gt;Via WoT / keyservers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reproducible builds&lt;/td&gt;
&lt;td&gt;Binary matches published source&lt;/td&gt;
&lt;td&gt;N/A (multiple independent verifiers)&lt;/td&gt;
&lt;td&gt;Independent reproduction&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Reproducible Builds: Signing Isn't Enough
&lt;/h2&gt;

&lt;p&gt;Even perfect code signing doesn't answer one question: does this signed binary actually correspond to the published source code? A developer could sign a binary built from source code they haven't published. A malicious insider could modify the build process without changing the source repository.&lt;/p&gt;

&lt;p&gt;Reproducible builds address this orthogonal problem. When a build is reproducible, independently following the documented build process from the same source produces a bit-for-bit identical binary. Multiple parties can verify the build; agreement among independent verifiers provides evidence that the binary matches the source. Debian, Bitcoin Core, and Tor Browser have extensive reproducible build infrastructure.&lt;/p&gt;

&lt;p&gt;The combination of Sigstore (who signed it, when, logged publicly) plus reproducible builds (the binary matches the source) provides defense in depth that no single mechanism offers alone. This is the architecture serious security-sensitive open source projects are moving toward.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Users
&lt;/h2&gt;

&lt;p&gt;For most software users, these mechanisms operate invisibly — package managers, app stores, and update systems handle verification automatically. But understanding the architecture matters for two reasons.&lt;/p&gt;

&lt;p&gt;First, it clarifies what to look for when evaluating security-sensitive software. Projects that publish signed releases with detached signatures, maintain entries in Rekor, and have reproducible build infrastructure have made concrete investments in supply chain integrity. Projects that distribute unsigned binaries or rely on HTTPS-only distribution have not.&lt;/p&gt;

&lt;p&gt;Second, it frames the correct threat model. The compromise vectors that have worked against major organizations — SolarWinds, XZ Utils (the 2024 backdoor nearly merged into most Linux distributions) — target the build and distribution pipeline, not the cryptographic algorithms. The question isn't "can I verify the SHA256?" but "can I verify what was actually built and signed, by whom, and when?"&lt;/p&gt;

&lt;p&gt;For users of privacy-sensitive applications specifically, this matters: an application that encrypts your communications provides no protection if a malicious update can be delivered silently through a compromised build pipeline. Signed releases, logged in a transparency ledger, built reproducibly from audited source — that's the architecture that closes the gap.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/code-signing-sigstore-explained/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>Matrix: The Open Protocol for Federated Encrypted Messaging</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Sun, 10 May 2026 08:17:59 +0000</pubDate>
      <link>https://dev.to/havenmessenger/matrix-the-open-protocol-for-federated-encrypted-messaging-2c88</link>
      <guid>https://dev.to/havenmessenger/matrix-the-open-protocol-for-federated-encrypted-messaging-2c88</guid>
      <description>&lt;p&gt;Signal works well when everyone involved trusts the same company. Matrix is built for the case where they don't — where organizations want to run their own servers, where communities need to control their own infrastructure, and where interoperability across organizations matters more than simplicity.&lt;/p&gt;

&lt;p&gt;Matrix is an open standard for real-time communication. Not an app, not a company's product — a protocol specification, maintained by the Matrix.org Foundation, that anyone can implement. The most prominent client is Element (formerly Riot), but the protocol supports a wide range of clients and server implementations. Understanding Matrix requires understanding why federation was the core design goal, and what that goal costs in complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Federation: What It Means and Why It Matters
&lt;/h2&gt;

&lt;p&gt;Federated protocols allow servers run by different organizations to communicate with each other. Email is federated: a Gmail account can exchange messages with a Fastmail account and a self-hosted Postfix server. No single company owns email. Matrix works the same way for instant messaging: a user on matrix.org can be in the same room as a user on a university's homeserver, a company's self-hosted Synapse instance, and a self-hosted instance run by an individual.&lt;/p&gt;

&lt;p&gt;The alternative — what Signal, WhatsApp, iMessage, and Telegram use — is a centralized model: all users register with one company, all messages route through that company's servers. This is simpler to operate and easier to reason about cryptographically, but it creates a single point of control. The company can be compelled by courts, can unilaterally change terms of service, can go bankrupt or be acquired, or can simply decide to block users in certain jurisdictions.&lt;/p&gt;

&lt;p&gt;Federation distributes that control. No single party can shut down the Matrix network any more than a single party can shut down email. An organization that wants guaranteed access to its own communications can run its own homeserver and not depend on any third party remaining operational or cooperative.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Matrix Identity Model
&lt;/h2&gt;

&lt;p&gt;Every Matrix user has a Matrix ID of the form &lt;code&gt;@username:homeserver.tld&lt;/code&gt; — analogous to an email address. The homeserver portion identifies which server manages that account's identity and stores its messages. A user on matrix.org has an ID like &lt;code&gt;@alice:matrix.org&lt;/code&gt;; a user on a self-hosted server at corp.example.com has an ID like &lt;code&gt;@bob:corp.example.com&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Rooms in Matrix are identified by room IDs (internal) and room aliases (human-readable, like &lt;code&gt;#general:example.com&lt;/code&gt;). When a room has participants from multiple homeservers, each homeserver stores a copy of the room's event history. This replication is what makes federation work without a single authoritative server — but it also means the room history is stored in multiple places, and deleting a message requires cooperation from all participating homeservers.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Room State Graph
&lt;/h2&gt;

&lt;p&gt;Matrix's data model is unusual: a room is represented as a Directed Acyclic Graph (DAG) of events. Each message, membership change, or state update is an event that cryptographically references the events that preceded it. This structure is what allows multiple homeservers to replicate and merge room state without a central sequencer — servers can receive events out of order, detect forks (when two servers each received different events claiming the same "previous event"), and resolve them deterministically.&lt;/p&gt;

&lt;p&gt;The state resolution algorithm is one of the more complex parts of the Matrix specification. The current version (State Resolution v2) is designed to handle Byzantine conditions — scenarios where a homeserver is misbehaving or partition-healing after a network split — without allowing any single server to roll back the room's history or exclude valid events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Encryption in Matrix: Olm and Megolm
&lt;/h2&gt;

&lt;p&gt;Matrix's end-to-end encryption uses two related but distinct cryptographic ratchets, both implemented in the libolm library (and its newer Rust successor, vodozemac):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Olm&lt;/strong&gt; is used for 1:1 encrypted sessions between devices. It implements a Double Ratchet construction (the same design underlying the Signal Protocol), providing forward secrecy and break-in recovery. Each device-to-device session has its own Olm session; if one session key is compromised, past messages in that session retain forward secrecy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Megolm&lt;/strong&gt; is used for group rooms. It uses a single ratchet per room per device rather than per-message device-to-device sessions. This is a deliberate performance trade-off: in a 200-person room, encrypting each message with 199 separate Olm sessions would be prohibitively expensive. Instead, a sender generates a Megolm outbound session, distributes the current ratchet state to all room members (via Olm-encrypted to-device messages), and then encrypts messages using the shared Megolm session.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Megolm vs. Per-Message Ratchet: The Forward Secrecy Trade-Off&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Megolm's single ratchet per session provides forward secrecy within that session — messages encrypted at ratchet position N can't be decrypted using a key derived after position N. But since all room members received the same initial session key, a device that obtained the starting Megolm key can decrypt everything encrypted in that session. This is structurally different from Signal's per-message Double Ratchet, which provides stronger compromise recovery guarantees. It's a deliberate trade: group performance over maximal forward secrecy.&lt;/p&gt;
&lt;/blockquote&gt;
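
&lt;p&gt;To see why the initial session key is the sensitive object, here is a deliberately simplified single-chain sender ratchet in Python. Real Megolm uses a four-part ratchet plus an Ed25519 signing key; this sketch shows only the structural idea of one forward-stepping chain per sender:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hashlib
import hmac

class ToySenderRatchet:
    def __init__(self, session_key: bytes):
        self.state = session_key  # the value shared with every room member
        self.index = 0

    def next_message_key(self) -&gt; tuple[int, bytes]:
        # Derive this message's key, then step the chain forward one-way.
        key = hmac.new(self.state, b"message-key", hashlib.sha256).digest()
        self.state = hashlib.sha256(self.state).digest()
        index, self.index = self.index, self.index + 1
        return index, key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Anyone handed the chain state at index 0 can derive every later key in the session, which is exactly the trade described above. Later chain states, by contrast, cannot be walked backwards.&lt;/p&gt;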

&lt;p&gt;Key verification in Matrix works via cross-signing: each user has a master key, a user-signing key, and a self-signing key. Verifying another user means verifying their master key, typically via a QR code scan or emoji comparison during an interactive verification session. Once verified, that user's devices are trusted for encrypted communication without per-device verification.&lt;/p&gt;

&lt;h2&gt;
  
  
  Matrix vs. Signal: Different Problems, Different Solutions
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Matrix&lt;/th&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;XMPP + OMEMO&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Federation&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✓ Core design&lt;/td&gt;
&lt;td&gt;✗ Centralized&lt;/td&gt;
&lt;td&gt;✓ Core design&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;E2EE by default&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~ DMs yes; rooms opt-in&lt;/td&gt;
&lt;td&gt;✓ Always&lt;/td&gt;
&lt;td&gt;~ Client-dependent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Encryption protocol&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Olm/Megolm (Double Ratchet-based)&lt;/td&gt;
&lt;td&gt;Signal Protocol&lt;/td&gt;
&lt;td&gt;OMEMO (Signal Protocol)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Forward secrecy (groups)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~ Per-session (Megolm)&lt;/td&gt;
&lt;td&gt;~ Per-sender chain (Sender Keys)&lt;/td&gt;
&lt;td&gt;~ Per-session&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Phone number required&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✗ No&lt;/td&gt;
&lt;td&gt;✓ Yes&lt;/td&gt;
&lt;td&gt;✗ No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bridges to other protocols&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✓ Extensive&lt;/td&gt;
&lt;td&gt;✗ None&lt;/td&gt;
&lt;td&gt;~ Limited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self-hosting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;✓ Designed for it&lt;/td&gt;
&lt;td&gt;✗ Not supported&lt;/td&gt;
&lt;td&gt;✓ Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Signal's centralized model allows it to make strong, consistent guarantees about protocol behavior — there's one reference implementation and one server, so the encryption invariants are easier to audit and enforce. Matrix's federated model is more complex: the security of a conversation depends on the behavior of all participating homeservers, and a compromised or malicious homeserver can observe unencrypted room events (in non-E2EE rooms) or metadata.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bridges: Matrix as a Universal Messaging Hub
&lt;/h2&gt;

&lt;p&gt;One of Matrix's most practically useful features is its bridge ecosystem. A bridge is a bot-like process that translates between Matrix's protocol and another messaging platform's API. Bridges exist for Telegram, WhatsApp, Discord, Slack, IRC, Signal, SMS, and many others. The result is that a Matrix homeserver can act as a hub: messages sent from Discord arrive in a Matrix room, replies from Matrix users are forwarded back to Discord.&lt;/p&gt;

&lt;p&gt;The security implications of bridging are real and worth stating plainly: bridged messages are not end-to-end encrypted between the two platforms. A message from Matrix to a WhatsApp bridge decrypts on the bridge server before being transmitted to WhatsApp's infrastructure. Bridges are useful for consolidation and convenience; they don't preserve the encryption guarantees of either platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Honest Trade-Offs
&lt;/h2&gt;

&lt;p&gt;Matrix solves the federated, decentralized, self-hostable encrypted messaging problem better than any other protocol at production scale. It also carries costs that Signal doesn't:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Operational complexity&lt;/strong&gt; — running a Synapse homeserver requires maintenance, storage planning, and attention to upgrades&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata exposure to homeservers&lt;/strong&gt; — your homeserver admin can see who you talk to, when, and in which rooms (for non-E2EE rooms, the content too)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key management complexity&lt;/strong&gt; — cross-signing verification, key backup configuration, and device trust management are non-trivial for non-technical users&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Federation amplifies bugs&lt;/strong&gt; — a state resolution bug on one homeserver can affect all rooms that server participates in&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Matrix is the right choice for organizations that need to own their infrastructure, for communities that need governance independent of any single company, and for technical users who want federation over simplicity. For individuals who want maximum encryption assurance with minimal configuration, Signal's simpler model is probably the better fit — as we discussed in our post on the &lt;a href="https://dev.to/blog/posts/signal-phone-number-problem/"&gt;trade-offs in Signal's design&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The two protocols aren't competing for the same use case. They're solving different versions of the secure messaging problem, and knowing which version you have determines which tool fits.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/matrix-protocol-explained/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>Cold Boot Attacks: Why Disk Encryption Doesn't Protect a Running Computer</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Sat, 09 May 2026 08:19:11 +0000</pubDate>
      <link>https://dev.to/havenmessenger/cold-boot-attacks-why-disk-encryption-doesnt-protect-a-running-computer-4hoe</link>
      <guid>https://dev.to/havenmessenger/cold-boot-attacks-why-disk-encryption-doesnt-protect-a-running-computer-4hoe</guid>
      <description>&lt;p&gt;Cold boot attacks expose a gap between what disk encryption promises and what it delivers on a running computer. This post explains the attack mechanically, who it realistically affects, and which mitigations work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2008 Princeton Paper
&lt;/h2&gt;

&lt;p&gt;In 2008, a team of researchers from Princeton, the EFF, and Wind River Systems published "Lest We Remember: Cold Boot Attacks on Encryption Keys." They demonstrated that DRAM (dynamic random-access memory) retains its contents for seconds to minutes after power is removed — sometimes longer when cooled. By cutting power to a running machine, chilling the RAM modules, and booting from a custom USB tool, they dumped full RAM contents including the AES keys BitLocker, FileVault, and dm-crypt had been using to protect encrypted disks.&lt;/p&gt;

&lt;p&gt;The fundamental physics has not changed: DRAM cells are capacitors that lose charge over time, but "over time" can mean seconds at room temperature or minutes when chilled with compressed air or liquid nitrogen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The attack targets RAM, not the encrypted disk. It does not break AES. It recovers the key that was already decrypted and loaded into memory so the OS could do its job.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Attack Works
&lt;/h2&gt;

&lt;p&gt;A cold boot attack requires physical access to the target device. The procedure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access the running or recently-running machine.&lt;/strong&gt; The device must be on, in sleep mode, or recently powered off. A machine fully off for many minutes is generally safe.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimize data decay.&lt;/strong&gt; Cut power suddenly — not a graceful shutdown, which triggers OS memory-wiping routines — and chill the RAM with an inverted can of compressed air, which sprays liquid refrigerant and drops the modules' surface temperature below freezing to extend retention.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transfer the RAM.&lt;/strong&gt; On desktops, remove the DIMMs while cold and install them in an attacker-controlled machine. On laptops with soldered RAM, boot from a USB drive on the same machine while RAM is still cold.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dump and analyze.&lt;/strong&gt; A forensic boot tool captures the full RAM image. Automated tools scan for known AES key schedules and can recover keys even from partially decayed images using error-correction algorithms; the sketch after this list shows the recognition idea.&lt;/li&gt;
&lt;/ol&gt;
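
&lt;p&gt;The fourth step is worth seeing concretely. An expanded AES-128 key schedule has so much internal structure that a key can be recognized in a memory image with no other context. The Python sketch below shows the recognition idea on a toy "dump"; it is illustrative only, and real tools such as aeskeyfind also handle partially decayed bits with error correction:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import os

def gf_mul(a, b):
    # carry-less multiplication in GF(2^8), reduced by the AES polynomial
    result = 0
    for _ in range(8):
        if b &amp;amp; 1:
            result ^= a
        a = ((a &amp;lt;&amp;lt; 1) ^ 0x1B) &amp;amp; 0xFF if a &amp;amp; 0x80 else a &amp;lt;&amp;lt; 1
        b &gt;&gt;= 1
    return result

def make_sbox():
    # the AES S-box: multiplicative inverse followed by an affine transform
    def inverse(x):
        return next((y for y in range(1, 256) if gf_mul(x, y) == 1), 0)
    sbox = []
    for b in range(256):
        i = inverse(b)
        s = i
        for r in (1, 2, 3, 4):   # the rotate-and-XOR affine step
            s ^= ((i &amp;lt;&amp;lt; r) | (i &gt;&gt; (8 - r))) &amp;amp; 0xFF
        sbox.append(s ^ 0x63)
    return sbox

SBOX = make_sbox()

def expand_key(key):
    # AES-128 key expansion: 16 key bytes grow into a 176-byte schedule
    w, rcon = list(key), 1
    for i in range(16, 176, 4):
        t = w[i - 4:i]
        if i % 16 == 0:          # once per round: RotWord, SubWord, Rcon
            t = [SBOX[t[1]] ^ rcon, SBOX[t[2]], SBOX[t[3]], SBOX[t[0]]]
            rcon = gf_mul(rcon, 2)
        w += [x ^ y for x, y in zip(t, w[i - 16:i - 12])]
    return bytes(w)

def find_keys(dump):
    # flag every offset whose next 176 bytes form a valid AES-128 schedule
    return [off for off in range(len(dump) - 175)
            if expand_key(dump[off:off + 16]) == dump[off:off + 176]]

# demo: plant a key schedule inside random "memory" and recover its offset
key = os.urandom(16)
image = os.urandom(1024) + expand_key(key) + os.urandom(1024)
print(find_keys(image))          # prints [1024]
&lt;/code&gt;&lt;/pre&gt;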

&lt;h2&gt;
  
  
  Who Is Actually at Risk
&lt;/h2&gt;

&lt;p&gt;Cold boot attacks require physical access, specialized equipment, and technical expertise. This is not a mass-targeting technique. It is suited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Law enforcement and border agencies with physical custody of a device that was running at seizure&lt;/li&gt;
&lt;li&gt;Corporate espionage targeting executives or researchers whose devices might be briefly accessible&lt;/li&gt;
&lt;li&gt;Nation-state intelligence operations against specific high-value targets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For most people, the threat model does not include physical access by a sophisticated attacker with RAM forensics capability. If your concern is targeted physical access — a journalist at a border crossing with sensitive source material — it is worth thinking about carefully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Sleep Mode Is Particularly Dangerous
&lt;/h2&gt;

&lt;p&gt;When a laptop sleeps (suspend-to-RAM) rather than hibernating or fully shutting down, the encryption keys remain loaded in RAM. The disk stays encrypted; the key to decrypt it sits in DRAM, held in place by a trickle of power. The lock screen does not flush disk encryption keys from RAM.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A laptop closed and sleeping in a conference room is not in the same security state as a laptop that has been shut down. In sleep mode the keys are in volatile RAM with a trickle of power; in shutdown they are gone.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Modern Mitigations and Their Limits
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mitigation&lt;/th&gt;
&lt;th&gt;How It Works&lt;/th&gt;
&lt;th&gt;Limits&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Memory overwrite on shutdown&lt;/td&gt;
&lt;td&gt;OS zeros RAM during normal shutdown&lt;/td&gt;
&lt;td&gt;Only helps if attacker cannot cut power before shutdown completes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hibernate instead of sleep&lt;/td&gt;
&lt;td&gt;Encrypted disk image replaces RAM contents&lt;/td&gt;
&lt;td&gt;Slower wake; hibernate image is a separate attack surface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pre-boot PIN (BitLocker, LUKS)&lt;/td&gt;
&lt;td&gt;TPM will not release key without PIN&lt;/td&gt;
&lt;td&gt;Does not help against attack on a running or sleeping machine&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hardware memory encryption (AMD SME/SEV, Intel MKTME)&lt;/td&gt;
&lt;td&gt;CPU encrypts DRAM with a key held in the CPU&lt;/td&gt;
&lt;td&gt;Keys may still be in CPU cache; evolving attack surface&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Soldered/non-removable RAM&lt;/td&gt;
&lt;td&gt;Cannot transfer DIMMs to attacker machine&lt;/td&gt;
&lt;td&gt;Attacker can still cold-boot from USB on original hardware&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;AMD Secure Memory Encryption (SME) and Intel Multi-Key Total Memory Encryption (MKTME) are the most promising hardware-level mitigations. When enabled, the CPU transparently encrypts DRAM contents using a key held in the CPU — never exposed to the memory bus. A RAM dump from such a machine yields ciphertext, not key material.&lt;/p&gt;

&lt;p&gt;Apple Silicon (M-series) uses a unified memory architecture where CPU, GPU, and Neural Engine share the same physical package. Traditional DIMM-removal attacks are impossible, and cold boot via USB is complicated by Apple Secure Boot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Recommendations
&lt;/h2&gt;

&lt;p&gt;For people with elevated risk profiles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shut down completely&lt;/strong&gt; rather than sleeping or hibernating when leaving a device unattended in adversarial environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use hardware with memory encryption&lt;/strong&gt; — AMD with SME enabled, or Apple Silicon.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable pre-boot authentication&lt;/strong&gt; (TPM + PIN, not TPM-only auto-unlock).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider a travel device&lt;/strong&gt; with minimal sensitive data, wiped before and after high-risk travel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Full-disk encryption remains essential and effective against a powered-off device. Cold boot attacks have a different threat boundary — the running state, not the powered-off state. Know which threat you are defending against.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/cold-boot-attacks-ram-forensics/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>Post-Quantum Cryptography: What Happens to Your Encrypted Data When Quantum Arrives</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Fri, 08 May 2026 08:18:47 +0000</pubDate>
      <link>https://dev.to/havenmessenger/post-quantum-cryptography-what-happens-to-your-encrypted-data-when-quantum-arrives-5bo4</link>
      <guid>https://dev.to/havenmessenger/post-quantum-cryptography-what-happens-to-your-encrypted-data-when-quantum-arrives-5bo4</guid>
      <description>&lt;p&gt;Cryptographers are engaged in a race against a computer that doesn't fully exist yet. Quantum computers will, when sufficiently large and reliable, break the public-key cryptography that secures most internet communications today — HTTPS, SSH, PGP, Signal, and more. NIST finalized post-quantum replacement standards in August 2024. The migration has started. Here's what's actually at stake and where things stand.&lt;/p&gt;

&lt;p&gt;The security of RSA and elliptic-curve cryptography (ECC) rests on mathematical problems that are computationally hard on classical computers. RSA relies on the difficulty of factoring large integers. ECC relies on the hardness of the elliptic-curve discrete logarithm problem. Both problems are efficiently solvable by a quantum computer running Shor's algorithm — a quantum algorithm published by Peter Shor in 1994.&lt;/p&gt;

&lt;p&gt;"Efficiently solvable" means polynomial time rather than exponential time. A 2048-bit RSA key that would take classical computers longer than the age of the universe to factor could theoretically be broken by a sufficiently large quantum computer in hours. The word "theoretically" is doing heavy lifting here — the quantum computer required doesn't exist yet — but the mathematical vulnerability is not in dispute.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Actually Vulnerable
&lt;/h2&gt;

&lt;p&gt;Not all cryptography is equally affected by quantum computing. The threat splits cleanly:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Algorithm&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Quantum Threat&lt;/th&gt;
&lt;th&gt;Replacement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;RSA-2048/4096&lt;/td&gt;
&lt;td&gt;Key exchange, signatures&lt;/td&gt;
&lt;td&gt;Broken by Shor's&lt;/td&gt;
&lt;td&gt;ML-KEM (Kyber)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ECDH / X25519&lt;/td&gt;
&lt;td&gt;Key exchange&lt;/td&gt;
&lt;td&gt;Broken by Shor's&lt;/td&gt;
&lt;td&gt;ML-KEM (Kyber)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ECDSA / Ed25519&lt;/td&gt;
&lt;td&gt;Digital signatures&lt;/td&gt;
&lt;td&gt;Broken by Shor's&lt;/td&gt;
&lt;td&gt;ML-DSA (Dilithium)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AES-256&lt;/td&gt;
&lt;td&gt;Symmetric encryption&lt;/td&gt;
&lt;td&gt;Weakened by Grover's (128-bit effective)&lt;/td&gt;
&lt;td&gt;Keep using AES-256&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SHA-256 / SHA-3&lt;/td&gt;
&lt;td&gt;Hashing&lt;/td&gt;
&lt;td&gt;Weakened by Grover's (128-bit effective)&lt;/td&gt;
&lt;td&gt;Keep using SHA-256+&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key insight: &lt;strong&gt;symmetric cryptography is weakened but not broken by quantum computing.&lt;/strong&gt; Grover's algorithm halves the effective key length of symmetric ciphers. AES-256 becomes effectively AES-128 against a quantum adversary — still computationally expensive to break, not broken in principle. The algorithms that are actually broken are the asymmetric ones: the key exchange and signature algorithms used in TLS handshakes, SSH key exchange, PGP encryption, and the initial key agreement in messaging protocols.&lt;/p&gt;

&lt;h2&gt;
  
  
  Harvest Now, Decrypt Later: The Immediate Threat
&lt;/h2&gt;

&lt;p&gt;The practical concern isn't waiting for a quantum computer to arrive before worrying. State actors almost certainly began collecting encrypted internet traffic years ago under the assumption that they'll be able to decrypt it retroactively once a capable quantum computer becomes available. This strategy is called "harvest now, decrypt later" — and it's the reason the post-quantum migration matters even before quantum computers can break encryption in real time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Why This Matters Now:&lt;/strong&gt; Any data encrypted today using RSA or ECC key exchange — classified government communications, medical records, financial transactions, journalist-source communications — may already be in adversarial hands, stored for future decryption. The relevant question is not "when will quantum computers arrive?" but "how long does my data need to remain confidential?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Perfect forward secrecy helps mitigate this in theory — ephemeral session keys mean captured traffic can't be decrypted even if a long-term key is later compromised. But forward secrecy doesn't help if the ephemeral key exchange itself (using ECDH) is broken by a quantum computer. The traffic captured today can be retroactively decrypted once the ephemeral keys are reconstructed via Shor's algorithm.&lt;/p&gt;

&lt;h2&gt;
  
  
  NIST's Post-Quantum Standards
&lt;/h2&gt;

&lt;p&gt;The National Institute of Standards and Technology ran a multi-year post-quantum cryptography competition starting in 2016. After multiple rounds of evaluation and cryptanalysis by the global research community, NIST finalized three primary standards in August 2024:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ML-KEM (FIPS 203)&lt;/strong&gt; — Module-Lattice Key Encapsulation Mechanism, based on CRYSTALS-Kyber. Replaces RSA and ECDH for key encapsulation/exchange. Based on the hardness of the Module Learning With Errors (MLWE) problem.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ML-DSA (FIPS 204)&lt;/strong&gt; — Module-Lattice Digital Signature Algorithm, based on CRYSTALS-Dilithium. Replaces ECDSA/Ed25519 for digital signatures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SLH-DSA (FIPS 205)&lt;/strong&gt; — Stateless Hash-Based Digital Signature Algorithm, based on SPHINCS+. A hash-based signature scheme that provides a mathematically independent backup to the lattice-based ML-DSA — its security rests on the security of hash functions rather than lattice problems.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These algorithms rest on different mathematical structures than RSA and ECC — primarily lattice problems and hash functions — which are believed to resist both classical and quantum attacks. "Believed" is the operative word: the lattice assumptions are much younger than factoring and have absorbed far less cryptanalysis, which is why hybrid schemes (combining a classical and a post-quantum algorithm) are currently preferred. An attacker must break both components to break the hybrid.&lt;/p&gt;
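
&lt;p&gt;A minimal sketch of the hybrid pattern in Python, assuming the pyca/cryptography package for the classical half. ML-KEM is not in Python's standard library, so its shared secret is an explicitly labeled stand-in here; the point is the combiner:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# classical half: an ephemeral X25519 exchange
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
classical_secret = alice.exchange(bob.public_key())

# post-quantum half: stand-in for an ML-KEM encapsulation (a real
# implementation would use a library such as liboqs to produce this)
pq_secret = os.urandom(32)

# the combiner is the load-bearing idea: the session key depends on the
# concatenation of both secrets, so an attacker must break both halves
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"hybrid-demo",
).derive(classical_secret + pq_secret)
&lt;/code&gt;&lt;/pre&gt;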

&lt;h2&gt;
  
  
  Who Has Already Migrated
&lt;/h2&gt;

&lt;p&gt;Migration is happening faster than most public discussion acknowledges.&lt;/p&gt;

&lt;p&gt;Signal deployed PQXDH (Post-Quantum Extended Diffie-Hellman) in September 2023, combining X25519 with CRYSTALS-Kyber for key agreement. A session protected by PQXDH requires an adversary to break both the classical ECDH exchange and the post-quantum ML-KEM exchange — providing security against harvest-now-decrypt-later attacks even if one algorithm is eventually broken.&lt;/p&gt;

&lt;p&gt;Apple deployed PQ3 in iMessage in February 2024, also using a hybrid scheme combining ECC with Kyber. iMessage's PQ3 provides post-quantum security for initial key establishment and ongoing rekeying, so that even if device keys are compromised in the future, the conversation history remains protected.&lt;/p&gt;

&lt;p&gt;Major browsers (Chrome and Firefox) added support for hybrid X25519+ML-KEM key exchange in TLS 1.3 during 2023-2024. Cloudflare and other major TLS intermediaries began offering post-quantum TLS connections. OpenSSH 9.0 (released April 2022) made a hybrid post-quantum key exchange, Streamlined NTRU Prime combined with X25519, the default; ML-KEM-based key exchange followed in later releases.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Timeline Question
&lt;/h2&gt;

&lt;p&gt;A Cryptographically Relevant Quantum Computer (CRQC) — one large and stable enough to run Shor's algorithm against RSA-2048 — requires on the order of 4,000 logical qubits. Due to quantum error correction requirements, logical qubits require hundreds to thousands of physical qubits each. Current quantum computers (as of mid-2026) operate in the thousands of noisy physical qubits range and are far from the millions of stable physical qubits needed.&lt;/p&gt;

&lt;p&gt;Estimates for when a CRQC might exist range from 5 years (optimistic) to 20+ years (common among independent cryptographers) to "never with current architectural approaches." The honest answer is that no one knows. What is known is that the harvest-now-decrypt-later threat is real regardless of timeline, and that migration takes years to complete across global infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You Should Do Today
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Keep software updated.&lt;/strong&gt; Post-quantum upgrades are being shipped silently in operating systems, browsers, and apps. Staying current is the primary action available to most users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prefer apps that have announced PQC migration.&lt;/strong&gt; Signal's PQXDH and Apple's PQ3 are publicly documented. Others are in progress.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For long-lived sensitive documents,&lt;/strong&gt; consider an additional layer of symmetric encryption with AES-256, as sketched after this list. Symmetric encryption is quantum-resistant at 256-bit key lengths.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Evaluate your data's required confidentiality horizon.&lt;/strong&gt; Information that needs to stay secret for 20+ years faces a different risk profile than information relevant for 2 years.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't panic about current communications.&lt;/strong&gt; The infrastructure migration is underway, and for most users, the most important action is simply running up-to-date software from providers who are actively migrating.&lt;/li&gt;
&lt;/ul&gt;
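
&lt;p&gt;For the extra symmetric layer suggested above, a minimal sketch using AES-256-GCM from the pyca/cryptography package; the file names are illustrative, and losing the key means losing the data:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # store this key somewhere safe
nonce = os.urandom(12)                      # GCM nonce: unique per encryption
plaintext = open("archive.tar", "rb").read()
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
open("archive.tar.enc", "wb").write(nonce + ciphertext)
&lt;/code&gt;&lt;/pre&gt;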

&lt;p&gt;The cryptographic transition underway is the largest infrastructure-level security migration since the introduction of TLS itself. It's happening gradually and mostly invisibly — which is exactly how large infrastructure migrations should happen. The threat is real, the solutions are standardized, and the work is in progress.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/post-quantum-cryptography/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>Reproducible Builds: The Only Way to Verify Your Software Wasn't Tampered With</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Thu, 07 May 2026 08:20:41 +0000</pubDate>
      <link>https://dev.to/havenmessenger/reproducible-builds-the-only-way-to-verify-your-software-wasnt-tampered-with-31h</link>
      <guid>https://dev.to/havenmessenger/reproducible-builds-the-only-way-to-verify-your-software-wasnt-tampered-with-31h</guid>
      <description>&lt;p&gt;When a privacy app publishes its source code, many users assume that's sufficient to trust the binary they download. It isn't. The gap between source code and running software is a build pipeline — and that pipeline is exactly where sophisticated attackers insert themselves. Reproducible builds close that gap.&lt;/p&gt;

&lt;p&gt;Open source software has a trust problem that open source alone doesn't solve. You can publish every line of code on GitHub and still distribute a binary that contains code nobody reviewed. The build system — the servers, scripts, compilers, and toolchains that turn source into executable — sits between the audited code and the running program.&lt;/p&gt;

&lt;p&gt;Reproducible builds are a technique that lets anyone independently verify a binary was compiled from specific source code, without trusting the developer's build infrastructure. The property is simple to state and surprisingly difficult to achieve: given the same source code and build instructions, any two independent builds produce byte-for-byte identical output.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Build Pipelines Are Attack Targets
&lt;/h2&gt;

&lt;p&gt;Ken Thompson's 1984 Turing Award lecture, "Reflections on Trusting Trust," articulated the fundamental problem: a compiler can be modified to insert backdoors into any program it compiles, including its own source. The attack is self-perpetuating and invisible in the source code. Thompson called this the "trusting trust" problem and concluded: "You can't trust code that you did not totally create yourself."&lt;/p&gt;

&lt;p&gt;Modern supply chain attacks follow the same logic. The 2020 SolarWinds compromise inserted malicious code into the Orion build system — the distributed binaries contained backdoors that weren't in the source repository. The XcodeGhost incident in 2015 involved a modified version of Apple's development toolchain distributed via unofficial channels; apps compiled with it contained hidden functionality. In neither case would auditing the source code have caught anything.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The trust problem&lt;/strong&gt;: Open source proves that reviewed code exists. It does not prove the binary you downloaded was compiled from that code. A developer's build server, CI/CD pipeline, or signing key could all be compromised without any change to the publicly visible source repository. Reproducible builds let you check.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For privacy software specifically, this is a high-value target. An attacker who compromises the build pipeline of a secure messenger can distribute a backdoored version to millions of users while every public code audit comes back clean.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes a Build Non-Reproducible
&lt;/h2&gt;

&lt;p&gt;Most software builds are non-reproducible not because of malice, but because developers don't think about it. Common sources of non-determinism:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Timestamps&lt;/strong&gt; — compilers embed file modification times or build timestamps into binaries by default&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File system ordering&lt;/strong&gt; — directory listings are not ordered consistently across systems; build tools that iterate over files produce different object ordering&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Locale and timezone&lt;/strong&gt; — string sorting and date formatting vary by system locale&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Random seeds&lt;/strong&gt; — some compilers randomize symbol ordering for security (ASLR) without preserving the seed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Toolchain version differences&lt;/strong&gt; — different compiler versions produce different output even from identical source&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embedded paths&lt;/strong&gt; — the absolute path of the build directory is often embedded in debug symbols&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Achieving reproducibility requires explicitly controlling or eliminating all of these: fixing timestamps (often to the date of the last commit), using deterministic file ordering, specifying exact toolchain versions, and stripping or normalizing embedded paths.&lt;/p&gt;
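
&lt;p&gt;As a concrete illustration, a minimal deterministic tarball builder in Python. It normalizes the timestamp, ordering, and ownership sources listed above; pinning the timestamp mirrors the SOURCE_DATE_EPOCH convention used by the Reproducible Builds project, and the paths are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import gzip, io, os, tarfile

def deterministic_targz(src_dir, out_path, epoch=0):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        entries = []
        # walk in sorted order so member order never depends on the filesystem
        for root, dirs, files in os.walk(src_dir):
            dirs.sort()
            for name in sorted(files):
                entries.append(os.path.join(root, name))
        for path in entries:
            info = tar.gettarinfo(path, arcname=os.path.relpath(path, src_dir))
            info.mtime = epoch            # fixed timestamp (cf. SOURCE_DATE_EPOCH)
            info.uid = info.gid = 0       # strip the builder's uid/gid
            info.uname = info.gname = ""
            info.mode = 0o644             # normalize permissions
            with open(path, "rb") as f:
                tar.addfile(info, f)
    # gzip embeds its own timestamp in the header; pin it too
    with open(out_path, "wb") as out:
        with gzip.GzipFile(fileobj=out, mode="wb", mtime=epoch) as gz:
            gz.write(buf.getvalue())
&lt;/code&gt;&lt;/pre&gt;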

&lt;h2&gt;
  
  
  Who Achieves It
&lt;/h2&gt;

&lt;p&gt;The Tor Browser is the most prominent example in the privacy space. The Tor Project has shipped reproducible builds since 2013. Anyone can download the Tor Browser source, follow the published build instructions using the specified toolchain, and verify that the resulting binary matches what the Tor Project distributes — down to the SHA-256 hash. This is a meaningful security property: it means you can verify the binary without trusting Tor Project's build infrastructure.&lt;/p&gt;

&lt;p&gt;Bitcoin Core has shipped deterministic builds for years, originally via the Gitian system and, since 2019, increasingly via Guix-based bootstrapping. The project uses GNU Guix to build from a minimal, audited set of binary seeds, then bootstraps the entire toolchain from source. The result is verifiable at every stage. For software that controls financial assets, this level of rigor is warranted.&lt;/p&gt;

&lt;p&gt;Debian Linux has been working toward reproducible builds since 2015. As of 2026, the vast majority of packages in Debian's archive are reproducible. The project maintains a public tracker at reproducible-builds.org that shows which packages pass and which don't.&lt;/p&gt;

&lt;p&gt;Signal does not currently achieve full reproducibility for its mobile apps. This is a known gap the Signal community has raised: the Android APK can be rebuilt and compared, but reproducibility of the full build environment is not guaranteed. F-Droid, an alternative Android app store focused on free and open source software, builds apps from source in a clean environment and signs them independently of the original developers — a partial mitigation that shifts trust from the developer's pipeline to F-Droid's.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Verify in Practice
&lt;/h2&gt;

&lt;p&gt;For software that supports reproducible builds, verification follows a consistent pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Obtain the source code at the exact tagged version of the release you're verifying&lt;/li&gt;
&lt;li&gt;Install the exact toolchain version specified in the build documentation&lt;/li&gt;
&lt;li&gt;Run the build using the provided build script or container&lt;/li&gt;
&lt;li&gt;Compare the SHA-256 (or SHA-512) hash of your output against the developer's published hash (see the sketch after this list)&lt;/li&gt;
&lt;li&gt;Optionally: verify the developer's published hash is signed by a key you've verified via the web of trust or key transparency&lt;/li&gt;
&lt;/ol&gt;
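
&lt;p&gt;Step 4 is mechanical. A small sketch of the comparison, with placeholder names for the artifact and the published hash:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib

def sha256_file(path):
    # stream the file in 1 MiB chunks to avoid loading it all at once
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1048576), b""):
            h.update(chunk)
    return h.hexdigest()

# both values are placeholders for the real artifact and published hash
published = "0000...placeholder...0000"
print(sha256_file("my-local-build.tar.xz") == published)
&lt;/code&gt;&lt;/pre&gt;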

&lt;blockquote&gt;
&lt;p&gt;The process is computationally intensive — rebuilding the Tor Browser or Bitcoin Core takes significant time and disk space. Most users won't do this. The value is that someone can, and that a large community of independent builders checking each release creates a strong deterrent against tampering.&lt;br&gt;
— Reproducible Builds project documentation&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Reproducibility and Key Transparency
&lt;/h2&gt;

&lt;p&gt;Reproducible builds answer one question: does this binary match this source? They don't answer: did everyone get the same binary? A compromised distribution server could serve different binaries to different users — a targeted attack that reproducible builds alone don't detect.&lt;/p&gt;

&lt;p&gt;This is where key transparency and binary transparency come in. Systems like binary transparency logs (analogous to Certificate Transparency for TLS) maintain an append-only, publicly auditable log of every binary release. Clients can verify that the binary they received is included in the log, and that the log is consistent across all observers.&lt;/p&gt;
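
&lt;p&gt;A toy sketch of the append-only idea. Production systems such as Rekor and Certificate Transparency use Merkle trees with efficient inclusion and consistency proofs rather than this linear chain, but the consistency property is the same:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib

class ToyLog:
    # each new head commits to the previous head plus the new entry, so
    # two observers holding the same head have seen the same sequence
    def __init__(self):
        self.head = bytes(32)
        self.entries = []

    def append(self, binary_hash):
        self.entries.append(binary_hash)
        self.head = hashlib.sha256(self.head + binary_hash).digest()
        return self.head

log = ToyLog()
h1 = log.append(hashlib.sha256(b"release-1.0 binary").digest())
h2 = log.append(hashlib.sha256(b"release-1.1 binary").digest())
# a client checks that its binary's hash appears in a log whose head
# matches what independent auditors report
&lt;/code&gt;&lt;/pre&gt;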

&lt;p&gt;The combination of reproducible builds (anyone can verify source-to-binary integrity) and binary transparency (anyone can verify they received the same binary as everyone else) closes the loop on software supply chain verification. Both properties together are necessary; neither is sufficient alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Status Across Privacy Software
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Software&lt;/th&gt;
&lt;th&gt;Reproducible Builds&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tor Browser&lt;/td&gt;
&lt;td&gt;✓ Yes&lt;/td&gt;
&lt;td&gt;Since 2013; full instructions published&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bitcoin Core&lt;/td&gt;
&lt;td&gt;✓ Yes (Guix)&lt;/td&gt;
&lt;td&gt;Bootstrapped from minimal binary seeds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debian packages&lt;/td&gt;
&lt;td&gt;✓ Most&lt;/td&gt;
&lt;td&gt;reproducible-builds.org tracks status per package&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Signal Android&lt;/td&gt;
&lt;td&gt;~ Partial&lt;/td&gt;
&lt;td&gt;APK can be rebuilt; full environment reproducibility not guaranteed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Signal iOS&lt;/td&gt;
&lt;td&gt;✗ No&lt;/td&gt;
&lt;td&gt;Apple's toolchain prevents full reproducibility&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;F-Droid apps&lt;/td&gt;
&lt;td&gt;✓ F-Droid-built&lt;/td&gt;
&lt;td&gt;Built independently from source; shifts trust to F-Droid&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Most commercial apps&lt;/td&gt;
&lt;td&gt;✗ No&lt;/td&gt;
&lt;td&gt;Binary-only distribution; no way to verify&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The Standard Worth Demanding
&lt;/h2&gt;

&lt;p&gt;For any software that handles your private communications, financial assets, or encryption keys, reproducible builds should be a minimum expectation — not an advanced feature. The Tor Project and Bitcoin Core demonstrate it's achievable for real-world, complex software. The absence of reproducible builds doesn't make software untrustworthy, but it does mean you're placing unverifiable trust in a build pipeline you can't inspect.&lt;/p&gt;

&lt;p&gt;For privacy-conscious users evaluating tools, asking "are the builds reproducible?" is a legitimate and important question. For developers of privacy software, achieving it is worth the engineering investment. The Reproducible Builds project maintains tooling, documentation, and a community that makes the path clearer than it was a decade ago.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/reproducible-builds-explained/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>The PGP Web of Trust: Why Key Verification Is Harder Than It Looks</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Wed, 06 May 2026 08:21:07 +0000</pubDate>
      <link>https://dev.to/havenmessenger/the-pgp-web-of-trust-why-key-verification-is-harder-than-it-looks-3015</link>
      <guid>https://dev.to/havenmessenger/the-pgp-web-of-trust-why-key-verification-is-harder-than-it-looks-3015</guid>
      <description>&lt;p&gt;OpenPGP's web of trust was one of the most ambitious ideas in the history of cryptography: a decentralized system where ordinary users could vouch for each other's keys without any central authority. Phil Zimmermann built it into PGP in the early 1990s, and it mostly didn't work. Understanding why gets at something fundamental about trust, coordination, and the gap between elegant cryptography and real human behavior.&lt;/p&gt;

&lt;p&gt;The key verification problem is this: when you receive a message claiming to be from someone, how do you know the public key you used to verify it actually belongs to them, rather than to an attacker who generated a key with their name on it?&lt;/p&gt;

&lt;p&gt;This isn't a theoretical concern. Key substitution attacks — where an attacker convinces a target to use the attacker's public key instead of the intended recipient's — have been demonstrated against PGP users who didn't verify key fingerprints. The encryption is working perfectly; the problem is that it's encrypting to the wrong person.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Web of Trust Does
&lt;/h2&gt;

&lt;p&gt;PGP's solution was to make key authentication a social problem rather than a hierarchical one. If Alice wants to confirm that a key really belongs to Bob, she can look for a signature chain: does anyone Alice already trusts vouch that this key is Bob's? If Charlie, whom Alice trusts completely, has signed Bob's key after verifying his identity out-of-band, Alice can extend that trust to Bob's key.&lt;/p&gt;

&lt;p&gt;The mechanism works like this: each PGP user has a key pair. When you verify someone's identity — typically by meeting in person, checking their photo ID, and comparing the key fingerprint — you sign their public key with your private key. That signature becomes part of their key's public record on key servers. Others who trust you can then trust that key transitively.&lt;/p&gt;

&lt;p&gt;PGP defines three levels of trust you can assign to key owners:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ultimate trust:&lt;/strong&gt; Your own key. Keys you sign are trusted completely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full trust:&lt;/strong&gt; You trust this person to carefully verify others' identities before signing their keys.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Marginal trust:&lt;/strong&gt; You somewhat trust their key signing. Typically, three marginally trusted signatures are required to establish a key as valid in your keyring; the sketch after this list shows the rule.&lt;/li&gt;
&lt;/ul&gt;
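
&lt;p&gt;A minimal sketch of that validity rule, following GnuPG's classic defaults (one full signer, or three marginal signers); the names and structures are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;FULL, MARGINAL = "full", "marginal"

def key_is_valid(signers, trust, marginals_needed=3):
    # GnuPG's classic defaults: one fully trusted signer is enough,
    # otherwise three marginally trusted signers are required
    fulls = sum(1 for s in signers if trust.get(s) == FULL)
    marginals = sum(1 for s in signers if trust.get(s) == MARGINAL)
    return fulls &gt;= 1 or marginals &gt;= marginals_needed

trust = {"charlie": FULL, "dana": MARGINAL, "erin": MARGINAL, "felix": MARGINAL}
print(key_is_valid(["charlie"], trust))                # True: one full signer
print(key_is_valid(["dana", "erin"], trust))           # False: two marginals
print(key_is_valid(["dana", "erin", "felix"], trust))  # True: three marginals
&lt;/code&gt;&lt;/pre&gt;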

&lt;p&gt;The math works. Given a sufficiently dense signature graph, you can establish a path of verified trust to almost any key without needing a central authority. This was a genuine cryptographic insight in 1991, and it influenced how the broader field thought about decentralized trust for years.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Web of Trust Mostly Didn't Work
&lt;/h2&gt;

&lt;p&gt;The web of trust requires two things from ordinary users that ordinary users reliably don't do: carefully verify identities before signing keys, and actively maintain their participation in the signing network.&lt;/p&gt;

&lt;p&gt;Key signing parties — formal events where participants exchange key fingerprints and sign each other's keys — were the theoretical backbone of building the web. In practice, they were attended almost exclusively by cryptography enthusiasts and open-source developers. Most people who wanted to use encrypted email never attended one, never signed anyone's key, and never got their own key signed by anyone in their contact list.&lt;/p&gt;

&lt;p&gt;The result was a web of trust that was dense in a small community of technical users and essentially nonexistent for everyone else. A journalist trying to receive tips via encrypted email, a lawyer communicating with a client, a family member trying to exchange private messages — none of them had paths of trust to each other's keys.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Web of trust failures aren't cryptographic failures. The algorithm is sound. The failure is that building trust graphs requires ongoing human coordination that doesn't happen organically at scale. Cryptographic systems that require non-cryptographers to perform careful manual verification steps will be skipped in practice.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There were also security problems within the community that did participate. In 2019, SKS keyserver operators documented a "certificate spamming" attack in which malicious actors flooded public key servers with enormous numbers of signatures on the keys of prominent developers, causing GnuPG to crash or become extremely slow when trying to process those keys. The public key infrastructure had no mechanism to prevent this. The SKS keyserver network was essentially deprecated as a result, replaced by Hagrid (keys.openpgp.org), which strips third-party signatures by default.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CA Alternative: Different Trade-offs, Same Fundamental Problem
&lt;/h2&gt;

&lt;p&gt;The HTTPS ecosystem solved key verification differently: with Certificate Authorities (CAs). A CA is an organization that vouches for domain ownership by issuing signed certificates. Your browser ships with a list of trusted CAs; when you connect to a website, the server presents a certificate signed by one of those CAs, and your browser trusts it.&lt;/p&gt;

&lt;p&gt;This model scales beautifully in practice — the CA infrastructure is why HTTPS deployment has become nearly universal. But it centralizes trust in the CA list, and the CA list is controlled by browser vendors and operating system developers. Rogue CA behavior has happened: in 2011, DigiNotar was compromised and issued fraudulent certificates for dozens of domains including Google. The entire CA was removed from trust lists and subsequently went bankrupt.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Scales to Regular Users&lt;/th&gt;
&lt;th&gt;Central Failure Point&lt;/th&gt;
&lt;th&gt;User Verification Required&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PGP Web of Trust&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Manual, out-of-band&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CA / PKI (HTTPS)&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;CA compromise&lt;/td&gt;
&lt;td&gt;None (automatic)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TOFU&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;td&gt;Detect changes only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Key Transparency&lt;/td&gt;
&lt;td&gt;Emerging&lt;/td&gt;
&lt;td&gt;Log server&lt;/td&gt;
&lt;td&gt;Automated (auditors)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Trust on First Use: The Pragmatic Middle Ground
&lt;/h2&gt;

&lt;p&gt;SSH made a different choice early on. When you connect to a new SSH server, the client stores the server's public key fingerprint. On subsequent connections, it checks that the key matches. If it doesn't, you get a loud warning: something changed, and you need to investigate.&lt;/p&gt;

&lt;p&gt;This is Trust on First Use (TOFU). It doesn't establish identity at first contact — it just remembers what you saw and detects if it changes. For most use cases, it's enough. An attacker trying to impersonate the server you've been connecting to for six months needs to have been in position during your very first connection. If they weren't, subsequent interception attempts trigger a warning.&lt;/p&gt;

&lt;p&gt;Signal uses a variant of TOFU. When you first communicate with a contact, their key is trusted. If it changes later — because they got a new phone, or because someone is attempting a man-in-the-middle attack — the app displays a "safety number changed" warning and allows you to verify the new number out-of-band.&lt;/p&gt;
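
&lt;p&gt;The pattern shared by SSH's known_hosts file and Signal's safety numbers fits in a few lines. A minimal sketch, with an illustrative storage format:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import json, pathlib

STORE = pathlib.Path("known_keys.json")   # illustrative pin store

def check_key(peer, fingerprint):
    known = json.loads(STORE.read_text()) if STORE.exists() else {}
    pinned = known.get(peer)
    if pinned is None:
        known[peer] = fingerprint                 # first use: pin silently
        STORE.write_text(json.dumps(known))
        return "pinned on first use"
    if pinned == fingerprint:
        return "ok"
    return "WARNING: key changed, verify out-of-band before continuing"
&lt;/code&gt;&lt;/pre&gt;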

&lt;h2&gt;
  
  
  Key Transparency: Where This Is Going
&lt;/h2&gt;

&lt;p&gt;The current state of the art for scalable key verification is key transparency — an approach that uses cryptographically verifiable append-only logs to make a service's claimed key bindings auditable. The idea: instead of trusting that a service is giving you the correct key for a contact, you can verify that it has given everyone the same key.&lt;/p&gt;

&lt;p&gt;If a service shows user A one key for user B and shows user C a different key for user B, that inconsistency can be detected by auditors who compare their views of the log. An attacker or compromised server that substitutes keys is detectable after the fact. Google's Key Transparency project and WhatsApp's auditable key directory are examples of this approach in practice.&lt;/p&gt;

&lt;p&gt;Key transparency doesn't eliminate the problem of establishing who controls a given key in the first place — it just makes key substitution attacks visible. But combined with TOFU semantics, it substantially raises the bar for what an attacker needs to do to intercept encrypted communication undetected.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Good Key Verification Looks Like Today
&lt;/h2&gt;

&lt;p&gt;For most encrypted messaging use cases, the practical answer to key verification is a layered combination:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;TOFU as baseline:&lt;/strong&gt; Trust the first key you see, and alert loudly if it changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Out-of-band verification for high-stakes contacts:&lt;/strong&gt; For contacts where impersonation would be catastrophic, verify safety numbers or key fingerprints in person or via a second communication channel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Key transparency for service-level accountability:&lt;/strong&gt; Where the messaging provider operates a transparent key directory, auditors can detect systematic substitution attacks.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The web of trust was a beautiful attempt to solve this with pure cryptography and social coordination. Its failure wasn't a failure of the math — it was a lesson in the limits of expecting security-critical manual steps from people who have more pressing things to do. The systems that have actually deployed at scale are the ones that require the least from users while still catching the most dangerous attack patterns.&lt;/p&gt;

&lt;p&gt;PGP's web of trust still exists and still works within the communities that use it. For email encryption between technically engaged users who've met at key signing parties and carefully maintained their keyrings, it provides real, verifiable trust. That's a niche — but it's an honest one, and it's worth understanding what it actually delivers and what it doesn't.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/pgp-web-of-trust-explained/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>How Password Managers Actually Protect Your Data</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Tue, 05 May 2026 08:20:20 +0000</pubDate>
      <link>https://dev.to/havenmessenger/how-password-managers-actually-protect-your-data-944</link>
      <guid>https://dev.to/havenmessenger/how-password-managers-actually-protect-your-data-944</guid>
      <description>&lt;p&gt;A password manager is the highest-leverage security upgrade most people can make — more impactful than any VPN, any antivirus, any clever browser extension. But the 2022 LastPass breach made visible something the security community had long known: not all password managers are equally protected, and the difference lies in the cryptography, not the marketing.&lt;/p&gt;

&lt;p&gt;Password reuse is responsible for the majority of successful account takeovers. The pattern is straightforward: a breach at a low-security site exposes your email and password; attackers stuff those credentials into every other site automatically; accounts fall. The only defense is unique passwords everywhere, which is impossible to remember, which is where password managers come in.&lt;/p&gt;

&lt;p&gt;Most people understand what password managers do. Fewer understand how they protect the vault itself — what happens if the password manager company is breached, and what separates a vault you can trust from one that looks trustworthy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Zero-Knowledge Architecture
&lt;/h2&gt;

&lt;p&gt;A well-designed password manager never sees your master password. The architecture works like this: your master password is run through a key derivation function — typically PBKDF2-SHA256 or Argon2 — to produce an encryption key. That key is used to encrypt your vault locally, before anything reaches the server. The server receives an encrypted blob. The server's breach exposes the encrypted blob. The encrypted blob, without the key derived from your master password, is useless.&lt;/p&gt;

&lt;p&gt;This is called zero-knowledge architecture: the service provider has zero ability to decrypt your data, because they never have the key.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key Derivation Functions&lt;/strong&gt;: PBKDF2 (Password-Based Key Derivation Function 2) stretches a password into a cryptographic key by running it through a hash function many thousands of times — making brute-force attacks expensive. Argon2 is newer, designed to be memory-hard, making GPU-accelerated cracking harder. Both are substantially stronger than running the password through a simple hash once.&lt;/p&gt;
&lt;/blockquote&gt;
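
&lt;p&gt;A minimal sketch of that client-side flow, assuming the pyca/cryptography package for AES-GCM; the parameters follow current guidance for PBKDF2-SHA256, and the blob layout is illustrative rather than any vendor's actual format:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_vault_key(master_password, salt):
    # PBKDF2-SHA256 at 600,000 iterations, in line with current guidance
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)

def encrypt_vault(master_password, vault_json):
    salt, nonce = os.urandom(16), os.urandom(12)
    key = derive_vault_key(master_password, salt)   # 32-byte key, derived locally
    ciphertext = AESGCM(key).encrypt(nonce, vault_json, None)
    return salt + nonce + ciphertext                # all the server ever sees

blob = encrypt_vault("correct horse battery staple", b'{"example.com": "hunter2"}')
&lt;/code&gt;&lt;/pre&gt;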

&lt;p&gt;The security of the entire system rests on one thing: the strength of your master password. A short, dictionary-adjacent master password means an attacker who steals your vault can crack the key. A long, high-entropy master password means they can't — at least not with any currently foreseeable computing power.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the LastPass Breach Actually Revealed
&lt;/h2&gt;

&lt;p&gt;In August 2022, LastPass disclosed that an attacker had stolen a copy of their source code. In November, they disclosed that the attacker had also stolen an encrypted backup of customer vault data. The full picture emerged over the following months.&lt;/p&gt;

&lt;p&gt;The architectural problem the breach exposed wasn't that LastPass's zero-knowledge claim was false — the vault data was encrypted. The problems were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low PBKDF2 iteration counts&lt;/strong&gt; — LastPass had defaulted to 5,000 PBKDF2 iterations for many legacy accounts. Current recommendations are 600,000+ iterations. Low iteration counts make brute-force attacks orders of magnitude cheaper. Accounts with weak master passwords were crackable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;URL metadata was unencrypted&lt;/strong&gt; — The URLs of sites in your vault were stored in plaintext in the LastPass vault format. An attacker who stole the encrypted vault still knew which sites you had accounts on. For many users, this is sensitive information regardless of whether the passwords themselves were crackable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Breach notification was slow and unclear&lt;/strong&gt; — The full scope of what was stolen wasn't communicated clearly until months after the initial breach notice, leaving users without accurate information to assess their exposure.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;The LastPass breach didn't break zero-knowledge encryption. It revealed that the implementation details — iteration counts, what metadata is encrypted vs. plaintext — matter enormously, and that users can't rely on marketing claims alone.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What Good Vault Design Looks Like
&lt;/h2&gt;

&lt;p&gt;Across well-regarded password managers, a few design properties separate the stronger implementations from the weaker ones:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Why It Matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;High KDF iteration count&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;PBKDF2-SHA256 at 600,000+ iterations, or Argon2id with appropriate memory/time parameters. Slows brute-force of captured vaults.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;All vault fields encrypted&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;URLs, notes, usernames, and passwords should all be encrypted. Plaintext URLs expose your account inventory even when passwords are safe.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Open-source client&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Zero-knowledge claims are verifiable only if the client code is auditable. Proprietary clients require trust; open-source clients can be inspected.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Independent security audits&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Published audit reports from reputable firms, on a regular cadence, with findings disclosed — not just "we've been audited."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Self-hosting option&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;If the vendor is breached, a self-hosted vault is not affected. Options like Bitwarden and Vaultwarden support this.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Cloud vs. Local vs. Self-Hosted
&lt;/h2&gt;

&lt;p&gt;Password managers fall into three broad storage models, each with different trade-offs:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cloud-synced&lt;/strong&gt; (1Password, Bitwarden, Dashlane): Your encrypted vault lives on the vendor's servers and syncs across your devices automatically. Convenience is highest. Security depends on the vendor's architecture and your master password strength. If the vendor's servers are breached, your encrypted vault is exposed to offline cracking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local-only&lt;/strong&gt; (KeePassXC): Your vault is a single encrypted file you store wherever you want. No vendor holds your data. You're responsible for backup and sync. KeePassXC is open-source, well-audited, and has been in continuous development since 2016. The trade-off is operational complexity — setting up sync via Syncthing, iCloud, or a network share, and maintaining your own backups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-hosted&lt;/strong&gt; (Vaultwarden): You run a Bitwarden-compatible server on your own infrastructure. You get cloud-sync convenience with a server you control. You're responsible for keeping the server secured, updated, and backed up. This is the right choice for technically capable users who want the Bitwarden UX without the Bitwarden vendor dependency.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Master Password Is Everything
&lt;/h2&gt;

&lt;p&gt;All the architectural sophistication in the world doesn't matter if your master password is "Password1!" or your spouse's name and birthdate. The entire security model — for every cloud-synced password manager — collapses to the quality of the master password, because it's the only thing between an attacker and your vault key.&lt;/p&gt;

&lt;p&gt;A good master password is long and random. Four or five random common words (the "diceware" approach) generates passwords that are memorable and extremely resistant to brute-force: the combination space for a 5-word phrase from a 7,776-word list is 7776^5 ≈ 2.8 × 10^19. At a billion guesses per second, that takes about 900 years to exhaust — and the KDF's iteration count multiplies that further, since every guess costs hundreds of thousands of hash evaluations.&lt;/p&gt;
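
&lt;p&gt;The arithmetic is quick to check:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# re-deriving the numbers above: a 5-word phrase from the standard
# 7,776-word diceware list, against a billion guesses per second
space = 7776 ** 5                     # 28430288029929701376, about 2.8e19
seconds = space / 1e9
print(seconds / (3600 * 24 * 365))    # roughly 900 years
&lt;/code&gt;&lt;/pre&gt;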

&lt;p&gt;Pairing a strong master password with a strong second factor is the baseline for anyone keeping sensitive accounts in a cloud-synced vault. TOTP or a hardware key as the second factor means an attacker with only your vault file can't log in to steal a fresh copy, even if they crack the master password offline.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Passkeys Change the Picture
&lt;/h2&gt;

&lt;p&gt;Passkeys are beginning to replace passwords for many high-value accounts. As passkey adoption grows, the role of the password manager shifts: instead of storing passwords, it stores private keys for FIDO2 assertions. This doesn't eliminate the need for a password manager — it changes what it holds. The vault encryption question remains equally important.&lt;/p&gt;

&lt;p&gt;Most major password managers are adding passkey storage. The architecture is the same: the private key material lives in your encrypted vault, whose key is derived from your master password. The security properties transfer directly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/password-manager-security/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>Supply Chain Attacks: When Your Privacy Tool Gets Compromised</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Mon, 04 May 2026 08:21:21 +0000</pubDate>
      <link>https://dev.to/havenmessenger/supply-chain-attacks-when-your-privacy-tool-gets-compromised-28lg</link>
      <guid>https://dev.to/havenmessenger/supply-chain-attacks-when-your-privacy-tool-gets-compromised-28lg</guid>
      <description>&lt;p&gt;On March 29, 2024, Andres Freund — a Microsoft engineer and PostgreSQL contributor — noticed something odd while investigating unexplained CPU usage in SSH on a Debian testing build. liblzma, the compression library bundled with XZ Utils, was performing extra work it had no business doing. After careful analysis, Freund had found one of the most sophisticated software supply chain attacks ever discovered in the open-source ecosystem.&lt;/p&gt;

&lt;p&gt;The attacker, operating under the pseudonym "Jia Tan," had spent roughly two years earning maintainer trust on the XZ Utils project. They submitted legitimate bug fixes, took on maintenance duties, and eventually introduced a carefully hidden backdoor — CVE-2024-3094 — into the build system scripts. The payload hooked sshd's RSA public-key decryption routine, giving an attacker holding a specific private key remote code execution on systems where sshd had been linked against the compromised liblzma via systemd.&lt;/p&gt;

&lt;p&gt;The only reason this attack was caught before widespread deployment is that Freund was running a pre-release Debian version and noticed the CPU anomaly. It was extraordinarily close.&lt;/p&gt;

&lt;h2&gt;
  
  
  What "Supply Chain Attack" Actually Means
&lt;/h2&gt;

&lt;p&gt;A supply chain attack targets not the software you are using directly, but something that software depends on — the chain between source code and the binary running on your machine. There are several distinct categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Maintainer compromise:&lt;/strong&gt; An attacker gains commit access to an upstream project via social engineering (as in XZ) or by compromising a maintainer account directly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Package registry injection:&lt;/strong&gt; Malicious packages published to npm, PyPI, or RubyGems via typosquatting, or legitimate package names hijacked when maintainers abandon them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build system compromise:&lt;/strong&gt; The infrastructure that builds and signs releases is compromised, allowing injection of malicious code into legitimate source. SolarWinds SUNBURST worked this way — the Orion build pipeline was compromised and signed malicious updates distributed to customers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDN or distribution compromise:&lt;/strong&gt; The Polyfill.io incident in 2024 — a widely-used CDN serving JavaScript polyfills was acquired and began injecting malicious code into responses for certain users.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Reading source code on GitHub tells you what the developers intended to ship. It does not tell you what is actually running on your machine. The gap between those two things is where supply chain attacks live.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Privacy Tools Are Particularly Valuable Targets
&lt;/h2&gt;

&lt;p&gt;Compromising a video game is embarrassing. Compromising Signal, a PGP key management tool, or a VPN client is strategically valuable. Privacy software is designed to protect high-value targets — dissidents, journalists, lawyers, activists — which makes it a priority for sophisticated adversaries including nation-state actors.&lt;/p&gt;

&lt;p&gt;A modern encryption application depends on cryptographic libraries (OpenSSL, libsodium), compression libraries (zlib, liblzma), networking stacks, build tools, package managers and their dependency trees, and CI/CD systems that build and sign releases. Each of these is a potential entry point.&lt;/p&gt;

&lt;p&gt;The 2018 event-stream npm incident illustrated this: a popular Node.js library was transferred to a new maintainer who injected code targeting the Copay Bitcoin wallet — a targeted attack buried in a dependency chain most users never examine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reproducible Builds: The Strongest Available Mitigation
&lt;/h2&gt;

&lt;p&gt;Reproducible builds make the build process deterministic: given the same source code, build environment, and tools, the output binary is bit-for-bit identical no matter who runs it or when. This allows independent parties to verify that what is being distributed actually matches the published source.&lt;/p&gt;

&lt;p&gt;Tor Browser and Debian have invested heavily in reproducible builds, and Signal publishes reproducible-build instructions for its Android client. When reproducible builds are in place, a compromised build server cannot silently alter what gets distributed — an independent rebuild would produce a different hash, and the discrepancy would be detectable.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reproducible builds do not prevent supply chain attacks. They make them detectable — which is a meaningful difference.&lt;br&gt;
— Reproducible Builds project documentation&lt;/p&gt;
&lt;/blockquote&gt;
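
&lt;p&gt;The verification step itself is mundane. Here is a minimal sketch, assuming you have both the published binary and your own rebuild on disk (the filenames are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
import sys

def sha256_of(path: str) -&gt; str:
    """Stream a file through SHA-256 so large artifacts stay out of RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(2 ** 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the official release against an independent rebuild.
official = sha256_of("app-official-release.bin")     # hypothetical filename
rebuilt = sha256_of("app-rebuilt-from-source.bin")   # hypothetical filename

if official == rebuilt:
    print("OK: rebuild is bit-for-bit identical to the published binary")
else:
    print("MISMATCH: do not trust the published binary until explained")
    sys.exit(1)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The comparison is the easy part. The hard engineering is upstream: making the build deterministic enough that two honest rebuilds produce the same bytes at all.&lt;/p&gt;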

&lt;h2&gt;
  
  
  Code Signing, Sigstore, and the Package Verification Landscape
&lt;/h2&gt;

&lt;p&gt;Code signing is a baseline protection against tampered distribution packages. What it cannot tell you is whether the maintainer who signed the release had already been compromised.&lt;/p&gt;

&lt;p&gt;Sigstore, supported by the OpenSSF with contributions from Google and Red Hat, uses short-lived certificates tied to an identity provider and appends signatures to a public, append-only transparency log (Rekor). This makes it possible to audit the entire history of what was signed, by whom, and when. The npm ecosystem began integrating Sigstore-based provenance attestations in 2023. PyPI has similar initiatives in progress.&lt;/p&gt;
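
&lt;p&gt;Underneath both the traditional and Sigstore models sits the same primitive: a signature over the release bytes that anyone holding the public key can check. A toy sketch with Ed25519 via the &lt;code&gt;cryptography&lt;/code&gt; package, illustrative only and not Sigstore's actual client flow:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At release time, the maintainer signs the artifact bytes once.
signing_key = Ed25519PrivateKey.generate()
artifact = b"pretend these are the release bytes"
signature = signing_key.sign(artifact)

# Later, anyone holding the public key can verify the artifact.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, artifact)
    print("valid: artifact matches what the key holder signed")
except InvalidSignature:
    print("INVALID: artifact was altered after signing")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note what the success branch actually proves: the bytes match what the key holder signed. It says nothing about whether the key holder, or the machine they signed on, was trustworthy at the time.&lt;/p&gt;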

&lt;h2&gt;
  
  
  What Users Can Realistically Do
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Action&lt;/th&gt;
&lt;th&gt;Protects Against&lt;/th&gt;
&lt;th&gt;Difficulty&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Verify release signatures&lt;/td&gt;
&lt;td&gt;Tampered distribution packages&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pin dependency versions&lt;/td&gt;
&lt;td&gt;Automatic ingestion of malicious updates&lt;/td&gt;
&lt;td&gt;Low–Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Use software with reproducible builds&lt;/td&gt;
&lt;td&gt;Compromised build infrastructure&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Prefer software with fewer dependencies&lt;/td&gt;
&lt;td&gt;Compromise anywhere in a sprawling dependency tree&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
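
&lt;p&gt;The dependency-pinning row is worth making concrete. Pinning means recording not just a version but the exact digest you reviewed, in the spirit of pip's &lt;code&gt;--require-hashes&lt;/code&gt; mode. A sketch, with a placeholder package name and digest:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib

# Hypothetical lockfile entry: the digest recorded when the dependency
# was last reviewed. Real tools (pip, npm, cargo) manage this for you.
PINNED = {
    "example-pkg-1.3.0.tgz":
        "sha256:0000000000000000000000000000000000000000000000000000000000000000",
}

def verify_pin(filename: str, path: str) -&gt; bool:
    """Refuse a dependency whose bytes drifted from the reviewed digest."""
    algo, _, expected = PINNED[filename].partition(":")
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        h.update(f.read())
    return h.hexdigest() == expected
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;An update that silently swaps the bytes behind a pinned name fails this check before any of its code runs.&lt;/p&gt;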

&lt;h2&gt;
  
  
  The Human Element Is the Hardest Problem
&lt;/h2&gt;

&lt;p&gt;The XZ attack worked because it exploited the social dynamics of open-source maintenance. The attacker did not break any cryptography. They built trust over two years, identified a maintainer under stress, and patiently waited to introduce malicious code that looked like a routine build improvement.&lt;/p&gt;

&lt;p&gt;Open-source projects run on the goodwill of unpaid, underfunded, chronically time-poor maintainers. This is not a technical problem with a technical solution. Some projects are responding with stricter controls on build script modifications, multi-person approval requirements for security-critical code paths, and formal threat modeling that treats maintainer accounts as a potential attack surface.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Evaluating Privacy Software
&lt;/h2&gt;

&lt;p&gt;When choosing a privacy-focused application, the questions worth asking go beyond "is the protocol sound?" to include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the project publish signed releases? Do you verify them?&lt;/li&gt;
&lt;li&gt;Are builds reproducible and independently verified?&lt;/li&gt;
&lt;li&gt;How many people have commit access to the main codebase and its critical dependencies?&lt;/li&gt;
&lt;li&gt;Is the dependency tree auditable and pinned?&lt;/li&gt;
&lt;li&gt;Is the build infrastructure itself protected?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open-source software is still meaningfully more auditable than proprietary software. But "the code is public" does not mean "the code you are running has been audited." Those are different claims, and the gap between them is exactly where supply chain attacks operate.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/supply-chain-attacks-privacy-software/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>opensource</category>
      <category>cryptography</category>
    </item>
    <item>
      <title>How Group Encrypted Messaging Actually Works</title>
      <dc:creator>Haven Messenger</dc:creator>
      <pubDate>Sun, 03 May 2026 08:18:02 +0000</pubDate>
      <link>https://dev.to/havenmessenger/how-group-encrypted-messaging-actually-works-3ao7</link>
      <guid>https://dev.to/havenmessenger/how-group-encrypted-messaging-actually-works-3ao7</guid>
      <description>&lt;p&gt;Adding a third person to an encrypted conversation seems like it should be simple. It isn't. The cryptographic properties that make 1:1 messaging secure — forward secrecy, post-compromise security, deniability — become significantly harder to preserve as group size grows.&lt;/p&gt;

&lt;p&gt;When Signal introduced group chats, they faced a problem that doesn't exist in 1:1 messaging: how do you efficiently encrypt a single message for many recipients while preserving strong security guarantees? The naive answer — encrypt the message separately for each recipient — works but scales poorly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Naive Approach: Pairwise Encryption
&lt;/h2&gt;

&lt;p&gt;The simplest group messaging implementation is pairwise encryption: the sender establishes a separate secure channel with each group member and sends the message individually to each one. This provides full forward secrecy because each channel uses the Signal Protocol (Double Ratchet), which rotates keys on every message.&lt;/p&gt;

&lt;p&gt;Signal actually used this approach for groups in its earlier implementations. The security properties are strong — each pairwise channel is as secure as a 1:1 conversation. But the cost is linear scaling: message overhead grows proportionally with group size. For a group of 100 people, sending one message requires 100 individual encryptions, 100 individual network deliveries, and 100 individual ratchet advances.&lt;/p&gt;
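
&lt;p&gt;The shape of the pairwise approach fits in a few lines. This sketch stands in one static AES-GCM key per member where the real protocol would advance a full Double Ratchet per channel:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# One established symmetric key per member (toy stand-in for a ratchet).
members = {name: AESGCM.generate_key(bit_length=256)
           for name in ["alice", "bob", "carol"]}

def send_to_group(plaintext: bytes) -&gt; dict:
    ciphertexts = {}
    for name, key in members.items():   # one encryption per member: O(N)
        nonce = os.urandom(12)
        ciphertexts[name] = (nonce, AESGCM(key).encrypt(nonce, plaintext, None))
    return ciphertexts

envelopes = send_to_group(b"hello group")   # 3 members, 3 ciphertexts
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every member added to the dict means one more encryption and one more delivery per message, which is exactly the scaling problem Sender Keys address.&lt;/p&gt;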

&lt;h2&gt;
  
  
  Sender Keys: Signal's Group Optimization
&lt;/h2&gt;

&lt;p&gt;Signal's current group implementation uses Sender Keys to solve the scaling problem. Each group member generates a Sender Key — a chain key used to derive a sequence of encryption keys for that member's outgoing messages. The Sender Key is distributed to each group member individually, but after that initial distribution, the sender encrypts each message once with the current key from their chain.&lt;/p&gt;

&lt;p&gt;This reduces message overhead from O(N) to O(1) for subsequent messages. The tradeoff is in what happens after a compromise: the hash chain still provides forward secrecy, since old keys cannot be recovered from the current chain key, but it lacks the full Double Ratchet's break-in recovery. If a member's current state is compromised, the attacker can decrypt that member's subsequent messages until the sender key is reset.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Key insight&lt;/strong&gt;: Sender Keys provide forward secrecy but weaker post-compromise security. This is an explicit tradeoff for group scalability.&lt;/p&gt;
&lt;/blockquote&gt;
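
&lt;p&gt;The chain itself is only a few lines of HMAC. A simplified sketch follows; the 0x01/0x02 labels mirror Signal's published design, but real implementations derive more material per step:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
import hmac

def kdf(chain_key: bytes, label: bytes) -&gt; bytes:
    return hmac.new(chain_key, label, hashlib.sha256).digest()

chain_key = b"\x00" * 32                    # placeholder initial sender key
for _ in range(3):
    message_key = kdf(chain_key, b"\x01")   # encrypts this message
    chain_key = kdf(chain_key, b"\x02")     # old chain key is discarded

# Walking the chain forward is trivial; walking it backward would mean
# inverting HMAC-SHA256. That one-way step is the forward secrecy.
&lt;/code&gt;&lt;/pre&gt;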

&lt;h2&gt;
  
  
  The Member Addition and Removal Problem
&lt;/h2&gt;

&lt;p&gt;The hardest problem in secure group messaging isn't encrypting messages — it's managing membership changes securely.&lt;/p&gt;

&lt;p&gt;When a new member joins a group, they should not be able to decrypt messages sent before they joined — a property called &lt;strong&gt;forward secrecy for new members&lt;/strong&gt;. When a member is removed, they should not be able to decrypt future messages — called &lt;strong&gt;post-remove secrecy&lt;/strong&gt;. In a Sender Keys design, achieving the latter means rotating every sender key after the removal, which for a group of 1,000 members means 999 individual key distributions.&lt;/p&gt;

&lt;h2&gt;
  
  
  MLS: A Cryptographically Sound Group Protocol
&lt;/h2&gt;

&lt;p&gt;Messaging Layer Security (MLS, RFC 9420) was designed from first principles to solve group messaging security at scale. Rather than Sender Keys' O(N) key distribution on membership change, MLS uses a binary tree structure where group members occupy the leaves and internal nodes hold intermediate keys.&lt;/p&gt;

&lt;p&gt;This "ratchet tree" allows key updates to propagate efficiently. When a member updates their key material (a "commit"), only the nodes on the path from their leaf to the root need updating. In a balanced tree with N members, that's O(log N) operations rather than O(N) — roughly 10 operations for 1,000 members instead of 999.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Protocol&lt;/th&gt;
&lt;th&gt;Message Cost&lt;/th&gt;
&lt;th&gt;Membership Change Cost&lt;/th&gt;
&lt;th&gt;Post-Compromise Security&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Pairwise (Signal v1)&lt;/td&gt;
&lt;td&gt;O(N)&lt;/td&gt;
&lt;td&gt;O(N) remove&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sender Keys (Signal current)&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;td&gt;O(N) re-distribute&lt;/td&gt;
&lt;td&gt;Weaker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MLS (RFC 9420)&lt;/td&gt;
&lt;td&gt;O(1)&lt;/td&gt;
&lt;td&gt;O(log N) tree update&lt;/td&gt;
&lt;td&gt;Strong&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Epochs and the Concurrent Commit Problem
&lt;/h2&gt;

&lt;p&gt;MLS introduces the concept of an &lt;em&gt;epoch&lt;/em&gt; — a version of the group state. Every time the group membership changes or key material is updated, the group advances to a new epoch. Messages are encrypted under the epoch key valid at the time of sending.&lt;/p&gt;

&lt;p&gt;This provides a clean boundary for forward secrecy, allows recovery from device compromise by rotating to a new epoch, and creates a cryptographic audit trail of group state changes.&lt;/p&gt;
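
&lt;p&gt;A heavily simplified sketch of the epoch chain follows. MLS's actual key schedule in RFC 9420 has more stages and labeled derivations; this shows only the shape:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import hashlib
import hmac

def kdf(secret: bytes, context: bytes) -&gt; bytes:
    return hmac.new(secret, context, hashlib.sha256).digest()

epoch_secret = b"\x00" * 32              # placeholder secret for epoch n
commit_secret = b"fresh tree entropy"    # produced by the commit's path update
epoch_secret = kdf(epoch_secret, commit_secret)   # now epoch n+1

# Old epoch keys cannot be derived from the new secret (forward secrecy),
# and an attacker who stole epoch n's secret is locked out at n+1 unless
# they also learned the fresh commit secret (post-compromise security).
&lt;/code&gt;&lt;/pre&gt;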

&lt;p&gt;The challenge: in a distributed system, two members might independently try to advance the epoch simultaneously — a "concurrent commit." MLS has a defined resolution mechanism, but it adds complexity that Sender Key approaches avoid.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Applications Are Deploying
&lt;/h2&gt;

&lt;p&gt;As of 2026, MLS adoption is growing: WhatsApp has announced MLS adoption, Apple Messages has begun using MLS for group chats, and Matrix has an MLS implementation in development.&lt;/p&gt;

&lt;p&gt;Haven uses MLS for encrypted group chat, specifically for the O(log N) membership change overhead and strong post-compromise security guarantees. Signal continues using Sender Keys for large groups — both positions are defensible depending on group size distributions and adversary models.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Users
&lt;/h2&gt;

&lt;p&gt;Questions worth asking about any secure group messaging app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does removing a member prevent them from reading future messages?&lt;/li&gt;
&lt;li&gt;Can a new member read history from before they joined?&lt;/li&gt;
&lt;li&gt;Is the group protocol documented and independently audited?&lt;/li&gt;
&lt;li&gt;What happens if one member's device is compromised?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Forward secrecy in 1:1 contexts is well-understood; in group contexts it requires explicit design choices that vary significantly between implementations.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://havenmessenger.com/blog/posts/secure-group-chat-protocols/" rel="noopener noreferrer"&gt;havenmessenger.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>privacy</category>
      <category>encryption</category>
      <category>cryptography</category>
    </item>
  </channel>
</rss>
