
Haven Messenger

Posted on • Originally published at havenmessenger.com

The PGP Web of Trust: Why Key Verification Is Harder Than It Looks

OpenPGP's web of trust was one of the most ambitious ideas in the history of cryptography: a decentralized system where ordinary users could vouch for each other's keys without any central authority. Phil Zimmermann built it into PGP in the early 1990s, and it mostly didn't work. Understanding why gets at something fundamental about trust, coordination, and the gap between elegant cryptography and real human behavior.

The key verification problem is this: when you receive a message claiming to be from someone, how do you know the public key you used to verify it actually belongs to them, rather than to an attacker who generated a key with their name on it?

This isn't a theoretical concern. Key substitution attacks — where an attacker convinces a target to use the attacker's public key instead of the intended recipient's — have been demonstrated against PGP users who didn't verify key fingerprints. The encryption is working perfectly; the problem is that it's encrypting to the wrong person.

What the Web of Trust Does

PGP's solution was to make key authentication a social problem rather than a hierarchical one. If Alice wants to confirm that a key really belongs to Bob, she can look for a signature chain: does anyone Alice already trusts vouch that this key is Bob's? If Charlie, whom Alice trusts completely, has signed Bob's key after verifying his identity out-of-band, Alice can extend that trust to Bob's key.

The mechanism works like this: each PGP user has a key pair. When you verify someone's identity — typically by meeting in person, checking their photo ID, and comparing the key fingerprint — you sign their public key with your private key. That signature becomes part of their key's public record on key servers. Others who trust you can then trust that key transitively.
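The fingerprint comparison step can be sketched in a few lines. This is an illustrative toy, not the real OpenPGP format: v4 fingerprints are actually a SHA-1 hash over a specific key-packet encoding, while this sketch just hashes raw bytes and groups the digits the way fingerprints are typically read aloud.

```python
import hashlib
import hmac

def fingerprint(public_key_bytes: bytes) -> str:
    """Toy fingerprint: hash the key material and group the hex
    digits for easy out-of-band comparison. (Real OpenPGP v4
    fingerprints hash a specific packet encoding, not raw bytes.)"""
    digest = hashlib.sha256(public_key_bytes).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

def fingerprints_match(mine: str, theirs: str) -> bool:
    # Constant-time comparison; mostly relevant when the check is
    # automated rather than read aloud between two people.
    return hmac.compare_digest(mine.replace(" ", ""),
                               theirs.replace(" ", ""))
```

The point of grouping the digits is human usability: two people on a phone call can read four characters at a time and catch a substituted key, because an attacker cannot produce a key whose fingerprint matches the victim's.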

PGP defines three levels of trust you can assign to key owners:

  • Ultimate trust: Your own key. Keys you sign are trusted completely.
  • Full trust: You trust this person to carefully verify others' identities before signing their keys.
  • Marginal trust: You somewhat trust their key signing. Typically, three marginally trusted signatures are required to establish a key as valid in your keyring.

The math works. Given a sufficiently dense signature graph, you can establish a path of verified trust to almost any key without needing a central authority. This was a genuine cryptographic insight in 1991, and it influenced how the broader field thought about decentralized trust for years.
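The validity rule above can be sketched as a toy model. This assumes GnuPG's classic defaults (one fully trusted signature, or three marginal ones, makes a key valid); the names and data structures are hypothetical.

```python
# Toy model of PGP-style key validity: a key is valid if it carries
# a signature from one fully trusted key, or from at least three
# marginally trusted keys (GnuPG's classic defaults).

FULL, MARGINAL = "full", "marginal"

def key_is_valid(key, signatures, trust):
    """signatures: key -> set of signer keys; trust: signer -> level."""
    signers = signatures.get(key, set())
    fulls = sum(1 for s in signers if trust.get(s) == FULL)
    marginals = sum(1 for s in signers if trust.get(s) == MARGINAL)
    return fulls >= 1 or marginals >= 3
```

So Bob's key is valid to Alice if Charlie (full trust) signed it, while a key signed by only two marginally trusted people is not — which is exactly why the scheme needs a dense signature graph to work.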

Why the Web of Trust Mostly Didn't Work

The web of trust requires two things from ordinary users that ordinary users reliably don't do: carefully verify identities before signing keys, and actively maintain their participation in the signing network.

Key signing parties — formal events where participants exchange key fingerprints and sign each other's keys — were the theoretical backbone of building the web. In practice, they were attended almost exclusively by cryptography enthusiasts and open-source developers. Most people who wanted to use encrypted email never attended one, never signed anyone's key, and never got their own key signed by anyone in their contact list.

The result was a web of trust that was dense in a small community of technical users and essentially nonexistent for everyone else. A journalist trying to receive tips via encrypted email, a lawyer communicating with a client, a family member trying to exchange private messages — none of them had paths of trust to each other's keys.

Web of trust failures aren't cryptographic failures. The algorithm is sound. The failure is that building trust graphs requires ongoing human coordination that doesn't happen organically at scale. Cryptographic systems that require non-cryptographers to perform careful manual verification steps will be skipped in practice.

There were also security problems within the community that did participate. In 2019, SKS keyserver operators documented a "certificate spamming" attack in which malicious actors flooded public key servers with enormous numbers of signatures on the keys of prominent developers, causing GnuPG to crash or become extremely slow when trying to process those keys. The public key infrastructure had no mechanism to prevent this. The SKS keyserver network was essentially deprecated as a result, replaced by Hagrid (keys.openpgp.org), which strips third-party signatures by default.

The CA Alternative: Different Trade-offs, Same Fundamental Problem

The HTTPS ecosystem solved key verification differently: with Certificate Authorities (CAs). A CA is an organization that vouches for domain ownership by issuing signed certificates. Your browser ships with a list of trusted CAs; when you connect to a website, the server presents a certificate signed by one of those CAs, and your browser trusts it.

This model scales beautifully in practice — the CA infrastructure is why HTTPS deployment has become nearly universal. But it centralizes trust in the CA list, and the CA list is controlled by browser vendors and operating system developers. Rogue CA behavior has happened: in 2011, DigiNotar was compromised and issued fraudulent certificates for dozens of domains including Google. The entire CA was removed from trust lists and subsequently went bankrupt.

| Model | Scales to Regular Users | Central Failure Point | User Verification Required |
| --- | --- | --- | --- |
| PGP Web of Trust | No | None | Manual, out-of-band |
| CA / PKI (HTTPS) | Yes | CA compromise | None (automatic) |
| TOFU | Yes | None | Detect changes only |
| Key Transparency | Emerging | Log server | Automated (auditors) |

Trust on First Use: The Pragmatic Middle Ground

SSH made a different choice early on. When you connect to a new SSH server, the client stores the server's public key fingerprint. On subsequent connections, it checks that the key matches. If it doesn't, you get a loud warning: something changed, and you need to investigate.

This is Trust on First Use (TOFU). It doesn't establish identity at first contact — it just remembers what you saw and detects if it changes. For most use cases, it's enough. An attacker trying to impersonate the server you've been connecting to for six months needs to have been in position during your very first connection. If they weren't, subsequent interception attempts trigger a warning.
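The whole TOFU mechanism fits in a few lines. This is a sketch in the style of SSH's `known_hosts`, assuming a hypothetical JSON file as the pinned-key store:

```python
import json
import pathlib

def tofu_check(store_path: str, host: str, fingerprint: str) -> str:
    """Returns 'first-use', 'ok', or 'MISMATCH'. A sketch of
    SSH-style trust-on-first-use with a JSON pin store."""
    path = pathlib.Path(store_path)
    pinned = json.loads(path.read_text()) if path.exists() else {}
    if host not in pinned:
        pinned[host] = fingerprint           # first contact: pin it
        path.write_text(json.dumps(pinned))
        return "first-use"
    if pinned[host] == fingerprint:
        return "ok"                          # matches what we saw before
    return "MISMATCH"                        # key changed: investigate!
```

Note what the "first-use" branch does *not* do: it makes no claim about who the key belongs to. All the security lives in the "MISMATCH" branch, which is why the warning has to be loud.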

Signal uses a variant of TOFU. When you first communicate with a contact, their key is trusted. If it changes later — because they got a new phone, or because someone is attempting a man-in-the-middle attack — the app displays a "safety number changed" warning and allows you to verify the new number out-of-band.

Key Transparency: Where This Is Going

The current state of the art for scalable key verification is key transparency — an approach that uses cryptographically verifiable append-only logs to make a service's claimed key bindings auditable. The idea: instead of trusting that a service is giving you the correct key for a contact, you can verify that it has given everyone the same key.

If a service shows user A one key for user B and shows user C a different key for user B, that inconsistency can be detected by auditors who compare their views of the log. An attacker or compromised server that substitutes keys is detectable after the fact. Google's Key Transparency project and WhatsApp's auditable key directory are examples of this approach in practice.
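The detection property can be illustrated with a toy append-only directory. Real systems use Merkle trees so auditors can check consistency efficiently; this sketch substitutes a simple hash chain, which has the same core property — each head commits to the entire history of (user, key) bindings, so two auditors who see the same head must have seen the same bindings.

```python
import hashlib

def append(head: str, user: str, key: str) -> str:
    """Append a (user, key) binding to a hash-chained log and
    return the new head. Toy stand-in for a Merkle-tree log."""
    return hashlib.sha256(f"{head}|{user}|{key}".encode()).hexdigest()

# Two auditors watching an honest server converge on the same head.
log_a = append("genesis", "bob", "key1")
log_b = append("genesis", "bob", "key1")
assert log_a == log_b

# A server equivocating -- showing someone a different key for Bob --
# necessarily produces a different head, which comparison exposes.
log_c = append("genesis", "bob", "key2")
assert log_c != log_a
```

The hash chain forces the trade-off the article describes: the server can still lie, but it cannot lie *consistently* to everyone, so the lie becomes evidence.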

Key transparency doesn't eliminate the problem of establishing who controls a given key in the first place — it just makes key substitution attacks visible. But combined with TOFU semantics, it substantially raises the bar for what an attacker needs to do to intercept encrypted communication undetected.

What Good Key Verification Looks Like Today

For most encrypted messaging use cases, the practical answer to key verification is a layered combination:

  1. TOFU as baseline: Trust the first key you see, and alert loudly if it changes.
  2. Out-of-band verification for high-stakes contacts: For contacts where impersonation would be catastrophic, verify safety numbers or key fingerprints in person or via a second communication channel.
  3. Key transparency for service-level accountability: Where the messaging provider operates a transparent key directory, auditors can detect systematic substitution attacks.
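The layered policy above can be expressed as a small decision function. The inputs and return values here are hypothetical — real messengers encode this logic in their UX rather than in a single function — but it shows how the layers compose:

```python
def message_key_policy(tofu_status: str, verified_oob: bool,
                       high_stakes: bool) -> str:
    """Combine TOFU state with out-of-band verification status.
    tofu_status is one of 'first-use', 'ok', 'mismatch' (hypothetical
    labels); verified_oob means fingerprints were compared manually."""
    if tofu_status == "mismatch" and not verified_oob:
        return "block-and-warn"       # key changed and not re-verified
    if high_stakes and not verified_oob:
        return "prompt-verification"  # ask the user to compare fingerprints
    return "allow"                    # baseline: trust the pinned key
```

The asymmetry is deliberate: ordinary contacts get TOFU's low-friction default, while a key change or a high-stakes contact escalates to the expensive manual step — reserving human effort for the cases where it actually matters.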

The web of trust was a beautiful attempt to solve this with pure cryptography and social coordination. Its failure wasn't a failure of the math — it was a lesson in the limits of expecting security-critical manual steps from people who have more pressing things to do. The systems that have actually deployed at scale are the ones that require the least from users while still catching the most dangerous attack patterns.

PGP's web of trust still exists and still works within the communities that use it. For email encryption between technically engaged users who've met at key signing parties and carefully maintained their keyrings, it provides real, verifiable trust. That's a niche — but it's an honest one, and it's worth understanding what it actually delivers and what it doesn't.

