Here's a URL:
https://anon.li/d/abc123#U2FsdGVkX1...
The thing after the # is an AES-256 encryption key. The server hosting the file behind abc123 cannot see it, cannot log it, and cannot reproduce it from anything else it stores. If the server gets owned tomorrow, the attacker walks away with encrypted blobs and nothing to decrypt them with.
This isn't marketing copy. It's a property of HTTP that has been there since 1996 and that almost nobody uses for what it's good at. Let's pull on it.
The HTTP fragment is special
When your browser fetches https://example.com/page?foo=bar#section, here's what's actually sent over the wire:
```
GET /page?foo=bar HTTP/1.1
Host: example.com
```
The #section part — the fragment identifier — never appears in the request line, never appears in headers, never reaches the origin server. RFC 3986 defines it as client-side only: the browser uses it to scroll to anchors, the JavaScript runtime can read it via location.hash, but it stops at the network boundary.
This is non-negotiable browser behavior. It's not a feature you opt into. It's not a CDN setting. Every conformant HTTP client in existence treats it the same way, because if it didn't, every "back to top" anchor would generate server log noise.
So: you have a side channel that travels with your URL but doesn't reach your server. What can you do with that?
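You can see the split in the URL API itself. A quick sketch (runs in Node or a browser console; the example.com URL is illustrative):

```javascript
// The WHATWG URL parser separates the fragment from everything else.
const url = new URL('https://example.com/page?foo=bar#section');

// What goes over the wire: path + query only.
console.log(url.pathname + url.search); // '/page?foo=bar'

// What stays in the client: the fragment.
console.log(url.hash); // '#section'
// In a browser, fetch(url) strips the fragment before the request is sent.
```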
Stick a key in it
The trick that products like anon.li's Drop, Firefox Send (RIP), and a handful of others use is roughly:
- Generate an AES-256 key in the browser.
- Encrypt the file in the browser.
- Upload only the ciphertext.
- Build a share URL where the path is the file's ID and the fragment is the base64 of the key.
- Hand that URL to the recipient.
When the recipient clicks the link, their browser fetches /d/abc123 (the server returns ciphertext + metadata), but the #U2FsdGVkX1... part stays in their browser. Client-side JavaScript reads location.hash, decrypts, and renders.
The server never has the key. Not "promises not to look at the key." Cryptographically cannot have the key. That's the difference between zero-knowledge and just "we encrypt at rest, trust us."
```javascript
// Sender side
const key = await crypto.subtle.generateKey(
  { name: 'AES-GCM', length: 256 },
  true,
  ['encrypt', 'decrypt']
);
const rawKey = await crypto.subtle.exportKey('raw', key);
const keyB64 = btoa(String.fromCharCode(...new Uint8Array(rawKey)))
  .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
// ...encrypt and upload ciphertext...
const shareUrl = `https://anon.li/d/${dropId}#${keyB64}`;
// ↑ Anything after the # never touches the server.
```
```javascript
// Recipient side
const keyB64 = location.hash.slice(1);
const rawKey = Uint8Array.from(
  atob(keyB64.replace(/-/g, '+').replace(/_/g, '/')),
  c => c.charCodeAt(0)
);
const key = await crypto.subtle.importKey(
  'raw', rawKey, 'AES-GCM', false, ['decrypt']
);
// fetch ciphertext, decrypt with key
```
The IV problem (and a clean solution)
Once you decide to chunk large files — which you must, because nobody wants to load a 5 GB ArrayBuffer into RAM to encrypt it — you hit a practical question: how do you give every chunk a unique IV?
AES-GCM is brutally unforgiving about IV reuse with the same key. Reuse one IV across two encryptions and you've leaked enough material to recover plaintext XORs and forge messages. Don't reuse IVs.
The naive approach is "generate 12 random bytes per chunk." That works, but now you have to store every IV server-side, increasing metadata size and adding bookkeeping. Worse, you have to be sure your RNG is good for every single chunk.
The pattern Drop uses is cleaner: derive chunk IVs deterministically from one base IV plus the chunk index.
```javascript
// 12-byte IV: first 8 bytes from base IV, last 4 = chunk index (big-endian u32)
function deriveChunkIv(baseIv, chunkIndex) {
  const iv = new Uint8Array(12);
  iv.set(baseIv.slice(0, 8), 0);
  new DataView(iv.buffer).setUint32(8, chunkIndex, false); // BE
  return iv;
}
```
Properties this gives you:

- One random thing per file. You only need to generate 12 random bytes once. Every chunk's IV is derived deterministically from it.
- Uniqueness within a file. As long as every chunk has a different chunkIndex, every chunk gets a different IV. With a Uint32 you've got 4 billion possible chunks per file before you collide, far more than you'll ever upload.
- Uniqueness across files. Different files get different base IVs, so the 8-byte prefix is independent. The combined 12-byte IV space is effectively the same as random per-chunk IVs.
- No per-chunk metadata needed. The server only stores the base IV. The recipient can reconstruct the chunk IV from (base IV, chunk index) alone.
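A quick sanity check of the derivation (reusing deriveChunkIv from above; the hard-coded base IV is purely illustrative, real code would use crypto.getRandomValues):

```javascript
function deriveChunkIv(baseIv, chunkIndex) {
  const iv = new Uint8Array(12);
  iv.set(baseIv.slice(0, 8), 0);
  new DataView(iv.buffer).setUint32(8, chunkIndex, false);
  return iv;
}

// Illustrative base IV; in practice: crypto.getRandomValues(new Uint8Array(12)).
const baseIv = Uint8Array.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]);

const iv0 = deriveChunkIv(baseIv, 0);
const iv1 = deriveChunkIv(baseIv, 1);

// Same 8-byte prefix, different 4-byte counter: distinct IVs under one key.
console.log(iv0.join(',')); // 1,2,3,4,5,6,7,8,0,0,0,0
console.log(iv1.join(',')); // 1,2,3,4,5,6,7,8,0,0,0,1
```

Note that the last 4 bytes of the base IV are never used; only its first 8 bytes feed the derived IV.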
There's a nice extra trick: reserve a "magic" chunk index for filename encryption. Drop uses 0xFFFFFFFF, the max u32. Since real chunks count up from 0 and you'll never hit 4 billion of them, this index is guaranteed never to collide with a data chunk's IV — so you can encrypt the filename with the same key, derive its IV the same way, and you're done. No separate key, no separate KDF, no IV bookkeeping.
```javascript
const filenameIv = deriveChunkIv(baseIv, 0xFFFFFFFF);
// guaranteed never to overlap with chunk 0, 1, 2, ...
```
This kind of design — where you find a way to not need a thing rather than carefully managing it — is the secret to keeping crypto code reviewable. Every piece of state you eliminate is a piece of state you can't get wrong.
Authenticate, don't just encrypt
AES-GCM is an authenticated cipher: every encryption produces a 16-byte authentication tag that must verify on decryption. Tamper with the ciphertext, change even one bit, and the tag mismatches and decryption errors out.
This matters more than people realize for chunked uploads. Without authentication, an attacker who controls the storage layer (or a CDN, or a proxy) could splice or corrupt chunks and the recipient would silently get garbage — or, worse, plausible-looking but altered data. With per-chunk auth tags, each chunk has integrity. Tampering anywhere in the file fails decryption immediately.
You get this for free with crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext). The output already includes the auth tag appended. Don't roll your own.
What an attacker who breaks the server actually gets
Let's audit. Here's what a Drop-style server stores:
| It stores | It does not store |
|---|---|
| Ciphertext (chunks of AES-GCM output) | Plaintext bytes |
| Encrypted filenames | Original filenames |
| File size, MIME type | Encryption key |
| Base IV | Password (if used) |
| Expiry, download counter, owner ID | Anything decryptable from above |
If the server is compromised, the attacker walks away with bytes that decrypt to nothing without a key they don't have. The keys are in URL fragments, which only ever exist in (a) the sender's browser at upload time, (b) the URLs the sender chose to share, and (c) the recipient's browser at download time. Steal the database, you get encrypted noise. That's the design.
What can a malicious server still do? Two real things:

- Refuse service — delete files, lie about expiry, return errors. This is unavoidable; you trust the server for availability, not confidentiality.
- Serve malicious JavaScript — if you trust the server to ship the decrypt code, a compromised server can ship a backdoored decrypt routine that exfiltrates the key after location.hash is read. This is the genuine weakness of any browser-based E2EE. Mitigations include CSP, open-source clients you can audit, browser extensions that lock the JS bundle, and reproducible builds. It's a real concern; be honest about it.
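For the CSP mitigation specifically, the direction is to pin the script bundle by digest so a silently swapped decrypt routine fails the source check. A hypothetical header (the hash value is a placeholder for the real bundle's digest, and external scripts also need a matching integrity attribute under CSP3):

```http
Content-Security-Policy: default-src 'none'; script-src 'sha256-BUNDLE_HASH_PLACEHOLDER'; connect-src 'self'
```

Be honest about the limit, though: a fully compromised origin can rewrite its own CSP header too, which is exactly why extensions and reproducible builds appear on the mitigation list.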
Why this design isn't more common
A few reasons:

- It requires real client-side code. Web Crypto isn't hard, but it's harder than multer.single('file').
- Big files become awkward without chunking + streams.
- Previews are tricky: you have to decrypt to render thumbnails. Drop handles this by treating preview requests as full downloads against the encrypted bytes — and counting them against the download limit, since they expose the same material.
- Search is impossible. The server can't index something it can't read. This is a feature, but it's a constraint.
- Most products don't actually want zero-knowledge. They want plausible-deniability marketing. A zero-knowledge architecture closes off a lot of "we'll add a feature later" doors.
For consumer file sharing, those tradeoffs are usually worth it. For internal tools where the server has business reasons to read files (virus scanning, OCR, indexing), they're usually not. Pick the right tool for the threat model you actually have.
Try it
If you want to see this pattern in action with code you can read end-to-end, anon.li's Drop API docs include the full Node.js encrypt/decrypt flow, and the client implementation is open source. Read the encryption module, then the share-link generator, then look at what actually goes over the wire in the Network tab. Watching the fragment never appear in any request log is the moment the whole thing clicks.
The next time you see # in a URL, look closer. It might be doing more work than you think.