About a year ago, I was in my dorm room copying a 4GB project file to a flash drive to walk across campus. It felt absurd. Every cloud service either had file size limits, showed ads, or asked me to trust them with my data. So I started building something.
That project became FileShot.io — a zero-knowledge, end-to-end encrypted file sharing platform. This post isn't a product pitch. It's about the specific technical problem I had to solve: doing AES-256-GCM encryption entirely in the browser, before a single byte of data leaves the user's machine.
What "zero-knowledge" actually means here
Zero-knowledge architecture means the server genuinely cannot read your files. Not "we promise not to look" — the server can't look, because it never has the decryption key.
The key is derived from a password on the client side, and it lives only in the URL fragment (#) — the part of the URL the browser never sends to the server. When someone visits a FileShot link, their browser downloads the encrypted blob and derives the key locally from the fragment. The server sees only ciphertext.
The Web Crypto API: what I wish someone had explained to me
When I started, I assumed I'd use a library. Turns out browsers ship a robust cryptographic API natively: window.crypto.subtle. It's available in all modern browsers, hardware-accelerated where possible, and designed for exactly this.
Here's a minimal AES-256-GCM encryption flow:
async function encryptFile(fileBuffer, password) {
  const encoder = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const keyMaterial = await crypto.subtle.importKey(
    'raw',
    encoder.encode(password),
    'PBKDF2',
    false,
    ['deriveKey']
  );
  const key = await crypto.subtle.deriveKey(
    {
      name: 'PBKDF2',
      salt,
      iterations: 310000,
      hash: 'SHA-256'
    },
    keyMaterial,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt']
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    fileBuffer
  );
  return { salt, iv, ciphertext };
}
A few things I learned the hard way:
The IV matters more than most tutorials imply. AES-GCM is an authenticated encryption scheme built on CTR mode. The 96-bit IV must be unique per encryption operation under a given key — reuse is catastrophic: two ciphertexts produced with the same key and IV share a keystream, so XORing them leaks the XOR of the plaintexts, and IV reuse can even let an attacker recover the authentication key and forge messages. crypto.getRandomValues is the right call here.
GCM gives you integrity checking for free. The authentication tag appended to the ciphertext fails verification if anyone tampers with even a single bit. No separate HMAC step needed.
PBKDF2 iteration count matters. 310,000 iterations was OWASP's recommendation for PBKDF2-SHA256 when I built this (the recommended figure keeps climbing as hardware gets faster). I started with 100,000 — fine for 2020, outdated today. The slowdown is a few hundred milliseconds on weak hardware. Worth it.
What I didn't expect: streaming large files
The naive implementation — file.arrayBuffer() then encrypt the whole thing — works great until someone tries to upload a 2GB video. Then the tab crashes.
I rewrote the encryption layer to use the Streams API. Read the file in chunks, encrypt each chunk, stream the encrypted data directly to the upload. This was one of the harder parts:
async function* streamEncryptChunks(file, key) {
  const CHUNK_SIZE = 5 * 1024 * 1024; // 5MB
  let offset = 0;
  while (offset < file.size) {
    const slice = file.slice(offset, offset + CHUNK_SIZE);
    const buffer = await slice.arrayBuffer();
    const iv = crypto.getRandomValues(new Uint8Array(12));
    const encrypted = await crypto.subtle.encrypt(
      { name: 'AES-GCM', iv },
      key,
      buffer
    );
    yield { iv, data: new Uint8Array(encrypted) };
    offset += CHUNK_SIZE;
  }
}
Each chunk gets its own IV. Reusing one IV across chunks would be a vulnerability — two chunks encrypted with the same key and IV share a keystream, so comparing their ciphertexts leaks information about repeated plaintext. A related subtlety: because chunks are encrypted independently, the format also needs to stop an attacker from reordering or dropping chunks undetected — for example by binding the chunk index into the IV or into GCM's additional authenticated data.
In production there's more to it: multipart uploads, a Web Worker for crypto so the UI doesn't freeze, upload progress tracking.
The URL fragment trick
The zero-knowledge design depends on the #fragment portion of the URL never being sent to servers. Browsers don't include it in HTTP requests.
When a user uploads and sets a password, key material is encoded into the URL fragment:
https://fileshot.io/f/abc123#keydata=...
Anyone with the full URL can decrypt in their browser. Anyone intercepting only the HTTP traffic gets nothing useful. The server storing the encrypted blob has no idea what's in it.
This pattern is well-established — Keybase used it, some Signal features use it — but implementing it from scratch teaches you exactly why it works.
Things I'd do differently
Argon2 instead of PBKDF2. Argon2id is memory-hard and significantly more resistant to GPU cracking. PBKDF2 is fine and widely supported, but if starting over I'd use Argon2 via WebAssembly. No native Web Crypto API support means shipping a WASM module — worth the tradeoff for a new project.
Separate the encryption key from access control. The current design couples "the key to decrypt" with "permission to access." A cleaner design uses asymmetric key wrapping — the file's AES key encrypted with the recipient's public key. But that requires accounts, which adds friction most casual users won't accept for a simple file share.
A real cryptographic audit before launch. I self-audited, which is the worst kind of audit. I'm too close to the code to catch subtle issues in my own mental model.
If you're building something with client-side crypto, crypto.subtle is more capable than I expected. The biggest hurdles were streaming large files without crashing the browser and understanding exactly when GCM's authentication tag is computed and verified.
The project is at fileshot.io — unlimited free file sharing, no account needed, encrypted in your browser before upload. Questions or pushback on the crypto choices welcome in the comments.