I have a CLI tool that generates internal reports and I needed a way to share them with a coworker who doesn't use our internal storage. I didn't want to email PDFs (audit trail), Slack files (search history forever), or spin up presigned S3 URLs (workable, but then I'd be the auth boundary).
What I actually wanted: a one-liner in my CLI that hands the user back a URL, where the file is encrypted client-side, the server sees ciphertext only, and the link expires on its own.
This post is the walkthrough of how I built that against anon.li's Drop API. It's about 150 lines of Node, no dependencies beyond node:crypto. By the end you'll have a script you can drop into any CLI tool to add E2EE file sharing.
The shape of the thing
Drop's upload flow has four steps:

1. Create a drop — POST to /api/v1/drop with metadata (IV, file count, expiry). Get back a drop_id.
2. Add a file — POST to /api/v1/drop/:id/file with the encrypted file's metadata. Get back a fileId and presigned upload URLs (one per chunk).
3. Upload the chunks — PUT each chunk to its presigned URL. Capture the ETag returned by the storage layer.
4. Finish the upload — PATCH /api/v1/drop/:id?action=finish with the chunk ETags. The drop becomes ready.
Then you build a share URL: https://anon.li/d/<drop_id>#<base64-key>. The fragment carries the AES-256 key, which never reaches the server (browsers strip it from requests — see my other post on why this matters if you're curious).
We'll do everything client-side, in pure Node: node:crypto handles key generation and AES-GCM, and the built-in fetch (Node 18+) handles the HTTP calls.
Step 1: get an API key
From the anon.li dashboard: API Keys → New. Copy it once; it's only shown at creation. The format is ak_ plus 32 hex chars. The server stores only the SHA-256 hash of the key, which is why it can't show it again.
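To make the hash-only storage concrete, here's a minimal sketch of what that lookup value looks like. The key itself is hypothetical; the only claim taken from the docs above is that the server keeps a SHA-256 hash, not the key.

```javascript
import crypto from 'node:crypto';

// Sketch (assumption based on the description above): the server never stores the
// raw API key, only its SHA-256 hash, so it can verify a presented key but never
// display the original again.
const apiKey = 'ak_' + 'a'.repeat(32); // hypothetical key; real ones are random hex
const storedHash = crypto.createHash('sha256').update(apiKey).digest('hex');
```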
For this script, expose it as an env var:
export ANON_API_KEY=ak_yourkeyhere
Step 2: scaffolding and key generation
import crypto from 'node:crypto';
import { readFile } from 'node:fs/promises';
import path from 'node:path';
const API = 'https://anon.li/api/v1';
const KEY = process.env.ANON_API_KEY;
if (!KEY) throw new Error('ANON_API_KEY not set');
const auth = { Authorization: `Bearer ${KEY}` };
// Helper: base64url encoding (RFC 4648)
const b64u = buf => Buffer.from(buf).toString('base64url');
// Helper: derive a unique IV per chunk from a base IV
function deriveChunkIv(baseIv, chunkIndex) {
const iv = Buffer.alloc(12);
baseIv.copy(iv, 0, 0, 8); // first 8 bytes from base IV
iv.writeUInt32BE(chunkIndex, 8); // last 4 = chunk index, big-endian
return iv;
}
The deriveChunkIv helper is the workhorse of this whole script. Drop uses one base IV per file, and every chunk's actual IV is computed deterministically from (baseIv, chunkIndex). That means we only generate randomness once, and we never need to ship per-chunk IVs to the server — they're reproducible from the base IV alone.
Filenames use the same scheme with a reserved chunk index of 0xFFFFFFFF, which is guaranteed not to collide with any real chunk (you'd need 4 billion chunks in one file before hitting it).
async function uploadFile(filePath, { expiry = 7, maxDownloads } = {}) {
const fileBuf = await readFile(filePath);
const filename = path.basename(filePath);
// 256-bit AES key, base IV for this drop, base IV for this file
const key = crypto.randomBytes(32);
const dropIv = crypto.randomBytes(12);
const fileIv = crypto.randomBytes(12);
Three random things, generated locally:

- key — the AES-256 key. Goes in the URL fragment, never sent to the server.
- dropIv — the base IV for drop-level metadata (title, message).
- fileIv — the base IV for this file's chunks and filename.
For larger files you'd chunk by 50 MB (Drop's default) or whatever fits your memory budget. We'll keep it single-chunk for clarity.
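If you do go multi-chunk, the split itself is trivial. A minimal sketch, with the 50 MB figure taken from the flow above and the size parameterized so small buffers are easy to exercise; splitChunks is a hypothetical helper name:

```javascript
// Sketch: splitting a buffer into fixed-size chunks before per-chunk encryption.
// 50 MB is Drop's stated default; chunkSize is a parameter for testability.
function splitChunks(buf, chunkSize = 50 * 1024 * 1024) {
  const chunks = [];
  for (let offset = 0; offset < buf.length; offset += chunkSize) {
    chunks.push(buf.subarray(offset, offset + chunkSize)); // zero-copy views
  }
  return chunks;
}
```

Each chunks[i] would then be encrypted under deriveChunkIv(fileIv, i), exactly as the single-chunk case below does for index 0.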
Step 3: encrypt the file and the filename
// Encrypt the file payload (single chunk, index 0)
const chunkIv = deriveChunkIv(fileIv, 0);
const cipher = crypto.createCipheriv('aes-256-gcm', key, chunkIv);
const ciphertext = Buffer.concat([cipher.update(fileBuf), cipher.final()]);
const authTag = cipher.getAuthTag();
const encryptedData = Buffer.concat([ciphertext, authTag]); // 16-byte tag appended
// Encrypt the filename with the reserved chunk index
const nameIv = deriveChunkIv(fileIv, 0xFFFFFFFF);
const nameCipher = crypto.createCipheriv('aes-256-gcm', key, nameIv);
const encryptedName = b64u(Buffer.concat([
nameCipher.update(filename, 'utf8'),
nameCipher.final(),
nameCipher.getAuthTag(),
]));
Two things worth noticing:
The encrypted file is ciphertext || authTag — Drop's wire format is "GCM ciphertext, then 16-byte tag." On decryption you split off the last 16 bytes and feed them to setAuthTag() before final(). This is consistent across chunks too: each chunk on the wire is chunk_ciphertext || tag.
The filename is encrypted with the same key but a different IV (the 0xFFFFFFFF one). One key, many IVs, derived from the base — that's the whole pattern.
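The receive-side split described above can be sketched as a round trip. This is my own illustration, not Drop's receiver code; decryptChunk is a hypothetical helper, and deriveChunkIv is repeated from the scaffold so the block is self-contained:

```javascript
import crypto from 'node:crypto';

// Same helper as the upload script: first 8 bytes from the base IV, last 4 = index.
function deriveChunkIv(baseIv, chunkIndex) {
  const iv = Buffer.alloc(12);
  baseIv.copy(iv, 0, 0, 8);
  iv.writeUInt32BE(chunkIndex, 8);
  return iv;
}

// Sketch: undoing the wire format ciphertext || tag for one chunk.
function decryptChunk(wireBytes, key, fileIv, chunkIndex) {
  const tag = wireBytes.subarray(wireBytes.length - 16);        // last 16 bytes = GCM tag
  const ciphertext = wireBytes.subarray(0, wireBytes.length - 16);
  const decipher = crypto.createDecipheriv('aes-256-gcm', key, deriveChunkIv(fileIv, chunkIndex));
  decipher.setAuthTag(tag);                                     // must happen before final()
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}

// Round trip: encrypt chunk 0 the way the upload side does, then decrypt it.
const key = crypto.randomBytes(32);
const fileIv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv('aes-256-gcm', key, deriveChunkIv(fileIv, 0));
const wire = Buffer.concat([cipher.update(Buffer.from('hello')), cipher.final(), cipher.getAuthTag()]);
```

If the tag or IV is wrong, final() throws, which is GCM's integrity check doing its job.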
Step 4: create the drop
const createRes = await fetch(`${API}/drop`, {
method: 'POST',
headers: { ...auth, 'Content-Type': 'application/json' },
body: JSON.stringify({
iv: b64u(dropIv),
fileCount: 1,
expiry,
maxDownloads,
}),
});
if (!createRes.ok) throw new Error(`create drop: ${createRes.status}`);
const { data: drop } = await createRes.json();
const dropId = drop.drop_id;
expiry is days (max depends on plan: 3 free, 7 plus, 30 pro). maxDownloads is optional. If you wanted to attach encrypted metadata (a description, JSON tags, anything), you'd encrypt it with the same key + dropIv + 0xFFFFFFFF reserved index and pass it as encryptedMessage here. The server stores the ciphertext and can't read it.
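A sketch of what that encryptedMessage payload could look like, using the pattern just described. The metadata blob is hypothetical, and b64u and deriveChunkIv are repeated from the scaffold so the block stands alone:

```javascript
import crypto from 'node:crypto';

// Helpers repeated from the scaffold above.
const b64u = buf => Buffer.from(buf).toString('base64url');
function deriveChunkIv(baseIv, chunkIndex) {
  const iv = Buffer.alloc(12);
  baseIv.copy(iv, 0, 0, 8);
  iv.writeUInt32BE(chunkIndex, 8);
  return iv;
}

const key = crypto.randomBytes(32);
const dropIv = crypto.randomBytes(12);

// Sketch: drop-level metadata encrypted under the drop's base IV with the
// reserved 0xFFFFFFFF index, as described above. Contents are hypothetical.
const meta = JSON.stringify({ description: 'Q3 report', tags: ['internal'] });
const metaCipher = crypto.createCipheriv('aes-256-gcm', key, deriveChunkIv(dropIv, 0xFFFFFFFF));
const encryptedMessage = b64u(Buffer.concat([
  metaCipher.update(meta, 'utf8'),
  metaCipher.final(),
  metaCipher.getAuthTag(),
]));
```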
Step 5: register the file, get presigned upload URLs
const fileRes = await fetch(`${API}/drop/${dropId}/file`, {
method: 'POST',
headers: { ...auth, 'Content-Type': 'application/json' },
body: JSON.stringify({
size: encryptedData.length,
encryptedName,
iv: b64u(fileIv),
mimeType: 'application/octet-stream',
chunkCount: 1,
chunkSize: encryptedData.length,
}),
});
if (!fileRes.ok) throw new Error(`add file: ${fileRes.status}`);
const file = await fileRes.json();
Response shape:
{
"fileId": "file123",
"s3UploadId": "upload-id",
"uploadUrls": {
"1": "https://r2-presigned-url..."
}
}
uploadUrls is keyed by chunk number (1-indexed). For our one-chunk file we have one URL. For a multi-chunk file you'd get N URLs and PUT to each.
The presigned URLs go directly to Cloudflare R2, bypassing anon.li's app servers entirely. That means the encrypted bytes don't transit through the API host at all — they go straight to object storage. Cleaner for them, faster for you.
Step 6: PUT the encrypted bytes
const putRes = await fetch(file.uploadUrls['1'], {
method: 'PUT',
body: encryptedData,
});
if (!putRes.ok) throw new Error(`upload chunk: ${putRes.status}`);
const etag = putRes.headers.get('ETag');
if (!etag) throw new Error('no ETag returned from storage');
Storage hands back an ETag for each chunk. We need it for the finalize step — it's how we prove that what we uploaded matches what the storage backend received.
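For a multi-chunk file, this step becomes a loop: PUT each chunk to its 1-indexed URL and collect the ETags for the finish call. A sketch under my own naming (uploadChunks is hypothetical); putFn defaults to the global fetch and is injectable so the loop can be exercised without network access:

```javascript
// Sketch: the multi-chunk version of step 6. `uploadUrls` is the 1-indexed map
// from the add-file response; `encryptedChunks` are the ciphertext||tag buffers.
async function uploadChunks(uploadUrls, encryptedChunks, putFn = fetch) {
  const chunkRecords = [];
  for (let i = 0; i < encryptedChunks.length; i++) {
    const res = await putFn(uploadUrls[String(i + 1)], { method: 'PUT', body: encryptedChunks[i] });
    if (!res.ok) throw new Error(`upload chunk ${i}: ${res.status}`);
    const etag = res.headers.get('ETag');
    if (!etag) throw new Error(`no ETag for chunk ${i}`);
    chunkRecords.push({ chunkIndex: i, etag }); // 0-indexed, matching the finish payload
  }
  return chunkRecords;
}
```

Note the off-by-one trap: upload URLs are keyed 1..N, while the finish payload's chunkIndex (as used below) starts at 0.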
Step 7: finalize
const finishRes = await fetch(`${API}/drop/${dropId}?action=finish`, {
method: 'PATCH',
headers: { ...auth, 'Content-Type': 'application/json' },
body: JSON.stringify({
files: [{
fileId: file.fileId,
chunks: [{ chunkIndex: 0, etag }],
}],
}),
});
if (!finishRes.ok) throw new Error(`finish: ${finishRes.status}`);
This commits the multipart upload on the storage side and marks the drop as ready for download. Without this step the drop is in "uploading" limbo and the share URL won't work.
Step 8: hand back the share URL
const shareUrl = `https://anon.li/d/${dropId}#${b64u(key)}`;
return shareUrl;
}
The fragment is the base64url-encoded raw key. The browser will keep it client-side. The server side of anon.li will only ever see dropId.
Glue it together
const url = await uploadFile(process.argv[2], { expiry: 3 });
console.log(url);
$ node share.js ./report.pdf
https://anon.li/d/abc123xyz#nT7l3K9Q...
Done. Send the link, recipient clicks, file decrypts in their browser. Three days later, the bytes are gone.
Things I got wrong the first time
A few stumbling blocks that bit me when I wrote this for real:
Forgetting the auth tag. GCM's auth tag is 16 bytes appended after the ciphertext. If you use cipher.update + cipher.final and forget cipher.getAuthTag(), the receiving end can't verify integrity and decryption fails. Don't separate them — bundle as ciphertext || tag always.
Wrong IV byte order. The chunk index is big-endian in the last 4 bytes of the IV. I wrote writeUInt32LE once and got OperationError: decryption failed and spent fifteen minutes blaming the API. Read your Buffer.write* docs.
Not reading the fragment correctly on the receive side. If you build a Node receiver, new URL(shareUrl).hash includes the leading #. Strip it with .slice(1) before base64-decoding. Browsers do the same with location.hash.
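The fragment handling can be sketched in a few lines. The share URL here is a made-up example following the format above:

```javascript
// Sketch: parsing a share URL on a Node receive side. URL#hash keeps the
// leading '#', so strip it before base64url-decoding the key.
const shareUrl = 'https://anon.li/d/abc123xyz#nT7l3K9Q'; // hypothetical link
const u = new URL(shareUrl);
const dropId = u.pathname.split('/').pop();                 // drop id from the path
const keyBytes = Buffer.from(u.hash.slice(1), 'base64url'); // drop the '#', then decode
```

A real key decodes to 32 bytes; the short fragment here is just for illustration.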
Using chunkSize of the plaintext. The API wants the size on the wire, which is plaintext + 16-byte tag per chunk. For our single-chunk case that's encryptedData.length. Get this wrong and either the upload fails or the receiver chunks the download incorrectly.
What I'd add next
This script is the floor, not the ceiling. Real things you'd want:
- Streaming for big files. Don't readFile a 4 GB file. Use createReadStream, encrypt chunk-by-chunk, PUT each chunk as it's ready. Drop's chunk format is designed for this — the IV derivation gives you an explicit chunkIndex to count against.
- Password protection. Drop supports wrapping the file key under an Argon2id-derived password key. The receiver enters the password, derives the wrapping key locally, unwraps the file key, and decrypts. You never send the password.
- Owner-key recovery. If you want the dashboard to be able to see your own files later (just for management — it still can't decrypt the contents), you can wrap the file key under an account-level vault key and pass ownerKey on creation. Optional, but it lets you delete files from the dashboard.
- Encrypted metadata. Stick a JSON blob (description, tags, source app version) in encryptedMessage so receivers see context after decryption.
Each of those is its own short post. The scaffold above gets you to the point where you can ship E2EE file sharing without a backend you control. For a CLI tool, that's an absurd amount of capability for ~150 lines.
Source for the full script (and the corresponding download/decrypt counterpart) lives in the Drop API docs. Steal it.