I've been building Flowvault, an encrypted browser notepad. "Encrypted notes in the browser" isn't new — ProtectedText, Standard Notes, and PrivateBin have done it for years. What's unusual is what happens when someone asks for your passphrase.
The threat "zero-knowledge" ignores
Zero-knowledge encryption protects you from a compromised server. It does not protect you from anyone who can compel you to type your passphrase — border searches, subpoenas served on you, a jealous partner over your shoulder.
If you unlock, everything's visible. If you refuse, the encrypted blob itself is suspicious.
TrueCrypt solved this for disks with hidden volumes, and VeraCrypt has carried the design forward since 2013. I wanted the same for a browser notepad, in one URL.
One URL, many notebooks
Every Flowvault URL is a fixed-size ciphertext blob containing 64 equal slots. Each slot holds either one encrypted notebook, or random bytes. From the outside you can't tell which is which — AES-GCM output is indistinguishable from /dev/urandom without the key.
When you type a passphrase, the app:
- Derives a key from it (Argon2id, ~1.5s, 64 MiB).
- Computes which slot that passphrase maps to (keyed HMAC, vault-specific so it can't be precomputed).
- Tries to decrypt that slot.
Tag verifies → that's your notebook. Tag fails → generic "wrong passphrase." The app itself can't tell whether you typoed or whether no notebook exists for that passphrase. Both look identical.
A different passphrase maps to a different slot. You can have 1 notebook or 63 behind the same URL — the blob is the same shape either way. Empty slots are filled with random bytes at creation, so an adversary can't tell "used" from "unused."
The math, written out:
```typescript
// SLOT_COUNT = 64. Each slot is 8 KiB. Total blob = 512 KiB, always.
async function slotIndexFor(masterKey: Uint8Array): Promise<number> {
  const tag = await hkdfDerive(masterKey, utf8Encode("flowvault:slot-index"), 4);
  const u = ((tag[0] << 24) | (tag[1] << 16) | (tag[2] << 8) | tag[3]) >>> 0;
  return u % SLOT_COUNT;
}

async function openWithKey(blob: Uint8Array, masterKey: Uint8Array) {
  const preferred = await slotIndexFor(masterKey);
  for (const idx of tryOrder(preferred)) {
    const subKey = await hkdfDerive(masterKey, utf8Encode(`flowvault:slot:${idx}`), 32);
    const frame = await aeadDecrypt(subKey, slotBytes(blob, idx));
    if (frame && hasValidMagic(frame)) return { index: idx, content: frame };
  }
  return null; // wrong passphrase — or no notebook exists for it. Same return.
}
```
Notice what's not in the code: any branch that distinguishes "typo" from "no notebook exists." Both hit the same return null. There is no runtime difference to observe, nothing to time, nothing to leak.
The hard part: decoy UX
Plausible deniability only works if you have something to hand over. Flowvault lets you set a decoy passphrase pointing to a benign notebook you're happy to surrender under duress.
The unforgiving constraint: the app must never behave differently for a decoy vs a real passphrase. No timing difference. No "switch to your other notebook?" prompt. No unread-count badge that only your real notebook has.
Every UI decision — autosave indicators, recent-activity lists, error messages — has to survive the question "does this let an observer distinguish decoy from real?" This turned out to be the hardest engineering problem in the project, much harder than the crypto.
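One concrete tactic for the timing half of that question: pad every unlock attempt to a fixed wall-clock duration, so success, failure, decoy, and real all take the same observable time. A sketch of the idea — the wrapper and the constant are illustrative, not Flowvault's actual code:

```typescript
// Must exceed the slowest honest path (Argon2id + all decrypt attempts),
// otherwise a slow real unlock would still stand out. Illustrative value.
const MIN_UNLOCK_MS = 2000;

// Run an unlock attempt, then sleep until the fixed deadline before
// returning. The caller (and anyone watching) sees one constant duration.
async function unlockWithFixedTiming<T>(attempt: () => Promise<T>): Promise<T> {
  const start = Date.now();
  const result = await attempt();
  const remaining = MIN_UNLOCK_MS - (Date.now() - start);
  if (remaining > 0) {
    await new Promise((resolve) => setTimeout(resolve, remaining));
  }
  return result;
}
```

Note this only masks duration, not the other channels (badges, prompts, animations) the paragraph above lists — those have to be designed out, not padded out.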
Beyond hidden volumes
Hidden volumes are the headline, but three other features had their own reasons to exist. Each one scratched an itch I had with what's already out there.
Why I built time-locked notes
"Open this on my kid's 18th birthday." "Release on the day the embargo lifts." "My will, in 30 years."
Every tool that offers this today runs on the honor system — a service holds the file and promises not to peek. If the service is ever breached, subpoenaed, or acquired, the promise is worth nothing.
drand is a distributed randomness beacon that publishes a new public value every few seconds on a fixed schedule. tlock uses identity-based encryption to let you encrypt to a future drand round as if it were a public key. Until that round is published, nobody can decrypt — not the service, not me, not even you. After it, anyone with the ciphertext can.
What the API actually looks like:
```typescript
import { timelockEncrypt, mainnetClient, roundAt, defaultChainInfo } from "tlock-js";

// User picks a wall-clock date. Convert to the drand round whose
// signature will first be published after that moment.
const round = roundAt(unlockAtMs, defaultChainInfo);

// Encrypt *to that future round*. The drand beacon's eventual
// threshold signature for the round is the decryption key — and
// nobody can produce it before drand publishes.
const capsule = await timelockEncrypt(round, Buffer.from(plaintext), mainnetClient());
```
Four lines. No key exchange, no trusted third party, no "we promise not to peek until Tuesday." Flowvault wraps this behind a date picker: pick a date, write, share. The note unlocks itself.
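Under the hood, the date-to-round conversion is plain arithmetic over two chain parameters: the beacon's genesis time and its publication period. Re-derived here for intuition (my sketch with illustrative parameters, not tlock-js source):

```typescript
// drand publishes round r at genesisTime + (r - 1) * period, with rounds
// 1-indexed. The round current at a given wall-clock moment is therefore:
function roundForTime(
  unlockAtMs: number,    // target unlock time, Unix ms
  genesisTimeSec: number, // chain genesis, Unix seconds
  periodSec: number,      // seconds between rounds
): number {
  const elapsedSec = Math.max(0, Math.floor(unlockAtMs / 1000) - genesisTimeSec);
  return Math.floor(elapsedSec / periodSec) + 1;
}
```

With a (made-up) genesis of 0 and a 3-second period, a target 30 seconds in is round 11 — and encrypting to round 11 means nothing decrypts until that round's signature exists.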
Why I built Encrypted Send
People send API keys, wifi passwords, and one-off credentials through Slack and email every single day. Those messages then sit in inboxes forever — indexed, searchable, breachable.
The right primitive is a link that works once and leaves no residue. Bitwarden Send does this well, but it needs an account. Privnote stores the ciphertext keyed by the link itself and has been blocked in several jurisdictions. The rest of the one-time-link sites have varying trust properties and no real way to audit them.
Flowvault Encrypted Send puts the decryption key in the URL fragment (https://…/s/abc#theKeyLivesHere). Browsers never transmit fragments to servers, so the key cannot be logged, subpoenaed, or leaked from a database — it literally never reaches us. First open destroys the ciphertext. No account, no email, no trace.
The split, in code:
```typescript
async function seal(plaintext: string) {
  const key = randomBytes(32);
  const ciphertext = await aeadEncrypt(key, utf8Encode(plaintext));
  return {
    ciphertext,                    // → POST to server
    fragmentKey: toBase64Url(key), // → stays in the URL fragment only
  };
}

// Share link:
//   https://flowvault.flowdesk.tech/send/<id>#k=<fragmentKey>
//
// Browsers NEVER transmit the #fragment to servers. Firestore,
// Cloud Functions, reverse proxies, access logs — none of them
// will ever see `fragmentKey`.
```
This is the detail most one-time-link services still get wrong: they store the key (or something derived from it) server-side and call the result "encrypted." Keep the key somewhere the server literally cannot read it — a URL fragment, sent only to the client — and a database breach leaks opaque bytes, full stop.
Why I built .fvault and Markdown export
The first question anyone sane asks about a notes app is: "if you disappear tomorrow, how do I get my data out?"
If the answer is "you can't," they shouldn't trust you. Honestly, they shouldn't trust me either. So Flowvault gives you two escape hatches that work without an account and without contacting anyone.
- .fvault — a portable encrypted backup. Same ciphertext shape as the server-side blob, wrapped in a JSON envelope. Restore on any Flowvault instance (including your own self-hosted one) or sit on it forever as offline cold storage. Bring-your-own-crypto; the file is opaque to anyone without your passphrase.
- Markdown .zip — plaintext .md files, one per note, folder-structured. Take it to Obsidian, Notion, a git repo, your fridge. It's yours.
I consider this non-negotiable for a notes app in 2026. You should always be able to leave with your data, no forms, no support tickets, no wait.
Trusted Handover (a client-wrapped digital-inheritance flow) and CAS-based multi-tab writes round out the feature set — fuller writeups on the Flowvault blog. Everything is zero-knowledge, client-side, and works without an account. Your passphrase is the only key. There is no password reset, because there is nobody on the server side who could reset it.
Limits (honest)
Plausible deniability defends against an adversary with your ciphertext plus maybe one passphrase. It does not defend against keyloggers, screen recorders, or someone watching your behaviour over time. Your decoy has to be believable — an empty decoy notebook is worse than no deniability.
Try it
flowvault.flowdesk.tech — open in a browser, no signup.
Source: github.com/flowdeskadmin/flowvault · MIT licensed · self-hostable.
Full design doc with threat-model comparisons to VeraCrypt, ProtectedText, and Standard Notes: plausible-deniability-hidden-volumes-explained.
If you spot a hole in the design, tell me — the only way to find flaws in a crypto scheme is for people smarter than me to look at it.
Top comments (2)
Author here. One attack I don't have a clean answer for: a browser extension
running in the same origin as Flowvault can in principle read the DOM and
fingerprint which notebook is currently open. CSP helps against remote injection
but not against a user-installed extension the adversary controls or coerces.
My current guidance is "don't open your vault in a browser profile with
untrusted extensions," which is a lousy answer for a product whose whole premise
is defending against coercion. If you've thought about DOM-level side channels
for zero-knowledge web apps, I'd genuinely love to know the state of the art.
Related but from a different angle: the other place I don't fully trust my own
work is the UX invariant that every interaction has to behave identically for
a real vs decoy notebook. Autosave cadence, recent-files memos, deletion
animations, focus rings, offline indicators, idle timeouts — any of them can
leak the existence of a second notebook, and "good UX" is usually the enemy of
deniability.
The extension question above is at the DOM level. This one's at the
interaction-design level, and I suspect it's actually the half that breaks
first in practice. If anyone's shipped a timing-safe or traffic-analysis-
resistant UI before — or just thought of an interaction class I've probably
missed — I'd like to hear it.