This article describes the encryption model behind MindMapVault — not as a feature list, but as an engineering constraint that shapes every other decision in the system. It assumes familiarity with basic encryption concepts and focuses on boundary decisions, not cryptographic primers.
MindMapVault is not primarily a drawing tool. It is an encryption discipline wrapped in a usable interface.
If this model fails, no amount of UI, performance, or feature depth can compensate. Everything else becomes incidental.
The model is simple to describe and hard to keep clean in real code:
- sensitive content is encrypted on the client
- the backend stores encrypted payloads and encrypted metadata
- plaintext notes and map content are not shipped to the server
That sounds obvious. The hard part is consistency across all features.
Every new capability wants to poke a hole in the model: previews, sharing, search, exports, diagnostics, support tooling. Each one can accidentally reintroduce plaintext handling if you are not careful.
The rule that saved me repeatedly was this:
If a feature needs plaintext, it must be justified at the edge and never become backend default behavior.
In practical terms, that affected route design, payload structures, and attachment workflows. The backend handles version pointers, upload confirmation state, ownership checks, and object metadata. It should not become the place where private thought content is interpreted.
There is also a human reason for this model.
Privacy here is not about hiding wrongdoing. It is about cognitive ownership. Mind maps tend to capture half‑finished ideas, rough drafts, personal plans, and structures that are not yet coherent — and may never be published. Those thoughts should remain under your control unless you explicitly choose otherwise.
This model is stricter than many products, and that raises implementation cost.
I accepted that cost because it is the product promise.
Everything else in this series is connected to this chapter: backend choices, UI speed decisions, coding rules, and even business trade-offs. If the encryption model becomes optional, the project loses its identity.
For the full security model write-up, see the Security Whitepaper.
Concrete modules and libraries used
On the TypeScript side, the encryption stack is explicit and modular, not hidden in one giant helper:
- Web Crypto API (crypto.subtle) for AES-256-GCM and SHA-256 operations
- @noble/curves (x25519) for the classical ECDH part
- @noble/post-quantum/ml-kem (ml_kem768) for the post-quantum KEM
- @noble/hashes (hkdf, sha256) for key derivation and context-separated keys
- hash-wasm (argon2id) for passphrase and master-key derivation paths
In practice, here is how each piece is used.
1) AES-256-GCM envelope (nonce + ciphertext + tag)
const nonce = crypto.getRandomValues(new Uint8Array(12));
const ct = await crypto.subtle.encrypt(
{ name: 'AES-GCM', iv: nonce, tagLength: 128 },
key,
plaintext,
);
This pattern is used in the frontend crypto module to encrypt map payloads, titles, and attachment-related blobs before upload.
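Outside the browser, the same envelope shape can be reproduced with Node's built-in crypto. This is a hedged sketch, not the project's code: the `seal`/`open` helper names are illustrative, and it makes explicit the `nonce || ciphertext || tag` layout that `crypto.subtle.encrypt` produces implicitly (Web Crypto appends the GCM tag to the ciphertext).

```typescript
// Sketch only: the AES-256-GCM envelope (nonce || ciphertext || tag) using
// Node's built-in crypto instead of the browser's crypto.subtle.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function seal(key: Buffer, plaintext: Buffer): Buffer {
  const nonce = randomBytes(12); // 96-bit GCM nonce, never reused under one key
  const cipher = createCipheriv("aes-256-gcm", key, nonce);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = cipher.getAuthTag(); // 128-bit tag, matching tagLength: 128 above
  return Buffer.concat([nonce, ct, tag]);
}

function open(key: Buffer, envelope: Buffer): Buffer {
  const nonce = envelope.subarray(0, 12);
  const tag = envelope.subarray(envelope.length - 16);
  const ct = envelope.subarray(12, envelope.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, nonce);
  decipher.setAuthTag(tag); // decryption fails loudly if the tag does not verify
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}
```

The explicit layout also shows why the backend can store the envelope as an opaque blob: without the key, the nonce and tag reveal nothing about the content.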
2) Hybrid key encapsulation (X25519 + ML-KEM-768)
// concat is plain byte concatenation (concatBytes from @noble/hashes/utils)
const classicalShared = x25519.getSharedSecret(ephPrivate, recipientClassicalPub);
const { cipherText: pqCiphertext, sharedSecret: pqShared } =
  ml_kem768.encapsulate(recipientPqPub);
const combinedKey = hkdf(sha256, concat(classicalShared, pqShared), undefined, 'crypt-mind-dek-v1', 32);
I combine classical and post-quantum shared secrets, derive a 32-byte wrapping key via HKDF, and use that to wrap the random DEK used for the actual vault content encryption.
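The derivation shape can be demonstrated with stdlib primitives alone. In this hedged sketch, X25519 and HKDF come from `node:crypto`; ML-KEM-768 is not in Node's stdlib, so `pqShared` is a random stand-in secret rather than a real KEM output. The point it illustrates is that both sides reach the same 32-byte wrapping key from the same pair of shared secrets and the same context string.

```typescript
// Hybrid derivation shape: classical ECDH secret + (stand-in) PQ secret,
// combined through HKDF-SHA256 with a context-separating info string.
import { diffieHellman, generateKeyPairSync, hkdfSync, randomBytes } from "node:crypto";

const ephemeral = generateKeyPairSync("x25519");
const recipient = generateKeyPairSync("x25519");

// Classical ECDH: ephemeral private key against the recipient's public key.
const classicalShared = diffieHellman({
  privateKey: ephemeral.privateKey,
  publicKey: recipient.publicKey,
});

// STAND-IN: in the real stack this comes from ml_kem768 encapsulate/decapsulate.
const pqShared = randomBytes(32);

// HKDF over the concatenated secrets; the info string scopes the derived key.
const combinedKey = Buffer.from(
  hkdfSync("sha256", Buffer.concat([classicalShared, pqShared]), Buffer.alloc(0), "crypt-mind-dek-v1", 32),
);
```

An attacker would need to break both the ECDH secret and the KEM secret, because HKDF mixes the full concatenation into the wrapping key.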
3) Argon2id for passphrase-bound keys
const result = await argon2id({
password,
salt,
parallelism: params.p_cost,
iterations: params.t_cost,
memorySize: params.m_cost,
hashLength: 32,
outputType: 'binary',
});
This is used for deriving high-cost keys from user secrets (for example, master key and share-passphrase flows), so brute-force resistance is parameterized and explicit.
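The property that matters here (an explicit, tunable cost profile turned into a fixed-length binary key) can be shown without Argon2id itself, which lives in hash-wasm rather than Node's stdlib. In this sketch, scrypt is a clearly labeled stand-in, and the `params` object is hypothetical, analogous to the `t_cost`/`m_cost`/`p_cost` values above.

```typescript
// Stand-in sketch: scrypt instead of Argon2id (Node has no stdlib Argon2).
// The shape is the same: named cost parameters -> 32-byte binary key.
import { randomBytes, scryptSync } from "node:crypto";

// Hypothetical cost profile; the real stack passes Argon2id parameters.
const params = { N: 1 << 15, r: 8, p: 1 }; // ~32 MiB working set

function deriveKey(password: string, salt: Buffer): Buffer {
  return scryptSync(password, salt, 32, {
    cost: params.N,
    blockSize: params.r,
    parallelization: params.p,
    maxmem: 64 * 1024 * 1024, // headroom above the 32 MiB scrypt working set
  });
}
```

Because the parameters travel with the ciphertext as KDF metadata, the client can raise costs later without breaking existing keys: old material still decrypts with its recorded parameters.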
How encrypted sharing works without breaking zero knowledge
This was one of the places where the model could have fallen apart very easily.
Sharing is currently implemented as an encrypted export flow, not as live collaborative editing. That distinction matters. The backend can distribute a protected snapshot, but it still should not learn the map content or the share passphrase.
The rough flow looks like this:
Owner browser
|
| 1. Serialize current vault snapshot { title, tree, exported_at, source_vault_id }
| 2. Generate random salt
| 3. Derive share key with Argon2id(passphrase, salt, params)
| 4. Encrypt snapshot with AES-256-GCM using that derived share key
| 5. Upload ciphertext + encryption_meta + checksum + expiry/hint metadata
v
Backend / object storage
|
| Stores only:
| - encrypted blob
| - kdf metadata (salt, memory, iterations, parallelism)
| - checksum, content type, expiry, hint, share id
| Never stores:
| - plaintext map
| - plaintext notes
| - share passphrase
| - decrypted attachments
v
Recipient browser
|
| 6. Fetch encrypted blob + encryption_meta by share URL
| 7. User enters passphrase locally in the browser
| 8. Derive the same share key locally with Argon2id
| 9. Decrypt locally with AES-256-GCM
v
Readable shared vault
That is the important boundary: the server can identify a share, enforce expiry and revocation, and serve encrypted bytes. It cannot unlock the content because it never receives the passphrase-derived key.
The actual frontend share creation code follows that model directly:
const shareBundle = await createEncryptedShareBundle(
{
title,
tree: currentTree,
exported_at: new Date().toISOString(),
source_vault_id: id,
include_attachments: draft.includeAttachments,
},
draft.passphrase,
);
Under the hood, that bundle creation does three essential things:
- derives a share key from the recipient passphrase using Argon2id
- encrypts the exported vault snapshot with AES-256-GCM
- returns only ciphertext plus the encryptionMeta needed for local unlock later
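A minimal sketch of both sides of that contract follows. It is not the project's implementation: the function names are illustrative, and scrypt stands in for Argon2id (which is not in Node's stdlib). What it demonstrates is the invariant: the bundle carries ciphertext plus KDF metadata, and only the passphrase, which never leaves the client, can reverse it.

```typescript
// Hedged sketch of a createEncryptedShareBundle-style helper and its
// recipient-side counterpart. scrypt stands in for Argon2id here.
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

interface ShareBundle {
  ciphertext: Buffer; // ct || tag, opaque to the backend
  encryptionMeta: { salt: string; nonce: string; kdf: string };
}

function createShareBundleSketch(snapshot: object, passphrase: string): ShareBundle {
  const salt = randomBytes(16);
  const nonce = randomBytes(12);
  const shareKey = scryptSync(passphrase, salt, 32); // real stack: Argon2id
  const cipher = createCipheriv("aes-256-gcm", shareKey, nonce);
  const plaintext = Buffer.from(JSON.stringify(snapshot), "utf8");
  return {
    ciphertext: Buffer.concat([cipher.update(plaintext), cipher.final(), cipher.getAuthTag()]),
    encryptionMeta: { salt: salt.toString("base64"), nonce: nonce.toString("base64"), kdf: "scrypt-demo" },
  };
}

function unlockShareBundleSketch(bundle: ShareBundle, passphrase: string): any {
  const salt = Buffer.from(bundle.encryptionMeta.salt, "base64");
  const nonce = Buffer.from(bundle.encryptionMeta.nonce, "base64");
  const shareKey = scryptSync(passphrase, salt, 32); // same derivation, recipient side
  const tag = bundle.ciphertext.subarray(bundle.ciphertext.length - 16);
  const ct = bundle.ciphertext.subarray(0, bundle.ciphertext.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", shareKey, nonce);
  decipher.setAuthTag(tag); // wrong passphrase -> tag check fails -> throws
  return JSON.parse(Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8"));
}
```

Note that the salt and nonce in `encryptionMeta` are safe to store server-side: they parameterize the unlock but are useless without the passphrase.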
The backend then stores the encrypted blob and metadata like this:
await encryptedVaultApi.createShare(id, {
name: draft.name.trim() || `${title || 'vault'}.cmvshare`,
scope: 'map',
include_attachments: draft.includeAttachments,
passphrase_hint: draft.passphraseHint.trim() || undefined,
expires_at: expiresAt,
content_type: 'application/vnd.cryptmind.share+json',
size_bytes: shareBundle.ciphertext.byteLength,
encryption_meta: shareBundle.encryptionMeta,
});
Notice what is missing there: no plaintext tree, no raw title/notes payload, and no passphrase.
Attachments follow the same rule
Attachments are where many “private” products quietly cheat.
In MindMapVault, if a share includes files, the owner-side client first downloads the already encrypted attachment, decrypts it locally with the owner’s session key, and then immediately re-encrypts that plaintext with the share key before upload. That means the shared attachment is not the original vault attachment reused directly. It becomes a separate ciphertext bound to the share export.
That extra step matters for two reasons:
- the recipient only needs the share passphrase, not the owner’s vault keys
- the backend still never needs access to attachment plaintext during sharing
So the sharing model is not “backend mediates decryption.” It is “owner creates a new encrypted package for recipients.” That is a much safer design.
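The owner-side re-encryption step can be sketched as a pure key-to-key transform, assuming the `nonce || ciphertext || tag` envelope from earlier. The helper names here are illustrative, not the project's API; the point is that the shared blob is a fresh ciphertext under the share key, never the vault ciphertext reused.

```typescript
// Sketch of owner-side attachment re-encryption for a share export.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function sealGcm(key: Buffer, plaintext: Buffer): Buffer {
  const nonce = randomBytes(12);
  const c = createCipheriv("aes-256-gcm", key, nonce);
  return Buffer.concat([nonce, c.update(plaintext), c.final(), c.getAuthTag()]);
}

function openGcm(key: Buffer, env: Buffer): Buffer {
  const d = createDecipheriv("aes-256-gcm", key, env.subarray(0, 12));
  d.setAuthTag(env.subarray(env.length - 16));
  return Buffer.concat([d.update(env.subarray(12, env.length - 16)), d.final()]);
}

// Decrypt with the owner's vault key, immediately re-encrypt with the share
// key. Plaintext exists only transiently in the owner's browser memory.
function reencryptAttachmentForShare(vaultKey: Buffer, shareKey: Buffer, vaultCiphertext: Buffer): Buffer {
  return sealGcm(shareKey, openGcm(vaultKey, vaultCiphertext));
}
```

Revoking the share therefore cannot expose the vault copy: the two ciphertexts share no key material.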
Public link does not mean public plaintext
The share URL itself is public in the same sense that any capability URL is public: if you know the identifier, you can ask the backend for the encrypted bundle. But what you get back is still ciphertext plus KDF metadata and optional hint text.
The recipient page then performs local unlock in the browser:
const ciphertext = await encryptedVaultApi.downloadUrl(share.download_url);
const unlockedBundle = await unlockEncryptedShareBundle(
ciphertext,
passphrase,
share.encryption_meta,
);
Again, the trust boundary is explicit:
- backend verifies that the share exists, is not revoked, and is not expired
- backend returns encrypted bytes
- browser derives the key and decrypts locally
That is how zero knowledge is preserved while still allowing practical sharing.
The main point here is not novelty or cryptographic cleverness. Every primitive used is well‑understood. The value is in how responsibilities are split, keys are scoped, and plaintext is prevented from leaking into default backend paths.