CrisisCore-Systems
Subpoena-Proofing by Design: Why Real Zero-Knowledge Has No Back Door

Image: user-controlled export boundary in the WorkSafeBC pain documentation tool

If you want the short Layer 1 path into this argument, read these first:

  1. Keeping Your Health Data Out of Court
  2. If Your Health App Can't Explain Its Encryption, It Doesn't Have Any
  3. Client-Side Encryption for Healthcare Apps

This piece is the bridge between the polemic and the implementation.

If you want privacy-first, offline health tech to exist without surveillance funding it: sponsor the build → https://github.com/sponsors/CrisisCore-Systems

When people say they want software to be subpoena-proof, they usually mean
something too vague to build and too dramatic to defend.

Not invincible.

Not outside the law.

Not protected from a compromised device, a coerced unlock, or a user who
chooses to export their own records.

Something narrower than that, and more real.

It means reducing what the software operator can produce in the first place.

Boundary notes, because truth matters

  • This is not legal advice.
  • This is not a claim of perfect secrecy.
  • This is not a claim that local encryption defeats malware, device seizure after unlock, or physical coercion.

What it is claiming is narrower and more important.

If the architecture keeps plaintext off your servers, keeps keys out of your
custody, and avoids privileged recovery paths, you have materially less to hand
over when somebody comes asking.

That is what subpoena-proofing by design means here.

Most subpoenas are custody problems before they are cryptography problems

People love to jump straight to the algorithm.

AES-GCM.
PBKDF2.
Key sizes.
Iteration counts.

Those matter.

But the first question is not which cipher you used.

The first question is who has the data.

If the application server stores plaintext, mirrored exports, recovery keys,
support snapshots, or admin-readable backups, then the argument is basically
over. The system may be encrypted at rest in some narrow operational sense,
but the operator still sits inside the custody chain.

That means the operator can be compelled.

And if the operator can be compelled, the user is not relying on math. They are
relying on policy, promises, and the hope that the organization holding the
data will defend them better than it defends itself.

That is not zero-knowledge.

That is delegated trust with nicer copy.

Zero-knowledge is a custody architecture, not a badge

The phrase gets abused constantly.

Sometimes it means end-to-end encryption.
Sometimes it means encrypted sync.
Sometimes it just means the company wants to sound careful while still keeping a reset path and a support console behind the curtain.

For sensitive personal records, I use a stricter test.

Zero-knowledge means the operator does not hold what would be required to reconstruct the user's plaintext records in ordinary service operation.

That usually implies at least four things.

  1. The user secret is derived locally, not issued from the server.
  2. The encryption key boundary stays on the device, or at minimum outside the operator's unilateral custody.
  3. Plaintext records are not silently mirrored into operator-readable storage for convenience.
  4. There is no privileged "forgot password" or admin recovery path that secretly bypasses the whole story.

Miss any of those and the zero-knowledge claim starts collapsing.

The forgot-password flow is usually the confession

This is where a lot of products accidentally tell on themselves.

If the app can reset the one thing that protects the records, you need to ask what exactly is being reset.

There are only a few real possibilities.

One of the following is true:

  • the operator can reissue access to readable data,
  • the operator can re-wrap the stored key material,
  • or the operator never truly surrendered custody in the first place.

That does not automatically make the product malicious.

It does make the phrase zero-knowledge very hard to defend.

In a local-first architecture, the absence of a universal recovery flow is not a UX omission.

It is often the cost of honesty.

If the passphrase-derived key never leaves the device and the operator does not keep a second route around it, then some forms of recovery stop being available.

That is frustrating.

It is also what makes the custody boundary real.
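
To make that concrete: in a local-first design, "checking the passphrase" can be nothing more than trying to decrypt a small check value that was encrypted at vault creation. A minimal sketch, assuming checkIv and checkCiphertext were persisted locally when the vault was set up:

// There is no server-side credential table to reset. Either the
// passphrase-derived key decrypts the stored check value, or it does not.
async function passphraseUnlocks(localKey, checkIv, checkCiphertext) {
  try {
    // AES-GCM is authenticated: a wrong key throws rather than decrypting.
    await crypto.subtle.decrypt(
      { name: 'AES-GCM', iv: checkIv },
      localKey,
      checkCiphertext
    );
    return true;
  } catch {
    return false;
  }
}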

Local derivation changes what the operator can disclose

This is where the cryptography starts to matter.

If the user secret is stretched locally into a key and that key is used to unwrap or derive local encryption capability, the operator is no longer sitting on a central decrypt button.

At a high level, the shape looks like this:

// Derive key material from the user's passphrase.
// The raw passphrase never leaves the device.
const passphraseMaterial = await crypto.subtle.importKey(
  'raw',
  new TextEncoder().encode(passphrase),
  'PBKDF2',
  false,
  ['deriveKey']
);

// The salt is random and public: generated once at vault creation,
// stored beside the ciphertext, and loaded on every later unlock.
const salt = crypto.getRandomValues(new Uint8Array(16));

// Stretch the passphrase into a non-extractable AES-GCM key.
const localKey = await crypto.subtle.deriveKey(
  {
    name: 'PBKDF2',
    salt,
    iterations: 310000, // work factor; tune to your slowest supported device
    hash: 'SHA-256',
  },
  passphraseMaterial,
  { name: 'AES-GCM', length: 256 },
  false, // non-extractable: the key object itself cannot be exported
  ['encrypt', 'decrypt']
);
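Using that key stays on the same side of the boundary. A minimal sketch of the encrypt and decrypt path, where record is a hypothetical plaintext object:

// Encrypt one record locally. AES-GCM needs a fresh random IV per
// encryption; the IV is not secret and is stored with the ciphertext.
const iv = crypto.getRandomValues(new Uint8Array(12));
const encoded = new TextEncoder().encode(JSON.stringify(record));

const ciphertext = await crypto.subtle.encrypt(
  { name: 'AES-GCM', iv },
  localKey,
  encoded
);

// Decryption reverses the path. GCM authenticates, so a wrong key or a
// tampered blob throws instead of yielding plausible-looking garbage.
const decoded = await crypto.subtle.decrypt(
  { name: 'AES-GCM', iv },
  localKey,
  ciphertext
);
const restored = JSON.parse(new TextDecoder().decode(decoded));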

The point of a pattern like this is not that PBKDF2 is magical.

The point is that the derivation happens on the user's side of the boundary.
The operator is not issuing a reusable server-side decrypt capability every
time the account opens.

That changes the disclosure story.

If asked for the records, the operator may be able to produce:

  • application source code,
  • architectural documentation,
  • exported audit evidence the user explicitly chose to share,
  • encrypted blobs, if any exist in transmitted or backed-up form,
  • and high-level metadata that was actually retained.

What the operator cannot honestly produce is plaintext it never possessed.

That is the design win.

Subpoena resistance starts with non-possession

This is the part that matters more than the slogan.

A protective system should try to make non-possession structurally true.

Not as a policy preference.

As an architectural fact.

No backend for the core record path.
No silent cloud copy of health notes.
No analytics stream full of intimate behavioral events.
No support bundle that scoops up body logs and free text because debugging was easier that way.
No operator-held recovery secret waiting to become the real product.

Once those surfaces exist, they will eventually be demanded by somebody.

That is why local-first matters here. It is not just a resilience property. It is a disclosure minimization property.

Exports are where zero-knowledge can be quietly undone

A lot of teams get the storage story mostly right and then destroy it at the edge.

They encrypt the vault.
They derive keys locally.
They avoid server custody.

Then they add:

  • a plaintext export emailed through a support workflow,
  • an unencrypted backup written to a cloud folder by default,
  • a PDF artifact that leaks more than the user intended,
  • or a debug mode that bundles sensitive records for troubleshooting.

That is not a small side issue.

That is the boundary reopening.

Zero-knowledge claims only stay credible if the export surface is treated as a separate trust boundary with its own minimization rules, user control, and honest warnings.
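
A sketch of what that separate boundary can look like in code. Every name here is illustrative, not taken from any particular product:

// The safe default is ciphertext out. Plaintext export exists, but only
// behind an explicit user choice the UI has to surface and explain.
async function exportVault(vault, { userConfirmedPlaintext = false } = {}) {
  if (!userConfirmedPlaintext) {
    return {
      format: 'encrypted-bundle-v1',
      salt: vault.salt,                // public derivation parameter
      records: vault.encryptedRecords, // [{ iv, ciphertext }, ...]
    };
  }
  // The user exercising custody over their own records, deliberately:
  return { format: 'plaintext-v1', records: await vault.decryptAll() };
}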

If you want the full argument there, read Exports are a security boundary: the moment local-first becomes shareable.

If the boundary cannot be audited, it is not real

There is a simple reason I do not trust zero-knowledge as a homepage adjective.

It is too easy to say.

The only version that matters is the one somebody else can inspect.

That means the system should be able to explain:

  • where plaintext exists,
  • where it does not,
  • where keys are derived,
  • where keys are stored or wrapped,
  • what metadata is retained,
  • what exports look like,
  • what recovery is impossible by design,
  • and what threat model the boundary is actually defending against.

If the answer is "trust us," the answer is no.

If the answer is "here is the code path, the storage model, the failure mode, and the limit," now we are at least speaking honestly.

What this does not protect you from

This boundary is strong for some threats and weak for others.

It helps against operator overreach, central breach surfaces, routine legal demand directed at the service, and the quiet accumulation of extra copies nobody needed.

It does not solve:

  • a compromised operating system,
  • spyware or malicious extensions,
  • a device seized while already unlocked,
  • screenshots or shoulder surfing,
  • a user who exports and shares plaintext,
  • or legal compulsion aimed directly at the person holding the device.

That limitation does not make the architecture fake.

It makes it specific.

Specific is better than theatrical.

The tradeoff is the proof

The technical reality of zero-knowledge is less glamorous than the sales pitch.

It means accepting tradeoffs.

Harder recovery.
Less centralized observability.
Fewer support shortcuts.
More careful export design.
More pressure on documentation to stay truthful.

Good.

Those costs are not a UX accident.

They are exactly what keep the boundary from being imaginary.

If a system wants all the convenience of centralized custody and all the
branding benefits of zero-knowledge at the same time, it is usually hiding the
second path somewhere.

That hidden second path is the whole problem.

The protective test

Protective Computing asks a blunter question than normal security marketing.

When the user is in pain, under administrative pressure, trying to document reality, and one institutional letter could force disclosure, what does the system make possible?

Can the operator read the record?
Can the operator reset the boundary?
Can the operator quietly centralize the data later?
Can the operator explain what would and would not exist if somebody came with paperwork?

If the answer is clear, limited, and technically enforced, the system is earning something.

Not invulnerability.

Legitimacy.

That is the standard.

Not whether the homepage sounds careful.

Whether the custody chain stays narrow when the pressure becomes real.

Read next

If you want the implementation details behind the crypto boundary, read Client-Side Encryption for Healthcare Apps.

If you want the product-level boundary model underneath this argument, read Trust Boundaries in Client-Side Health Apps.

If you want the architectural doctrine that turns this from a feature decision into a systems rule, read Protective Computing Is Not Privacy Theater.

