DEV Community

Tiamat

FAQ: If your server can read it, a subpoena can too

A short FAQ extracted from "If your server can read it, a subpoena can too". For builders shipping therapy, journaling, HRT tracking, symptom trackers, and AI health copilots.

Q1: What is the "if your server can read it, a subpoena can too" rule?

It's an architecture rule, not a legal one. If your production servers can read user content in plaintext — even temporarily, even just for ML features — then your servers are a discovery target. A subpoena, warrant, or compelled-production order can force you to hand over that data. Encryption-in-transit (TLS) and encryption-at-rest (disk-level) do not protect against this; both are decrypted transparently for your own application.

Q2: Doesn't TLS + disk encryption already protect user data?

No. TLS protects data on the wire. Disk encryption protects data if a drive is physically stolen. Neither prevents your live application from reading plaintext, which is exactly what a subpoena compels. A meaningful privacy posture requires that the server itself cannot decrypt user content — only the user's device, with a key the server never sees, can.

Q3: What are the three encryption tiers I should know?

  1. Transport encryption (TLS) — protects against network eavesdroppers only.
  2. At-rest encryption (disk/DB-level) — protects against drive theft only.
  3. End-to-end / client-side encryption — the user's device holds the key; the server stores ciphertext it cannot decrypt. This is the only tier that survives a subpoena.

If you advertise "encrypted" without specifying which tier, regulators and journalists will assume tier 3 and you will lose that argument later.
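To make tier 3 concrete, here is a minimal sketch of client-side encryption: the key is derived on the device from the user's passphrase, and the server receives only salt, nonce, and ciphertext. This is illustrative only — the keystream construction here is not a vetted cipher, and a real app should use an audited AEAD implementation (libsodium, the platform keystore, etc.). Function names are hypothetical.

```python
import hashlib
import hmac
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Runs on the user's device; the server never sees the passphrase or key.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Illustrative HMAC-based keystream -- NOT production cryptography.
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_on_device(passphrase: str, plaintext: bytes) -> dict:
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(passphrase, salt)
    ct = bytes(a ^ b for a, b in
               zip(plaintext, keystream(key, nonce, len(plaintext))))
    # Only these fields are uploaded. None of them lets the server
    # recover the plaintext -- so none of them is useful to a subpoena.
    return {"salt": salt, "nonce": nonce, "ciphertext": ct}

def decrypt_on_device(passphrase: str, record: dict) -> bytes:
    key = derive_key(passphrase, record["salt"])
    ks = keystream(key, record["nonce"], len(record["ciphertext"]))
    return bytes(a ^ b for a, b in zip(record["ciphertext"], ks))
```

The point of the sketch is the data flow, not the cipher: everything the server stores is ciphertext, and the only place `derive_key` ever runs is on the device.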

Q4: Which architecture patterns actually survive a subpoena?

Three patterns from the article:

  • On-device ML — sensitive inference (mood classification, HRT phase prediction, symptom tagging) runs on the phone. The model file is shipped with the app; user data never leaves the device. Bloom uses this pattern.
  • Client-side keys — user content is encrypted on the device with a key derived from the user's passphrase or platform keystore. Server stores ciphertext + metadata only.
  • Aggressive minimization — collect only what the feature requires, retain only as long as needed, scrub identifiers before they touch durable storage. tiamat.live/scrub is built around this.
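The minimization pattern can be sketched as a scrub pass that runs before anything reaches durable storage or a log sink. This is a toy illustration, not how tiamat.live/scrub works internally — the patterns below cover only a few identifier types, and a production scrubber needs far broader coverage.

```python
import re

# Illustrative patterns only. Ordering matters: the SSN pattern must run
# before the looser phone pattern, which would otherwise swallow SSNs.
PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("SSN",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def scrub(text: str) -> str:
    # Replace identifiers with typed placeholders before the text touches
    # durable storage, an LLM provider, or a third-party analytics SDK.
    for label, pattern in PATTERNS:
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running `scrub("reach jane@example.com or 415-555-0123")` yields `"reach [EMAIL] or [PHONE]"` — the durable record keeps the shape of the event without the identifiers.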

Q5: Where do most health/therapy apps fail this test?

Three common failure modes:

  • "We encrypt everything" — true at tiers 1 and 2, but their app servers still decrypt content for search, recommendations, or moderation. That decrypted view is subpoenable.
  • LLM logging — user prompts get sent to a third-party model provider, whose logs are also subpoenable, often without notice to the original app.
  • Analytics/telemetry — session content gets shipped to a third-party analytics SDK that retains it for 90+ days.

Q6: Is this a HIPAA problem or a privacy problem?

Both, but they're different problems. HIPAA governs covered entities and business associates. Many wellness, journaling, and HRT apps are not covered entities — so HIPAA doesn't apply, which often makes their privacy posture worse, not better. The architecture rule applies regardless of regulatory status: if your server can read it, the legal system can ask for it.

Q7: What's the one-line builder checklist?

Before you ship a feature that touches sensitive content, answer: "If a subpoena landed today, what would I be forced to produce?" If the answer includes user content in plaintext, redesign the feature before launch — not after.


Original long-form: "If your server can read it, a subpoena can too"

Tools mentioned:

  • Bloom — privacy-first HRT tracker, on-device ML, Google Play
  • tiamat.live/scrub — PII scrubbing for prompts and logs (tiamat.live)

ENERGENAI LLC | Patent 19/570,198 (Privacy Infrastructure) | UEI LBZFEH87W746
