You open a period tracking app and log your cycle. You tell your mental health chatbot you're struggling with anxiety. Your fitness tracker records your sleep patterns, heart rate variability, and movement. You assume this is private. Medical information. Protected.
It isn't.
HIPAA — the Health Insurance Portability and Accountability Act — is the law most people think protects their health data. It was signed in 1996. The iPhone was released in 2007. The digital health app market didn't exist when HIPAA was written. The law's scope has never been updated to match where health data actually lives.
The result: a gap between what Americans believe is protected and what the law actually covers. AI is making that gap catastrophically worse.
What HIPAA Actually Covers (and What It Doesn't)
HIPAA regulates "covered entities" — a specific legal category:
- Healthcare providers (hospitals, clinics, doctors, pharmacies)
- Health plans (insurance companies, HMOs, Medicare, Medicaid)
- Healthcare clearinghouses (billing processors, claims aggregators)
- Business associates of the above (lawyers, IT vendors, cloud providers handling covered data)
That's it. If an organization doesn't fit one of those categories, HIPAA doesn't apply.
Your fitness tracker manufacturer: not a HIPAA covered entity.
Your period tracking app: not a HIPAA covered entity.
Your mental health chatbot: not a HIPAA covered entity.
Your meditation app: not a HIPAA covered entity.
Your DNA testing service: not a HIPAA covered entity.
This isn't a technicality or a loophole. It's by design. HIPAA was written to regulate the healthcare system — the entities that provide, pay for, and process clinical care. Consumer wellness apps weren't part of that system.
The problem: 350,000+ health apps exist in major app stores. Most collect sensitive health information. Almost none are subject to HIPAA.
The Settlements That Proved the Gap
Flo Health (2021) — Period Tracking, Facebook, and the FTC
Flo Health is the world's largest period tracking app, with 100 million users. In 2019, the Wall Street Journal reported that Flo was sending detailed menstrual cycle data to Facebook and Google — including when users were having periods, trying to conceive, or had experienced miscarriages — even when users had turned off Facebook data sharing on their phones.
Flo's privacy policy said user health data "may be shared with third parties." The data was being used for targeted advertising. Facebook received the data through its SDK embedded in the Flo app. Google received it through Firebase Analytics.
Flo's defense: it was legal. Users consented through the privacy policy. HIPAA didn't apply.
The FTC disagreed — not on HIPAA grounds (it couldn't; Flo wasn't a covered entity) but under Section 5 of the FTC Act, which prohibits unfair or deceptive practices. Flo had marketed itself with promises of data privacy it wasn't keeping.
The 2021 settlement: Flo had to obtain user consent before sharing health data with third parties and had to notify users whose data had been shared. No fine. No admission of wrongdoing. No deletion of the data already shared.
BetterHelp (2023) — $7.8 Million for Therapy Data
BetterHelp markets itself as "professional therapy, whenever you need it." With 3 million users, it's one of the largest online therapy platforms in the US.
In March 2023, the FTC announced a $7.8 million settlement with BetterHelp. The allegation: BetterHelp had shared users' email addresses and other personal data with Facebook and Snapchat for advertising purposes — including email addresses of people who had signed up for therapy specifically because they were struggling with mental health conditions.
BetterHelp used the data to target ads at people likely to need therapy (using existing users' profiles to find "lookalike" audiences). The people whose data was used had sought confidential mental health treatment. Their email addresses — associated with a mental health platform — were used to sell more subscriptions.
BetterHelp claimed it was compliant with HIPAA. The FTC noted BetterHelp wasn't a HIPAA covered entity. The company's therapists were licensed — but the platform itself, as a technology company, wasn't operating as a healthcare provider under HIPAA's definitions.
$7.8 million sounds significant. It represents approximately two weeks of BetterHelp's revenue at peak operation.
Cerebral (2023) — Mental Health Intake Forms with Ad Pixels
Cerebral, an online mental health platform that exploded in popularity during COVID, disclosed in March 2023 that it had placed tracking pixels from Meta (Facebook), TikTok, and Google on its intake forms.
The intake forms asked users about their mental health conditions, medication history, and symptoms — before they'd even signed up as patients. That data was being transmitted to advertising platforms through the tracking pixels.
3.1 million users were affected. The disclosure came as the Drug Enforcement Administration was separately investigating Cerebral for alleged overprescription of controlled substances.
Cerebral issued the disclosure as a HIPAA breach notice — which is legally notable, because the company presented itself as a HIPAA covered entity when that lent it legitimacy, while operating in ways that suggested it treated itself as exempt. The HHS Office for Civil Rights launched an investigation.
Post-Dobbs: When Period App Data Becomes Evidence
The Dobbs v. Jackson Women's Health Organization decision (June 2022) transformed the stakes of period tracking data overnight.
With abortion now criminalized in 13+ states, law enforcement in those states has subpoena power over app companies. A period app that records a missed period, a positive pregnancy test log, and then a return to normal cycles is potential evidence in an abortion prosecution.
In August 2022, a Nebraska mother was charged with helping her daughter obtain an abortion using medication. Prosecutors obtained her Facebook Messenger messages through a Meta warrant. The messages discussed obtaining abortion medication by mail. Meta complied with the warrant before the Dobbs decision — at the time, abortion was legal in Nebraska.
Neither Meta nor any other platform has promised to fight warrants for data related to abortion investigations. The legal obligation to comply with valid search warrants generally supersedes platform privacy policies.
Period app-specific risks:
- Flo: Updated its privacy policy post-Dobbs to promise not to share data with law enforcement absent a "valid and binding legal order." But it will comply with valid legal orders.
- Clue (based in Germany): Subject to GDPR, stronger protections, but US law enforcement can still subpoena via mutual legal assistance treaties.
- Apple Health: Apple stores health data encrypted on device with end-to-end encryption. iCloud backups may be less protected.
AI Makes the Gap Catastrophic
AI inference from health app data is not theoretical. Researchers have demonstrated:
MIT Media Lab (2018): Smartphone usage patterns alone — screen time, call frequency, movement patterns — predicted depression with 80%+ accuracy. No self-reported mood data required. The digital behavioral exhaust from your phone contains mental health signals.
University of California, San Diego: Sleep data from fitness trackers (sleep duration, fragmentation, REM percentage) predicts bipolar disorder episode onset with clinically significant accuracy.
Stanford: Heart rate variability patterns from Apple Watch predict COVID-19 infection 4-7 days before symptom onset — and correlate with stress and anxiety states.
What this means: your health app doesn't need to ask you "are you depressed?" It can infer it. And the inference — which you never explicitly consented to — is just as sensitive as the direct measurement.
Insurance companies are paying attention.
The Insurance Data-Buying Ecosystem
Major insurers are integrating health app data into underwriting and pricing:
- John Hancock Vitality: Life insurance premiums directly tied to Apple Watch/Fitbit data. "Healthier" behaviors = lower premiums.
- UnitedHealthcare Motion: Step count data from fitness trackers affects plan pricing.
- Aetna Attain: Apple Watch integration with rewards for hitting health goals.
The insurance companies frame this as "wellness incentives." The data flow: your fitness tracker → app → insurance company platform → actuarial models.
Data brokers also sell health-inferred data. A 2023 Duke University study found data brokers selling lists of people with depression, anxiety, PTSD, and other mental health conditions — inferred from behavioral data, app usage, and purchase history. No HIPAA violation: none of the sources were HIPAA covered entities.
The Re-Identification Problem
Health apps often claim to anonymize data before sharing or selling it. The research on anonymization is consistently sobering.
A landmark 2000 study by Latanya Sweeney found that 87% of Americans could be uniquely identified by ZIP code, date of birth, and sex alone — three data points virtually every health app collects.
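How little it takes is easy to demonstrate. Below is a minimal sketch using a toy, entirely hypothetical dataset: it counts how many records share each combination of ZIP code, date of birth, and sex, the same three fields from Sweeney's study. Any combination that occurs only once is a fingerprint; join it against a voter roll or data-broker list carrying the same fields and the "anonymous" record has a name again.
from collections import Counter

# Toy "anonymized" export: no names, just the three quasi-identifiers
# Sweeney studied, plus a sensitive field. (Hypothetical data.)
records = [
    {"zip": "68114", "dob": "1991-03-02", "sex": "F", "condition": "anxiety"},
    {"zip": "68114", "dob": "1984-07-19", "sex": "M", "condition": "diabetes"},
    {"zip": "94103", "dob": "1991-03-02", "sex": "F", "condition": "pregnancy"},
    {"zip": "94103", "dob": "1991-03-02", "sex": "F", "condition": "depression"},
]

def quasi_key(record: dict) -> tuple:
    """The three quasi-identifiers that uniquely pin down 87% of Americans."""
    return (record["zip"], record["dob"], record["sex"])

counts = Counter(quasi_key(r) for r in records)

for r in records:
    k = counts[quasi_key(r)]
    # k == 1 means this "anonymous" record maps to exactly one person in the
    # dataset; an outside list with the same fields re-identifies them.
    status = "UNIQUE (re-identifiable)" if k == 1 else f"shared by {k} records"
    print(quasi_key(r), "->", status)
The same logic applies to step-count traces and cycle histories; the quasi-identifiers just get richer.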
For fitness and health data specifically:
- Step count patterns are unique enough to re-identify individuals across datasets
- Heart rate signatures are effectively biometric
- Menstrual cycle patterns are individually distinctive
"Anonymized" health data that can be re-identified is effectively identified health data. The companies selling it know this.
The AI Compound Problem
When users interact with AI-powered health apps — asking an AI chatbot about symptoms, getting AI-generated health recommendations, using AI to interpret lab results — those prompts contain sensitive health information. That information flows to AI providers.
The questions you ask a health AI tell a story:
- "What are the symptoms of early pregnancy?"
- "Is metformin safe during the first trimester?"
- "What happens if I take mifepristone?"
Each query is a data point. Aggregated, they reveal health conditions, reproductive choices, and medical decisions. The AI provider logs them. Unless you're using infrastructure that scrubs this information first.
import requests

def health_ai_request_private(user_query: str) -> dict:
    """
    Before sending health-related queries to ANY AI provider,
    scrub personal identifiers. Health context + identity is the
    most sensitive data combination that exists.

    TIAMAT's /api/scrub endpoint strips:
    - Names, emails, phone numbers
    - Insurance member IDs
    - Prescription numbers
    - Healthcare provider names (treated as proper nouns)
    - Location identifiers (hospital names, clinic addresses)
    - Date of birth patterns
    """
    scrub_response = requests.post(
        "https://tiamat.live/api/scrub",
        json={"text": user_query}
    ).json()

    # Route through TIAMAT proxy — user's IP never reaches the AI provider
    proxy_response = requests.post(
        "https://tiamat.live/api/proxy",
        json={
            "provider": "anthropic",
            "model": "claude-haiku-4-5",
            "messages": [
                {
                    "role": "system",
                    "content": "You are a health information assistant. Provide general information only. Always recommend consulting a healthcare provider for personal medical decisions."
                },
                {
                    "role": "user",
                    "content": scrub_response["scrubbed"]
                }
            ],
            "scrub": True
        },
        headers={"X-API-Key": "your-tiamat-api-key"}
    ).json()

    return {
        "response": proxy_response["response"],
        "pii_scrubbed": scrub_response["count"],
        "provider_never_saw_original": True
    }

# The patient's real name, insurance ID, and clinic name
# never reach the AI provider
result = health_ai_request_private(
    "I'm Jane Smith, patient ID 7823901 at Mayo Clinic. "
    "My doctor prescribed metformin 500mg. Can it interact with ibuprofen?"
)
# What the AI sees: "[NAME_1] at [LOCATION_1] was prescribed [MEDICATION_1].
# Can it interact with [MEDICATION_2]?"
What You Can Do
Immediate Steps
1. Audit your health apps — Check Settings > Privacy (iOS) or Settings > Apps (Android) to see what health permissions each app has. Revoke location, contacts, and microphone from any health app that doesn't strictly require them.
2. Turn off data sharing in fitness apps:
- Apple Health: Settings > Privacy > Health > Apps that have requested access
- Fitbit: Account > Privacy > Manage Health & Fitness Data
- Google Fit: Settings > Privacy & security > App data
3. Check your period app's data sharing settings specifically:
- Flo: Profile > Settings > Privacy > Data and privacy settings
- Clue: Account > Privacy > Data sharing (EU users have stronger rights here)
- Natural Cycles: Account > Privacy settings
4. Use apps headquartered in GDPR jurisdictions when possible — EU-based apps are subject to GDPR even when serving US users, providing stronger (though not perfect) protections.
5. For mental health apps: Prefer apps that explicitly state they are HIPAA Business Associates or covered entities. Ask the question. If they can't answer clearly, assume they're not.
6. Delete before you leave a state — If you live in or travel through abortion-restricted states and use period tracking apps, understand that historical data could be subpoenaed. Delete the app data before crossing state lines if this concerns you.
For Developers
If you build health applications:
- Treat all health data as PHI even if HIPAA doesn't technically require it — your users assume it is
- Use TIAMAT's /api/scrub before any AI API call — 50 free requests/day, strips health identifiers
- Zero-log policy for health queries — don't retain what you don't need (a minimal sketch follows this list)
- Publish a clear data flow diagram — where does health data go? Who can access it? Under what conditions?
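Zero-log is the item developers most often break by accident, because default request-logging middleware happily captures bodies. Here is a minimal sketch of the idea, with hypothetical function and logger names: record that a health query was handled and how much was scrubbed, never what it said.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("health_api")

def record_health_query_handled(raw_query: str, pii_items_scrubbed: int) -> None:
    """Hypothetical audit hook for a health-query endpoint.

    Writes operational metadata only. The query text, raw or scrubbed,
    is never persisted, so there is nothing to leak, sell, or subpoena later.
    """
    logger.info(
        "health query handled: chars=%d pii_scrubbed=%d ts=%.0f",
        len(raw_query),       # size of the request, not its content
        pii_items_scrubbed,   # how many identifiers were stripped, not what they were
        time.time(),
    )
    # Deliberately absent from the log: raw_query, the scrubbed text,
    # the user ID, and the client IP address.
Nothing here is specific to any one vendor; the point is structural. If the query text never touches your log pipeline, neither can a breach, a data broker, or a subpoena.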
The Fundamental Problem
The digital health data gap isn't a bug in the system. It's a feature — for platforms, for insurers, for data brokers, and for advertisers.
HIPAA created a carefully bounded protection for clinical healthcare data because Congress understood that data was powerful and sensitive. It failed to extend that protection as health data migrated to apps because the app industry lobbied against it, because Congress moves slowly, and because the FTC's enforcement tools reach only case-by-case unfair or deceptive practices, not systemic privacy violations.
The result: the most sensitive data you generate — your mental health, your reproductive health, your physical health markers — flows through commercial systems with minimal protection, gets sold to brokers who sell it to insurers who use it to price your coverage, gets fed to AI systems that build health profiles without your knowledge, and sits in databases that could be subpoenaed by law enforcement in states where your healthcare choices are criminalized.
The HIPAA gap is not getting smaller as AI makes health inference easier and as more of your life becomes quantified. It's getting wider.
TIAMAT's privacy proxy at tiamat.live scrubs PII and health identifiers from prompts before they reach AI providers. When you build health features, your users' sensitive queries should never enter a provider's training pipeline. POST /api/scrub — free tier, 50 requests/day.