
GDPR for Mental Health Apps and Therapy Platforms: The Highest-Risk Data in Tech

Mental health apps and therapy platforms collect the most intimate personal data in existence. Not just names and email addresses — but therapy session notes, psychiatric diagnoses, medication histories, crisis disclosures, mood tracking data, and records of suicidal ideation. A breach of this data doesn't just embarrass a user. It can cost them their job, their insurance, their relationships, their sense of safety. It can — in the most serious cases — cost them their life.

This is why GDPR treats health data differently from every other category of personal information. And it's why regulators across Europe are paying increasingly close attention to the mental health technology sector.

If you're building or operating a mental health app, a therapy marketplace, an EAP (employee assistance programme) platform, or any product that touches psychological wellbeing data, this guide covers everything you need to know about GDPR compliance — and why getting it wrong carries consequences far beyond a regulatory fine.

Why Mental Health Data Is Special Category Data Under GDPR

Article 9 of GDPR identifies the special categories of personal data that require elevated protection. Mental health data sits squarely within "data concerning health", which Article 4(15) defines as "personal data related to the physical or mental health of a natural person, including the provision of health care services, which reveal information about his or her health status."

That means therapy session notes, diagnoses, treatment plans, prescriptions, symptom trackers, mood journals, crisis logs, and any other data that reveals information about a person's mental health status are subject to the strictest protections GDPR offers.

The practical implication: you cannot process special category health data unless you satisfy both a standard lawful basis under Article 6 AND a separate condition under Article 9(2). The Article 9 conditions are far more demanding than the Article 6 bases. Legitimate interest — the catch-all that many tech companies rely on — is not available for special category data.

The Explicit Consent Problem in Therapeutic Relationships

The most obvious Article 9 condition for mental health apps is explicit consent (Article 9(2)(a)). But this is where the compliance story gets genuinely complicated.

GDPR requires that consent be "freely given, specific, informed, and unambiguous." It must be an active opt-in — no pre-ticked boxes, no implied agreement. Users must be able to withdraw consent as easily as they gave it. And critically: consent is not freely given when there is a "clear imbalance of power" between the controller and the data subject.

Now consider the practical context of a therapy app. A person is often in significant distress when they sign up. They may be experiencing a mental health crisis. They are in a vulnerable state, and they need the service — making genuine "freedom" to refuse or withhold consent deeply questionable. The ICO and other data protection authorities have noted that consent in healthcare contexts often fails the "freely given" test precisely because of this power imbalance and the patient's dependence on the service.

This doesn't mean consent is never appropriate for mental health platforms. It means you must think carefully about whether it's the right lawful basis for each processing activity — and have a credible answer for why the consent you're collecting is genuinely free, not coerced by necessity.

Lawful Bases: What Actually Works for Mental Health Data

Beyond explicit consent, several Article 9(2) conditions are relevant to mental health platforms:

Substantial public interest (Article 9(2)(g)): Mental health services often fall within substantial public interest in health and social care. In the UK, Schedule 1 of the Data Protection Act 2018 sets out specific conditions — including processing for healthcare purposes and for mental health services — that qualify under this provision. This is often the most defensible basis for core therapeutic processing.

Health or social care (Article 9(2)(h)): Processing "for the purposes of preventive or occupational medicine, for the assessment of the working capacity of the employee, medical diagnosis, the provision of health or social care or treatment", provided it is carried out by or under the responsibility of a professional bound by an obligation of professional secrecy. If your platform employs licensed therapists or operates under clinical supervision, this condition is directly relevant.

Vital interests (Article 9(2)(c)): In genuine crisis situations — where a user discloses suicidal intent, for example — vital interests can justify processing. But this is a narrow, last-resort basis. It applies when the data subject is physically or legally incapable of giving consent and there is an immediate threat to life. It's not a catch-all for handling sensitive data because it's convenient.

Legal claims (Article 9(2)(f)): Processing necessary for establishing, exercising, or defending legal claims. Relevant for record-keeping in clinical contexts, but not a primary basis.

For most mental health apps, the honest answer is that you'll need a combination of bases — explicit consent for some processing activities, substantial public interest or healthcare professional conditions for clinical functions, and potentially vital interests as an emergency fallback.

Data Minimisation: The Genuine Tension in Mental Health Tech

GDPR's data minimisation principle requires you to collect only what is "adequate, relevant and limited to what is necessary." For most apps, this is straightforward: collect less data.

Mental health platforms face a genuine tension here. More data often means better care. A comprehensive mood history helps a therapist understand patterns. Longitudinal symptom tracking improves clinical outcomes. Session notes documenting exactly what a client said enable continuity of care. Data minimisation taken to its logical extreme would mean providing worse mental health support.

The GDPR answer isn't to collect no data — it's to collect only what is genuinely necessary for each specific purpose and to be able to justify that necessity. In practice this means:

  • Distinguishing between data required for clinical care and data collected for product improvement or research
  • Applying purpose limitation strictly — not allowing clinical data to be repurposed for marketing analytics
  • Reviewing every data point you collect and documenting why it's necessary
  • Implementing strong retention schedules — session notes don't need to be held forever

The key phrase in GDPR's minimisation principle is "limited to what is necessary in relation to the purposes for which they are processed." If the purpose is mental healthcare, necessary data may be extensive. If the purpose is newsletter personalisation, it should be minimal.
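In engineering terms, those principles translate well into a purpose-bound data inventory that your purge jobs can actually read. The sketch below is a minimal illustration, not a reference implementation: the categories, purposes, justifications, and retention periods are assumptions that would need to reflect your own clinical and legal requirements.

```typescript
// Minimal sketch of a purpose-bound data inventory with retention rules.
// Categories, purposes, and retention periods are illustrative assumptions only.

type Purpose = "clinical_care" | "product_improvement" | "marketing";

interface DataPointPolicy {
  category: string;          // e.g. "session_notes", "mood_entries"
  purpose: Purpose;          // the single purpose this data is collected for
  justification: string;     // documented necessity (data minimisation evidence)
  retentionDays: number;     // how long it may be kept after collection
}

const inventory: DataPointPolicy[] = [
  {
    category: "session_notes",
    purpose: "clinical_care",
    justification: "Continuity of care between sessions and clinicians",
    retentionDays: 365 * 8,  // example only: align with local clinical record rules
  },
  {
    category: "app_usage_events",
    purpose: "product_improvement",
    justification: "Aggregate feature usage; no clinical content included",
    retentionDays: 90,
  },
];

// Returns true when a stored item has exceeded its retention period and should
// be deleted or anonymised by a scheduled purge job.
function isExpired(policy: DataPointPolicy, collectedAt: Date, now = new Date()): boolean {
  const ageDays = (now.getTime() - collectedAt.getTime()) / (1000 * 60 * 60 * 24);
  return ageDays > policy.retentionDays;
}
```

Keeping the inventory in code (or in configuration the code reads) also gives you the documentation trail the regulator expects: each data point carries its purpose and its justification next to the rule that enforces its retention.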

Therapist-Client Privilege and GDPR

Many jurisdictions recognise some form of therapist-client privilege — a legal protection preventing therapists from being compelled to disclose therapy session content. GDPR doesn't eliminate this protection, but it operates alongside it in complex ways.

Under GDPR, data subjects have rights — including the right of access (Article 15), the right to rectification (Article 16), and the right to erasure (Article 17). These rights apply to mental health records in principle, but with important exceptions.

Right of access: Patients generally have the right to access their health records, including therapy notes. However, member states can restrict this right where access would be likely to cause serious harm to the data subject (for example, if seeing a raw session note might trigger a mental health crisis). In the UK, the Data Protection Act 2018 contains a specific exemption allowing health data to be withheld from a subject access response where disclosure would be likely to cause serious harm to the physical or mental health of the data subject or another person.

Right to erasure: Patients can request deletion of their data. But the right is not absolute — erasure can be refused where records must be retained to comply with legal obligations, to provide health or social care, or to establish, exercise, or defend legal claims. Document your retention justifications clearly.

Rectification: If a client believes a therapy note is factually incorrect, they can request correction. In clinical practice, a disputed professional opinion is usually not rewritten; instead, the client's disagreement is recorded alongside the original entry, in line with professional record-keeping standards.

Your platform needs a clear policy on handling data subject rights requests specifically in the context of clinical records — not just the generic "delete my data" DSAR workflow.
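To make that concrete, here is a minimal sketch of how an erasure request against clinical records might be triaged before it reaches the generic deletion pipeline. The exemption grounds and retention fields are illustrative assumptions; the actual reasons for refusing or limiting a request must come from your legal and clinical advisers, and every refusal should be documented and explained to the data subject.

```typescript
// Sketch: triaging an erasure request against clinical records.
// Exemption grounds and retention fields below are illustrative assumptions.

type ErasureDecision =
  | { action: "erase" }
  | { action: "refuse"; ground: string; explanation: string };

interface ClinicalRecord {
  recordId: string;
  retentionRequiredUntil?: Date;  // e.g. statutory or professional retention period
  subjectToLegalHold: boolean;    // e.g. pending legal claim or safeguarding inquiry
}

function triageErasureRequest(record: ClinicalRecord, now = new Date()): ErasureDecision {
  if (record.subjectToLegalHold) {
    return {
      action: "refuse",
      ground: "legal_claims",
      explanation: "Retention necessary to establish, exercise or defend legal claims.",
    };
  }
  if (record.retentionRequiredUntil && record.retentionRequiredUntil > now) {
    return {
      action: "refuse",
      ground: "clinical_retention",
      explanation: "Record is within its mandatory clinical retention period; erasure deferred.",
    };
  }
  return { action: "erase" };
}
```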

Sharing with Insurers and Employers: Highly Restricted Territory

Two of the most sensitive data flows in mental health technology involve insurers and employers. Both warrant extreme caution.

Insurance companies: Many mental health apps offer coverage integrations or are funded through insurance. Sharing diagnostic or treatment data with insurers is highly restricted under GDPR. Insurers are not healthcare providers — they're commercial entities with financial interests in understanding your health status. Processing health data for insurance purposes requires a specific Article 6 lawful basis and a separate Article 9(2) condition. Blanket sharing of session content or clinical data with insurers without clear, specific, freely given consent is almost certainly unlawful.

Employers: Employee assistance programmes (EAPs) sit in particularly fraught territory. An employee might access an EAP mental health app that is funded and managed by their employer. The employer should typically receive only aggregate, anonymised data — usage statistics, not individual session content. But the power dynamics are severe: an employee who knows their employer might access their therapy data cannot give genuinely free consent.

GDPR is clear: processing employee health data requires either explicit consent (which is questionable given the employment power imbalance) or substantial public interest/healthcare professional conditions. Sharing individual employees' mental health data with their employer — outside of specific occupational health contexts with proper safeguards — is extremely high risk.

Build your EAP product with strict data separation between individual clinical data and aggregate employer reporting. Make the firewall explicit, contractual, and technical — not just a policy promise.
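One way to make that firewall technical rather than merely contractual is to enforce a minimum cohort size on anything reported to the employer, so that small groups cannot be re-identified. The sketch below is illustrative; the threshold of 10 is an assumption chosen for the example, not a figure taken from the regulation or from guidance.

```typescript
// Sketch: employer-facing EAP reporting that only ever sees aggregates,
// with small cohorts suppressed to reduce re-identification risk.
// The minimum cohort size of 10 is an illustrative assumption.

interface UsageRecord {
  department: string;   // grouping key visible to the employer
  sessionsUsed: number; // no clinical content, ever
}

interface EmployerReportRow {
  department: string;
  employeesUsingService: number;
  totalSessions: number;
}

const MIN_COHORT_SIZE = 10;

function buildEmployerReport(records: UsageRecord[]): EmployerReportRow[] {
  const byDepartment = new Map<string, UsageRecord[]>();
  for (const r of records) {
    const group = byDepartment.get(r.department) ?? [];
    group.push(r);
    byDepartment.set(r.department, group);
  }

  const rows: EmployerReportRow[] = [];
  for (const [department, group] of byDepartment) {
    // Suppress any cohort small enough that individuals could be inferred.
    if (group.length < MIN_COHORT_SIZE) continue;
    rows.push({
      department,
      employeesUsingService: group.length,
      totalSessions: group.reduce((sum, r) => sum + r.sessionsUsed, 0),
    });
  }
  return rows;
}
```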

Crisis Data and Duty of Care

What happens when a user discloses suicidal intent in a therapy session, a chat interface, or a mood tracker?

Mental health professionals have legal duties of care that may require them to break confidentiality — notifying emergency services, alerting next of kin, or facilitating emergency intervention. These obligations exist in law across most jurisdictions and don't disappear because technology mediates the therapeutic relationship.

GDPR accommodates this through the vital interests basis (Article 9(2)(c)) and, in many member states, through specific health and social care provisions. But having the right legal basis isn't sufficient on its own — you need clear operational protocols:

  • What triggers a crisis response in your platform?
  • Who is responsible for that response?
  • What data is shared with emergency services or next of kin, and on what basis?
  • How is this disclosed to users in advance (even where the processing does not rely on consent and cannot be opted out of)?
  • What records do you keep of crisis interventions?

Regulators and courts will not accept "we didn't know the duty of care applied to us because we're a tech company" as a defence. If your platform facilitates therapeutic interactions, you inherit some of the obligations of therapeutic relationships.
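Those protocol questions translate naturally into an auditable record of each intervention. A minimal sketch follows; the field names and values are illustrative assumptions of one possible structure, not a schema prescribed by GDPR or by any clinical standard.

```typescript
// Sketch: an auditable record of a crisis intervention.
// Field names and values are illustrative assumptions, not a prescribed schema.

interface CrisisIntervention {
  userId: string;
  detectedAt: Date;
  trigger: "clinician_judgement" | "user_disclosure" | "automated_flag";
  responsibleRole: string;            // e.g. "on-call clinician"
  dataShared: string[];               // e.g. ["name", "phone", "location"]
  sharedWith: string[];               // e.g. ["emergency_services"]
  legalBasis: "vital_interests" | "health_or_social_care";
  disclosedToUserInAdvance: boolean;  // was this possibility explained at onboarding?
  notes: string;                      // factual account of what was done and why
}
```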

Children's Mental Health Data: Double Protection

Children's mental health data receives double protection under GDPR: the special category protections for health data, plus the additional safeguards for children's data generally.

Consent: Under GDPR, member states can set the age below which parental consent is required for online services anywhere between 13 and 16. The UK sets this at 13; EU member states range from 13 to 16. Below the applicable age threshold, a parent or guardian must consent on behalf of the child — but this creates complications in therapeutic contexts, where a young person may be seeking support precisely because of family difficulties.
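A simple illustration of the jurisdiction-dependent threshold follows. The country values shown are examples believed to be current at the time of writing, but they are assumptions for the sketch and must be verified against local law before being relied on.

```typescript
// Sketch: deciding whether parental consent is needed, based on the
// jurisdiction-specific age of digital consent. Values are illustrative and
// must be verified against current local law before use.

const AGE_OF_DIGITAL_CONSENT: Record<string, number> = {
  GB: 13, // UK Data Protection Act 2018
  IE: 16,
  DE: 16,
  FR: 15,
};

const GDPR_DEFAULT_AGE = 16;

function parentalConsentRequired(countryCode: string, userAge: number): boolean {
  const threshold = AGE_OF_DIGITAL_CONSENT[countryCode] ?? GDPR_DEFAULT_AGE;
  return userAge < threshold;
}
```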

Age verification: You must make reasonable efforts to verify age. Regulators have been clear that this means more than a checkbox asking "are you over 16?"

Content and design: The UK ICO's Children's Code (Age Appropriate Design Code) requires services likely to be accessed by children to apply additional protections — including data minimisation, high privacy defaults, and prohibitions on using children's data in ways detrimental to their wellbeing. A mental health app accessed by teenagers is squarely within this framework.

Parental access vs. therapeutic confidentiality: One of the most difficult questions in adolescent mental health is when a therapist or platform should allow parents to access a young person's records. This is a clinical and ethical question as much as a legal one, and your platform needs a clear policy developed with clinical expertise.

Third-Party Integrations: Video Platforms, Payment Processors, and Their DPIA Implications

Virtually every mental health app uses third-party services:

  • Video call platforms (Zoom, Teams, Google Meet) for teletherapy sessions
  • Payment processors (Stripe, Braintree) for subscription or session fees
  • Booking and scheduling tools (Calendly, Acuity) that see client names and appointment data
  • Analytics platforms that may capture behavioural patterns in a sensitive health context
  • Push notification services that may see message content
  • AI processing services that may process session transcripts or notes

Every one of these is a data processor relationship requiring a Data Processing Agreement (DPA) under Article 28. But more importantly, when third-party services touch health data or operate in the context of health services, you need to assess whether their default data handling is appropriate for sensitive health contexts.

Zoom, for example, has specific HIPAA-compliant configuration options. Standard Zoom — used without those configurations — may not be appropriate for processing mental health session data under GDPR's health data provisions. Similarly, standard Google Analytics — which sends data to US servers — requires careful assessment under post-Schrems II data transfer rules.

This is exactly the territory where a Data Protection Impact Assessment (DPIA) is mandatory. Under GDPR Article 35, DPIAs are required when processing "is likely to result in a high risk to the rights and freedoms of natural persons." Processing special category health data at scale in a mental health context meets this threshold without question.

Your DPIA must cover:

  • What data each third party receives and why
  • Whether DPAs are in place
  • International data transfer mechanisms (Standard Contractual Clauses if data leaves the EEA)
  • The risk to data subjects if a third party is breached
  • Technical and organisational measures to mitigate those risks

Don't treat the DPIA as a compliance checkbox. In mental health tech, it's one of your most important risk management tools.
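A lightweight processor register, kept alongside the DPIA, helps keep those questions answered as integrations change. The sketch below uses illustrative fields and a placeholder vendor entry; nothing in it is a finding about any real service.

```typescript
// Sketch: a processor register entry feeding the DPIA.
// Vendor, transfer mechanism, and risk rating shown are illustrative placeholders.

interface ProcessorEntry {
  vendor: string;
  service: string;                 // what it does in your stack
  dataReceived: string[];          // exactly what personal data it sees
  touchesHealthData: boolean;      // does it see clinical content or a health context?
  dpaInPlace: boolean;             // Article 28 data processing agreement signed?
  transferMechanism: "none_eea_only" | "sccs" | "adequacy_decision" | "unassessed";
  residualRisk: "low" | "medium" | "high";
  mitigations: string[];
}

const exampleEntry: ProcessorEntry = {
  vendor: "ExampleVideoVendor",
  service: "Teletherapy video sessions",
  dataReceived: ["participant names", "meeting metadata", "audio/video streams"],
  touchesHealthData: true,
  dpaInPlace: true,
  transferMechanism: "sccs",
  residualRisk: "medium",
  mitigations: ["health-appropriate configuration enabled", "recording disabled by default"],
};
```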

Data Breach Consequences: Why Mental Health Platforms Are Different

Under GDPR Article 33, you must notify your supervisory authority of a personal data breach without undue delay and, where feasible, within 72 hours of becoming aware of it, unless the breach is unlikely to result in a risk to individuals. If the breach is likely to result in "high risk" to individuals, you must also notify those individuals directly (Article 34).

For most apps, a breach notification is embarrassing and operationally disruptive. For a mental health platform, a breach is potentially catastrophic for the people whose data is exposed:

  • Therapy session notes could be used for blackmail
  • Diagnoses could affect employment decisions, custody cases, or insurance coverage
  • Crisis disclosures (including details of suicide attempts or self-harm) could be used to harm or manipulate a vulnerable person
  • The mere fact of being a mental health platform user carries stigma that could affect a person's professional and personal life

This means your breach response plan must account for the specific nature of mental health data. Notification letters that minimise the harm — "we take your privacy seriously and have secured the issue" — are inadequate when the disclosed data could lead to someone losing their job or being discriminated against.

You also need to think carefully about the threshold for notifying affected individuals. Regulators expect mental health platforms to err on the side of notification given the elevated risk.
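In practice, the 72-hour clock and the "high risk" decision are worth encoding in your incident tooling rather than leaving to memory during a crisis. A minimal sketch follows; the risk heuristic is an illustrative assumption and is no substitute for a case-by-case assessment.

```typescript
// Sketch: tracking the Article 33 notification deadline and the Article 34
// "high risk" decision for a breach. The risk heuristic here is illustrative only.

interface BreachRecord {
  becameAwareAt: Date;
  dataCategoriesExposed: string[];   // e.g. ["session_notes", "diagnoses"]
  affectedUserCount: number;
}

const SEVENTY_TWO_HOURS_MS = 72 * 60 * 60 * 1000;

function authorityNotificationDeadline(breach: BreachRecord): Date {
  // Article 33: notify the supervisory authority without undue delay and,
  // where feasible, within 72 hours of becoming aware of the breach.
  return new Date(breach.becameAwareAt.getTime() + SEVENTY_TWO_HOURS_MS);
}

function likelyHighRiskToIndividuals(breach: BreachRecord): boolean {
  // Illustrative assumption: exposure of clinical content in a mental health
  // context is treated as high risk, triggering direct notification (Article 34).
  const clinicalCategories = ["session_notes", "diagnoses", "crisis_disclosures"];
  return breach.dataCategoriesExposed.some((c) => clinicalCategories.includes(c));
}
```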

ICO Guidance for Health and Social Care

The UK ICO has published specific guidance on data protection in health and social care contexts. Key points relevant to mental health platforms:

The data protection principles apply with particular force: The ICO expects health and social care organisations to apply data minimisation, purpose limitation, and accuracy requirements rigorously — not treat them as aspirational.

Transparency is essential: Patients and clients must understand what data is collected, why, how long it's kept, who it's shared with, and what rights they have. Privacy notices for mental health platforms need to be genuinely clear — not 40 pages of legal text — because the people reading them may be in distress and have limited bandwidth for complex information.

Security must match the risk: The ICO expects security measures proportionate to the sensitivity of data being processed. For mental health data, this means strong encryption (at rest and in transit), strict access controls, staff training, and regular security assessments.
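As one example of security proportionate to the risk, application-level encryption of session notes (in addition to disk or database encryption) limits what a compromised database dump exposes. Below is a minimal Node.js sketch using AES-256-GCM; key management (a KMS, rotation, per-record keys) is the genuinely hard part and is deliberately not shown.

```typescript
// Sketch: application-level encryption of a session note with AES-256-GCM.
// Key management (KMS, rotation, per-record keys) is deliberately out of scope.
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

interface EncryptedNote {
  iv: Buffer;         // unique per note
  authTag: Buffer;    // integrity/authenticity tag
  ciphertext: Buffer;
}

// `key` must be a 32-byte key, ideally fetched from a KMS rather than config.
function encryptNote(plaintext: string, key: Buffer): EncryptedNote {
  const iv = randomBytes(12); // 96-bit IV, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, authTag: cipher.getAuthTag(), ciphertext };
}

function decryptNote(note: EncryptedNote, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, note.iv);
  decipher.setAuthTag(note.authTag);
  return Buffer.concat([decipher.update(note.ciphertext), decipher.final()]).toString("utf8");
}
```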

DPOs: If your mental health platform involves large-scale processing of health data, you are required to appoint a Data Protection Officer (Article 37). This is not optional. A DPO must have professional expertise in data protection law and practice.

The ICO has enforcement powers and actively investigates health data breaches. Following an ICO investigation into a health app, enforcement action — including fines and enforcement notices — can follow quickly.

Practical Steps for Mental Health App Founders

Getting GDPR right for a mental health platform is more demanding than for a standard SaaS product. Here's where to start:

1. Map every data point you collect — and for each one, document the purpose, the legal basis (both Article 6 and Article 9(2) where applicable), how long you retain it, and who has access to it.

2. Conduct a DPIA before you launch or before adding significant new features — this is legally required, not optional.

3. Appoint a DPO if required — if you process special category health data at scale, you need one.

4. Audit every third-party integration — check that DPAs are in place, review data transfer mechanisms, and assess whether each service is appropriate for use in a health data context.

5. Write a genuine privacy notice — not a template, but a document that accurately describes what your platform does and is understandable to someone in distress.

6. Build crisis protocols — document what happens when a user discloses a crisis, who is responsible, what data is shared, and on what basis.

7. Get legal advice from a specialist in health data law — GDPR in mental health contexts has nuances that general privacy lawyers may not be familiar with.

8. Scan your platform for compliance gaps — technical compliance (what cookies and trackers are actually loading, whether consent is implemented correctly) is distinct from policy compliance and equally important.
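Part of that technical check can be automated in-house. The sketch below uses Playwright (an assumption; any headless browser will do) to list third-party hosts contacted and cookies set before any consent interaction — a first pass, not a full audit. The URL and first-party host are placeholders.

```typescript
// Sketch: list third-party hosts contacted and cookies set before any consent
// interaction. Uses Playwright; the target URL and first-party host are placeholders.
import { chromium } from "playwright";

async function scanBeforeConsent(url: string, firstPartyHost: string): Promise<void> {
  const browser = await chromium.launch();
  const context = await browser.newContext();
  const page = await context.newPage();

  const thirdPartyHosts = new Set<string>();
  page.on("request", (request) => {
    const host = new URL(request.url()).hostname;
    if (!host.endsWith(firstPartyHost)) thirdPartyHosts.add(host);
  });

  // Load the page but do not interact with the consent banner.
  await page.goto(url, { waitUntil: "networkidle" });

  const cookies = await context.cookies();
  console.log("Third-party hosts contacted before consent:", [...thirdPartyHosts]);
  console.log("Cookies set before consent:", cookies.map((c) => `${c.name} (${c.domain})`));

  await browser.close();
}

scanBeforeConsent("https://example.com", "example.com").catch(console.error);
```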


Not sure whether your mental health platform's data practices meet GDPR requirements? Start with a free scan at app.custodia-privacy.com/scan. It will show you exactly which third-party trackers are loading, whether your consent implementation is working correctly, and where your highest-risk compliance gaps are. That technical baseline is the foundation for everything else.


This post provides general information about GDPR as it applies to mental health apps and therapy platforms. It does not constitute legal advice. Mental health data law has significant nuances across EU member states and UK law. Consult a qualified data protection specialist with experience in health and social care for advice specific to your platform and jurisdiction.
