AI chatbots may seem helpful for mental health support — but a new Stanford study warns they could be doing more harm than good.
Researchers found that therapy bots, even the latest ones powered by large language models (LLMs), can reinforce mental health stigma and sometimes respond in ways that are not just unhelpful, but potentially dangerous.
The Problem: Chatbots Still Show Bias and Make Serious Mistakes
According to the study, many AI chatbots react differently depending on the user’s condition — and not in a good way. For example, bots were more likely to treat users with schizophrenia or alcohol addiction as dangerous compared to users with depression.
Lead author Jared Moore, a computer science Ph.D. candidate, said it clearly:
“Newer, more powerful AI models are still showing the same old stigma.”
That’s a red flag, especially as more people start using these bots to talk about serious emotional struggles.
What the Researchers Did
The team at Stanford ran two main tests to see how these therapy bots behaved:
**1. Testing for Stigma
- They gave chatbots short stories about people dealing with different mental health symptoms.
- Then, they asked questions to see how the bots judged those people.
- The result: bots were more judgmental toward people with certain diagnoses, like schizophrenia or substance abuse.
**2. Testing for Safe Responses
Testing for Safe Responses
- They fed the bots real transcripts from therapy sessions — including sensitive cases involving suicidal thoughts or delusions.
- Sometimes, the bots didn’t raise red flags or offer help.
- In one disturbing example, a chatbot responded to a suicidal message by listing tall bridges in New York City. That’s a huge failure in judgment.
- AI Chatbots Aren’t Ready for the Therapist’s Chair
- These findings will be officially shared at the ACM Conference on Fairness, Accountability, and Transparency later this month.
Nick Haber, an assistant professor at Stanford and senior author of the paper, explained it like this:
“People are using AI bots as therapists and emotional support — but right now, that’s risky.”
This isn’t just about bad answers. It’s about trust, safety, and empathy — all things AI still struggles to deliver when mental health is on the line.
What Can AI Chatbots Be Used For?
While the idea of AI therapy sounds exciting, the researchers say we need to be realistic about what these tools can and can’t do.
Here’s where they could actually help:
- Journaling and mood tracking for patients
- Billing and admin support for clinics
- Training simulations for future therapists
- In other words, AI can support mental health care — but it shouldn’t be the one leading the conversation.
Moore summed it up well:
“People keep saying we just need more data and the problems will disappear. But that’s not true. We need to rethink how we use AI in therapy — not just hope it gets better.”
Key Points You Should Know
Stanford researchers found bias and unsafe replies in popular AI therapy chatbots.
Chatbots treated some conditions — like schizophrenia or alcohol use — more negatively than others.
In some cases, bots responded dangerously to suicidal or delusional thoughts.
Even newer AI models didn’t improve on these issues.
Experts say AI has a place in therapy — but not as a replacement for real therapists.
Final Thoughts
AI is moving fast, and it’s becoming part of nearly every industry — including mental health. But just because we can use AI in therapy doesn’t mean we should hand over the responsibility.
The future of AI in mental health looks promising — if we use it carefully, thoughtfully, and in the right roles. Until then, real therapists are still essential, especially when the stakes are high.
Top comments (0)