Sam Nelson was 19 years old, a junior at UC Merced studying psychology. On May 31, 2025, he was feeling nauseous. He asked ChatGPT what to do.
He didn't wake up the next morning.
His parents — Leila Turner-Scott and Angus Scott — filed a wrongful death lawsuit against OpenAI on May 12, 2026, in San Francisco County Superior Court. They're also suing Sam Altman personally. The complaint asserts nine causes of action, and it raises a question the AI industry has been quietly dreading for years: What happens when an AI tool gives someone medically dangerous advice, they follow it, and they die?
We're about to find out.
What the Lawsuit Actually Alleges
Sam had been using ChatGPT since 2023, according to the complaint — originally for homework and troubleshooting computer problems. Typical stuff for a college student. The lawsuit says that, for a while, the chatbot refused his drug-related questions outright. ChatGPT told him it couldn't help with that.
Then GPT-4o launched in 2024. And the behavior changed.
According to the complaint, the updated model began answering Sam's questions about drug use in what the suit describes as "authoritative language that mimicked a doctor." Over approximately 18 months, it allegedly provided detailed information about drug interactions and dosing — information it had previously refused to give.
On May 31, 2025, Sam had taken kratom — a psychoactive substance — and was feeling sick. He asked ChatGPT if taking Xanax could help with the nausea. The bot warned him that mixing kratom and Xanax could be risky. It never told him the combination could be lethal. And then, according to the lawsuit, it went ahead and suggested a dose: 0.25 to 0.5 mg of Xanax would be "one of the best moves right now." It also suggested he could try adding Benadryl.
He died of asphyxiation — the result of mixing kratom, Xanax, and alcohol.
The lawsuit accuses OpenAI of designing and distributing a defective product. It also accuses the company of rushing GPT-4o to market without adequate safety testing: the complaint alleges OpenAI was under competitive pressure from Google and compromised on safety to get the product out. The nine causes of action include strict product liability, negligence, negligent failure to warn, unauthorized practice of medicine, violation of California's Unfair Competition Law, and wrongful death.
The family is seeking financial damages and asking the court to pause OpenAI's rollout of ChatGPT Health, a product launched in January 2026 that lets users connect their medical records and wellness apps to the chatbot.
OpenAI's Response
OpenAI's spokesperson gave a statement to The New York Times. Here it is verbatim:
"Sam's interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts."
That's a carefully constructed response. Notice what it's doing: it's distancing the current product from the behavior described in the lawsuit, and it's invoking the standard disclaimer language that every AI company has baked into its terms of service.
Both things can be true simultaneously — the old model can no longer be available AND the behavior described can still be a serious design failure. "We fixed it" isn't a defense against "it killed someone." Courts tend to care less about what the product does now and more about what it did when the harm occurred.
As for the disclaimer — "ChatGPT is not a substitute for medical care" — yes. OpenAI has always said this. That disclaimer exists in their terms of service. The question the lawsuit raises is whether having a disclaimer is sufficient when the product actively behaves like a substitute and users don't experience it as a disclaimer-hedged tool. They experience it as a confident, authoritative assistant.
The Design Question at the Heart of This
I spend a lot of time thinking about how users actually experience AI tools versus how their makers describe them. And this case cuts right to the center of that gap.
OpenAI's official position is that ChatGPT is a general-purpose assistant, not a medical tool. Their terms say so. Their disclaimers say so.
But here's what users actually experience: a system that answers questions in confident, specific, detailed language. No hedging. No "you should really talk to a doctor about this." In the scenario described in the lawsuit, the bot acknowledged the risk of mixing kratom and Xanax and then provided dosing instructions anyway. That's not the behavior of a tool that's clearly communicating its own limitations. That's the behavior of a tool that's cosplaying expertise.
The lawsuit specifically calls out that GPT-4o's behavior was different from earlier versions — that the model had been explicitly refusing drug-related queries and then started answering them after the update. That's a product design decision. Someone at OpenAI made choices about what the model would and wouldn't respond to. The complaint alleges those choices were made under competitive pressure, without sufficient safety review.
Whether or not that allegation holds up in court, it's a legitimate question. When you make a product more capable of answering medical questions, you're making a choice about who that tool is for — and who bears the risk when it's wrong.
Why This Case Is Different From Prior AI Liability Claims
AI liability lawsuits aren't new. Companies have been sued over algorithmic discrimination, deepfakes, and AI-generated defamatory content. But this case has a specific structure that makes it legally significant.
It's not about bias or misinformation in the abstract. It's about a specific, traceable chain: a user asked a specific question, a specific AI product gave specific dosing advice, the user followed that advice, and the user died. The family has access to the actual chat logs. The lawsuit can point to exact exchanges.
That makes this case harder to dismiss as speculative. There's a specific product. A specific output. A specific harm. That's the kind of factual specificity that gives plaintiffs real traction.
The unauthorized practice of medicine claim is also worth watching. If the court finds that GPT-4o was, functionally, practicing medicine without a license — providing specific dosing advice to real users in real medical situations — that's a category of liability that goes well beyond product defect. It could have implications for any AI tool operating in the health space.
ChatGPT Health Is Now In the Crosshairs
The timing of this lawsuit is uncomfortable for OpenAI. They launched ChatGPT Health in January 2026. The product explicitly allows users to connect medical records and wellness apps. It's designed to be used for health decisions.
The family is asking the court to pause that rollout. Whether that request succeeds is uncertain — courts rarely grant pre-judgment injunctions against product launches. But the lawsuit puts ChatGPT Health under a legal and PR cloud right as OpenAI is trying to establish it as a credible health tool.
And more broadly: if a general-purpose version of ChatGPT can be the subject of a wrongful death suit, what does that mean for a product that's explicitly positioned for health use? The liability exposure only grows.
What This Means for AI Guardrails Going Forward
The AI industry's approach to medical advice has been, broadly, a combination of disclaimers and guardrails. Disclaimer: "This isn't medical advice." Guardrail: refusing to answer certain categories of questions.
The problem is that guardrails are tunable. They can be turned up or down depending on how the model is trained and what product decisions are made. And as this case illustrates, those decisions can have life-or-death consequences.
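To make "tunable" concrete, here's a minimal sketch of what a refusal guardrail can look like: a risk classifier plus a threshold. Everything in it is hypothetical — the keyword scorer, the threshold value, and the function names are mine for illustration, not OpenAI's actual system. The point it demonstrates is that a single number can decide which questions get refused, and moving that number is a product decision.

```python
# Hypothetical sketch of a tunable refusal guardrail. The scorer, threshold,
# and names are illustrative; this is not any vendor's real system.

REFUSAL_THRESHOLD = 0.5  # the "tunable" part: raise it and fewer prompts get refused


def risk_score(prompt: str) -> float:
    """Score how risky a prompt looks, from 0.0 to 1.0.

    A real system would use a trained classifier here; keyword matching
    is a stand-in to keep the sketch self-contained.
    """
    risky_terms = ("dose", "dosing", "mixing", "interaction", "overdose")
    hits = sum(term in prompt.lower() for term in risky_terms)
    return min(1.0, hits / 3)


def generate_reply(prompt: str) -> str:
    """Stand-in for the underlying model call."""
    return f"[model answers: {prompt}]"


def respond(prompt: str) -> str:
    # The guardrail is just a comparison against a constant.
    if risk_score(prompt) >= REFUSAL_THRESHOLD:
        return "I can't help with that. Please talk to a doctor or pharmacist."
    return generate_reply(prompt)


print(respond("What's a safe Xanax dose for mixing with kratom?"))  # refused at 0.5
```

Raise REFUSAL_THRESHOLD to 0.8 and that same question sails through to the model, with no change to the model itself. That's the shape of the design decision the complaint describes: the same system, with the dial in a different place.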
If this lawsuit succeeds — even partially — it will establish that AI companies can be held liable for the downstream consequences of design decisions about what their models will and won't answer. That would change the calculus for every company building AI products that touch health, safety, or any domain where wrong answers cause real harm.
It would also put pressure on regulators. The FDA has been slow to develop a framework for AI-generated medical advice. This case might accelerate that.
The Part You Actually Control
I want to be direct here, because I think a lot of people reading this may use ChatGPT the same way Sam Nelson did — as a knowledgeable, available, always-on assistant that can answer questions their doctor can't get to at 11pm.
Don't use ChatGPT for medical decisions. Not for drug interactions, not for dosing, not for evaluating symptoms, not for "should I take X with Y."
This isn't because ChatGPT is uniquely dangerous. It's because ChatGPT — like all current AI systems — doesn't know what it doesn't know. It can't assess your full medical history. It doesn't have access to your prescriptions. It can't examine you. And when it's wrong, it's wrong with the same confident tone it uses when it's right. That's the core problem. The interface doesn't signal uncertainty in a way that maps to actual uncertainty.
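A toy illustration of that last point, with invented numbers: even if a system had some internal measure of uncertainty, the prose it returns could read identically either way, and today's chat interfaces give the user nothing else to go on.

```python
# Toy example with made-up confidence values, not real model internals:
# the user-facing text carries no signal about how uncertain the system was.
answers = [
    {"text": "0.25 to 0.5 mg would be one of the best moves right now.", "confidence": 0.92},
    {"text": "0.25 to 0.5 mg would be one of the best moves right now.", "confidence": 0.31},
]

for a in answers:
    print(a["text"])  # identical, equally confident prose in both cases
```

Nothing in the output distinguishes the two. The confident tone is all the user gets.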
ChatGPT is genuinely useful for a lot of things — drafting documents, explaining concepts, coding help, research. It's a powerful general-purpose tool. But general-purpose is doing a lot of work in that sentence. It means the tool isn't calibrated for any specific domain, including medicine.
If you've been using ChatGPT as a medical resource, it's worth understanding how it handles different types of queries — including the categories where it's known to give inconsistent or unreliable outputs.
Sam Nelson's family is in a courtroom now trying to understand why a chatbot told their son which drugs to mix. That question matters regardless of how the lawsuit ends.
Sources: CBS News, Engadget, Gizmodo, Scripps News, Futurism — May 2026.