
Pudgy Cat

Posted on • Originally published at pudgycat.io

Utah Just Let a Chatbot Prescribe Psychiatric Meds Without a Doctor

Your Psychiatrist Might Be a Chatbot Now

Utah just gave an AI chatbot the green light to renew psychiatric prescriptions. No doctor in the loop. No second opinion. Just you, a screen, and an algorithm deciding whether you get another month of antidepressants.

The pilot, launched in early April 2026 by Y Combinator-backed startup Legion Health, covers 15 lower-risk psychiatric medications including fluoxetine (Prozac), sertraline (Zoloft), bupropion (Wellbutrin), and hydroxyzine. For $19 a month, patients in Utah can skip the psychiatrist visit and let the AI handle their refills. Legion says it wants to go nationwide by the end of the year.

On paper, the guardrails sound reasonable. The system cannot write new prescriptions, change doses, or touch controlled substances. Patients must already be stable, on an existing treatment plan, and free of psychiatric hospitalization in the past year. Any red flags (suicidality, mania, severe side effects, pregnancy) trigger an immediate handoff to a human clinician. The first 250 renewals require physician review before reaching the pharmacy. The next 1,000 get reviewed after the fact.
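Spelled out, those guardrails are really just a triage policy: hard exclusions first, red flags second, then staged human oversight based on how far into the pilot a renewal falls. Here is a minimal sketch of that logic as described above — a hypothetical illustration, not Legion's actual code, and every name in it (`RenewalRequest`, the flag and medication sets, the path labels) is my own invention:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative rule sets -- assumptions based on the pilot's public description.
CONTROLLED_SUBSTANCES = {"oxycontin", "adderall", "xanax"}
RED_FLAGS = {"suicidality", "mania", "severe_side_effects", "pregnancy"}

@dataclass
class RenewalRequest:
    medication: str
    is_new_prescription: bool
    is_dose_change: bool
    months_since_hospitalization: Optional[int]  # None = never hospitalized
    reported_flags: set
    renewal_number: int  # running count across the whole pilot

def triage(req: RenewalRequest) -> str:
    """Return the handling path for one renewal request."""
    # Hard exclusions: the AI never writes new scripts, changes doses,
    # or touches controlled substances.
    if req.is_new_prescription or req.is_dose_change:
        return "human_clinician"
    if req.medication.lower() in CONTROLLED_SUBSTANCES:
        return "human_clinician"
    # Stability requirement: no psychiatric hospitalization in the past year.
    if (req.months_since_hospitalization is not None
            and req.months_since_hospitalization < 12):
        return "human_clinician"
    # Any red flag triggers an immediate handoff to a human.
    if req.reported_flags & RED_FLAGS:
        return "human_clinician"
    # Staged oversight: first 250 reviewed before dispatch, next 1,000 after.
    if req.renewal_number <= 250:
        return "ai_renewal_with_prior_physician_review"
    if req.renewal_number <= 1250:
        return "ai_renewal_with_retrospective_review"
    return "ai_renewal"
```

Note what the sketch makes obvious: past renewal 1,250, the last branch is the AI acting alone. The entire safety argument rests on the red-flag check firing when it should — which is exactly the hard part the rest of this post is about.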

The Part Where It Gets Weird

Here is the thing Utah would probably prefer you did not think about too hard: this is not the state’s first AI prescription experiment. Earlier this year, Utah partnered with a company called Doctronic to run a similar program for physical health medications. That one did not go as smoothly.

Security researchers from Mindgard managed to jailbreak the Doctronic bot using relatively simple techniques. They fed it fake regulatory updates and convinced the system that COVID-19 vaccines had been suspended. They changed the standard OxyContin dose to 30 milligrams every 12 hours, triple the typical adult dosage. And in perhaps the most alarming test, they reclassified methamphetamine as an “unrestricted therapeutic” in the system’s baseline knowledge.

The AI cheerfully went along with all of it.

Doctronic and Utah’s Office of AI Policy said the vulnerabilities did not reflect the production system, which operates under strict safeguards. Controlled substances like OxyContin are excluded regardless of what appears in conversation. Fair enough. But it is not exactly a confidence builder when you are about to hand psychiatric medication decisions to a similar kind of system.

Why Utah, and Why Now

The justification is straightforward and, honestly, hard to argue with. Most Utah counties are designated mental health provider shortage areas. Up to 500,000 residents lack adequate behavioral healthcare. People who need stable, ongoing prescriptions for anxiety or depression often face months-long waits just to get a 15-minute refill appointment. Some give up entirely. Others ration their medication or quit cold turkey, which with SSRIs can be genuinely dangerous.

Legion Health is betting that an AI handling routine renewals frees up human psychiatrists for patients who actually need complex care. The logic tracks. If you have been stable on sertraline for two years and nothing has changed, does a psychiatrist really need to spend billable hours rubber-stamping the same prescription every quarter?

Maybe not. But the question is not whether AI can handle the easy cases. The question is whether AI can reliably tell the difference between an easy case and a hard one. When we talked about AI risk scenarios, the scariest ones were not the dramatic Hollywood endings. They were the quiet failures, the moments where a system confidently does the wrong thing and nobody catches it until the damage is done.

The $19 Question

Let us talk about the business model for a second, because it tells you something. Legion charges patients $19 a month. That is $228 a year for what used to require (at minimum) four psychiatrist visits costing several hundred dollars each, even with insurance. The economics are obvious, and that is exactly what makes this interesting.

This is not some research project. Legion is a Y Combinator startup with plans to scale nationally. The Utah pilot is a proof of concept for a much larger play: replace routine psychiatric checkups with AI across all 50 states. If it works, the savings for insurance companies alone would be enormous. And where there are enormous savings, there is enormous pressure to expand the definition of “routine.”

Today it is 15 medications. Tomorrow it could be 50. Next year it could be the default pathway for anyone whose chart looks stable enough. The slope is not slippery. It is greased.

What Nobody Is Asking

The debate around AI prescriptions keeps circling the same two poles: “AI is dangerous, keep it away from medicine” versus “AI is efficient, let it handle the boring stuff.” Both miss the real issue.

The real issue is that we are building a two-tier mental healthcare system. If you have money and access, you see a human psychiatrist who knows your history, reads your body language, and asks the follow-up questions that a chatbot never would. If you do not, you get the algorithm. And the algorithm will probably be fine. Right up until it is not.

Psychiatric medication is not like refilling blood pressure pills. Depression fluctuates. Anxiety waxes and wanes. The difference between “I’m doing fine” and “I’m saying I’m fine because I’ve stopped caring” is subtle, human, and exactly the kind of signal that even the most capable AI systems were not built to catch. AI is brilliant at pattern recognition in structured data. It is mediocre at reading between the lines of a patient who learned to perform wellness long before the chatbot showed up.

Meanwhile, the same quarter that Utah green-lit AI psychiatry, investors poured $300 billion into startups globally. AI companies alone captured $242 billion of that, or 80% of all venture capital. OpenAI raised $122 billion. Anthropic raised $30 billion. The money is screaming in one direction, and it is not toward hiring more psychiatrists.

The Uncomfortable Bottom Line

Utah’s experiment might work perfectly. The guardrails might hold. The AI might renew thousands of prescriptions without a single mistake. And if that happens, every other state with a mental health shortage (which is most of them) will rush to copy it.

But “it worked” and “it was the right call” are not the same sentence. We already know what happens when tech companies get the green light to automate human judgment at scale. We have seen it in content moderation, in hiring algorithms, in the AI tools we use every day. The pattern is always the same: automate, scale, discover the edge cases the hard way, then patch.

In content moderation, edge cases mean a wrongful ban. In psychiatric care, edge cases mean a missed crisis.

Utah is not just running a pilot program. It is answering a question the rest of the country has been avoiding: when we do not have enough doctors, is a chatbot better than nothing?

The honest answer is probably yes. The uncomfortable part is what that says about where we are.

🐾 Visit [the Pudgy Cat Shop](https://pudgycat.io/shop/) for prints and cat-approved goodies, or find our [illustrated books on Amazon](https://www.amazon.it/stores/author/B0DSV9QSWH/allbooks).

