You type a question into an AI. "How do I know if I'm being investigated for securities fraud?" You're nervous, curious, maybe just paranoid. The AI gives a careful, disclaimed answer. You close the tab and forget about it. Months later, you're in court. The prosecutor has a copy of your prompt. They read it aloud. The jury stares. Your digital whisper is now Exhibit A. Can they do that? Yes. And a recent landmark ruling says they absolutely can.
Your prompt history is not a diary. It's not a conversation with a trusted advisor. It's a record of your thoughts, voluntarily handed to a third party, and increasingly, fair game for law enforcement. This is the new reality of AI evidence.
Let's walk through what happened in the courtroom. By the end, you'll understand what the law now says about your AI conversations, how to protect yourself, and what it means for the future of digital privacy.
The Landmark Ruling: United States v. Heppner
In February 2026, a federal judge in New York issued a ruling that changed everything. The case was United States v. Heppner, and it was the first of its kind to squarely address whether conversations with an AI are privileged.
The Facts:
Bradley Heppner, a former financial executive, was under investigation for securities fraud. On his own initiative, he used Anthropic's Claude AI to generate roughly thirty‑one documents analyzing his legal exposure and potential defense strategies. He later shared those documents with his lawyers. When the government seized his devices, it found the AI files. His lawyers argued they were protected by attorney‑client privilege and work‑product doctrine.
The Ruling:
Judge Jed Rakoff rejected every argument. He held that the AI documents were not protected on three independent grounds:
AI is not an attorney. The attorney‑client privilege protects communications with a licensed lawyer. Claude is not a lawyer. End of story. As the court noted, privilege requires a "trusting human relationship" with a professional who owes fiduciary duties, something impossible with an AI.
There is no expectation of confidentiality. Claude's privacy policy explicitly states that Anthropic may collect user inputs and disclose data to government authorities. Heppner therefore had no reasonable expectation that his prompts were private. Sharing information with a third‑party AI platform waives any claim of confidentiality.
He acted alone, not at counsel's direction. The work‑product doctrine protects materials prepared by or at the behest of counsel. Heppner used Claude of his own volition. The fact that he later shared the outputs with his lawyers did not retroactively cloak them in privilege.
The Quote:
"AI's novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney‑client privilege and the work‑product doctrine." – Judge Jed Rakoff
Does Attorney‑Client Privilege Ever Apply to AI?
In very narrow circumstances, yes, but the Heppner ruling draws a bright line. The court explicitly distinguished two scenarios that might yield a different result:
Counsel‑directed use: If a lawyer directs a client to use a specific AI tool as part of the lawyer's legal strategy, and the lawyer supervises the prompts in real time, privilege might attach to the resulting materials.
Closed enterprise platforms: If a company uses a secure, internal AI platform that guarantees confidentiality in its terms, and the platform does not retain or share data with third parties, the analysis could be different.
But here's the catch: even in those scenarios, the privilege would apply to the attorney's work product, not the raw exchange between the client and the AI. And no court has yet ruled on those nuances.
For now, the safe assumption is this: your casual, unsupervised chats with public AI are not privileged.
Can Prosecutors Subpoena Your Conversations with Claude or ChatGPT?
Yes. Heppner makes clear that prosecutors can compel the production of your AI prompts and outputs, both from your devices and directly from the AI provider.
Two Sources of Evidence:
From your devices: If law enforcement seizes your phone or computer, they can search for saved chat logs, prompt histories, and AI‑generated documents stored locally.
From the AI provider: In a separate case, Tremblay v. OpenAI, a federal court ordered OpenAI to produce 20 million de‑identified ChatGPT logs as evidence in copyright litigation. The court ruled that even heavily redacted conversation logs are discoverable when relevant. If OpenAI can be compelled to hand over millions of logs, prosecutors can certainly subpoena your personal conversation history.
The Key Distinction:
Heppner involved a user voluntarily creating and storing AI outputs on his own devices. The court did not resolve whether a prosecutor could directly subpoena the AI company for a specific user's logs without the user's involvement. But given the precedent that ordinary business records are discoverable, it seems likely they could.
What About Your Custom GPTs?
Custom GPTs are a gray area. A custom GPT is essentially a set of instructions and uploaded documents that you provide to tailor the model. Those instructions and documents reveal your thinking, your strategies, your secrets.
The Risk:
If you use a custom GPT for legal or sensitive matters, the underlying instructions you wrote (your "system prompt") and any documents you uploaded could be discoverable. The model itself is a black box, but your inputs are not.
The Uncertainty:
No court has yet ruled on the discoverability of custom GPT configurations. However, the logic of Heppner would likely apply: your instructions to the model are not privileged because they are not communications with an attorney, and you have shared them with a third‑party platform.
Does the Fifth Amendment Protect Your Prompts?
The Fifth Amendment protects you from being compelled to testify against yourself. But it generally does not protect voluntarily created documents.
The Distinction:
Testimonial evidence (what you say in response to compulsion) is protected.
Physical or voluntarily created evidence (documents, emails, prompts) is not.
Your prompts are voluntarily created records, similar to emails or search queries. You typed them. You sent them to a third party. The Fifth Amendment likely does not shield them from a subpoena.
A Nuanced Argument:
A clever lawyer might argue that the act of prompting is testimonial when the prompt itself reveals guilty knowledge. But no court has accepted this argument in the AI context. For now, assume your prompts are fair game.
What This Means for Your Privacy
The legal landscape is shifting rapidly. Here's what you need to know:
Assume nothing is private. Every prompt you type into a public AI platform could end up in court. Treat AI like a public square, not a confessional.
Privilege requires a lawyer. If you need legal advice, ask a human lawyer. AI disclaimers are real: Anthropic's Claude warns users to "consult a qualified attorney," and courts will hold you to that warning.
Terms of service matter. If an AI platform reserves the right to share your data with government authorities (most do), you have no reasonable expectation of privacy. Read the fine print.
Counsel‑directed use might be safer. If you must use AI for legal work, do it at your lawyer's direction and on a secure, enterprise platform with strong confidentiality terms. Even then, proceed with caution.
Deleting your history is not enough. Even if you delete your prompts, the provider may retain logs, backups, or training data. The court in Tremblay v. OpenAI ordered production of millions of logs that had been retained in the ordinary course of business.
Your Actionable Takeaways
Never type anything into a public AI that you wouldn't want read aloud in court.
If you need legal advice, consult a human attorney. AI is a research assistant, not a lawyer.
Use local or enterprise AI tools for sensitive work. Consumer‑grade platforms create unnecessary risk.
Assume your prompt history is retained indefinitely by the provider, even after deletion.
Advocate for stronger privacy protections and clearer rules for AI evidence.
The New Reality
The prompt is a confession. It reveals what you were thinking, what you were worried about, what you were planning. In the wrong hands, that confession can destroy your career, your relationships, your freedom.
The law is still catching up. But Heppner is a clear warning: your AI conversations are not your private thoughts. They are evidence waiting to be subpoenaed.
The next time you open a chat window, remember: you're not talking to a confidant. You're creating a record. And that record can be used against you.