TL;DR
Are AI meeting assistants safe? Not all of them. In 2025, Otter.ai faced a class action lawsuit for recording without consent, Fireflies.ai was sued for biometric data collection, and Chapman University banned Read AI over security concerns. Cloud meeting tools pose real privacy risks: data stored on third-party servers, AI training on your conversations, and recordings without proper consent. For truly safe meeting transcription, local processing (like Meetily) keeps your data on your device.
Over 62% of organizations now use AI meeting assistants. The market has exploded to $3.5 billion in 2025 and is projected to reach $34 billion by 2035.
But as adoption grows, so do the lawsuits, university bans, and privacy scandals.
If you're using cloud-based meeting AI, you need to understand the risks. And if you're evaluating options, this guide will help you separate the safe tools from the problematic ones.
The Wake-Up Call: 2025's AI Meeting Assistant Lawsuits
Otter.ai Class Action (August 2025)
In August 2025, Justin Brewer filed a class action lawsuit against Otter.ai in California federal court. The allegations are serious:
"The company's AI-powered transcription service... by default does not ask meeting attendees for permission to record and fails to alert participants that recordings are shared with Otter to improve its artificial intelligence systems."
Key allegations from the complaint:
- Otter's "OtterPilot" auto-joins meetings without consent from all participants
- Meeting data is used to train Otter's AI models without explicit permission
- The service outsources consent obligations to customers rather than obtaining it directly
- Violations of the Electronic Communications Privacy Act (ECPA), Computer Fraud and Abuse Act (CFAA), and California Invasion of Privacy Act (CIPA)
The case (Brewer v. Otter.ai Inc., Case No. 5:25-cv-06911) is still pending, with Otter's response deadline extended to November 2025.
⚠️ California Is a Two-Party Consent State
California law requires consent from ALL parties before recording a conversation. Otter's approach of seeking consent only from the meeting host-not all participants-directly conflicts with this requirement.
Fireflies.ai Lawsuit (December 2025)
Just months later, Fireflies.ai faced its own legal challenge. Illinois resident Katelin Cruz filed a complaint alleging:
"Fireflies.AI's meeting assistant records, analyzes, transcribes, and stores the unique vocal characteristics (i.e., 'voiceprints') of every meeting participant... including people who never created a Fireflies account, never agreed to its terms of service, and never gave written consent."
The lawsuit alleges violations of Illinois' Biometric Information Privacy Act (BIPA):
- Collection of biometric voice data without consent
- No published policy on data retention or destruction
- Users report the bot continues joining meetings even after deleting their account
Chapman University Bans Read AI (August 2025)
Academic institutions are taking action too. In August 2025, Chapman University's Information Systems department issued a security notice:
"After investigation, the use of Read AI is prohibited due to security, privacy, and institutional data risks."
The university cited concerns about:
- Data leaving institutional control
- Lack of adequate data protection
- Security vulnerabilities in free AI tools
This wasn't an isolated decision. Organizations across healthcare, education, and government are increasingly restricting cloud-based meeting AI.
Why Cloud Meeting AI Poses Privacy Risks
The lawsuits above aren't random-they reflect systemic problems with how cloud meeting tools handle your data.
1. Your Conversations Train Their AI
Most cloud meeting tools use your transcripts to improve their models. From Otter.ai's own documentation:
"Otter uses a proprietary method to de-identify user data before training our models."
Even "de-identified" data creates risks. The Otter.ai lawsuit alleges that participants weren't informed their conversations would train AI-a significant consent violation.
2. Data Stored on Third-Party Servers
When you use cloud meeting AI, your recordings live on someone else's servers:
| Provider | Data Location | Your Control |
|---|---|---|
| Otter.ai | AWS (US) | Limited |
| Fireflies | Cloud (US) | Limited |
| Read AI | Cloud | Limited |
| Meetily | Your device | Complete |
This matters because:
- 29% of AI breaches stem from third-party SaaS platforms
- The average cost of an AI-related breach is $5.72 million
- Organizations with unmonitored AI face breach costs $670,000 higher than those with controls
3. Auto-Join Without Consent
A particularly concerning pattern: cloud meeting tools that automatically join meetings without explicit permission from all participants.
From user complaints about Fireflies:
"By default, Fireflies joins every calendar event and shares meeting notes with all attendees... Trying to shut it up/switch it off is like trying to remove a deer tick from your leg."
This isn't just annoying-it potentially violates wiretapping laws in 12 states that require all-party consent:
- California
- Florida
- Illinois
- Maryland
- Massachusetts
- Michigan
- Montana
- New Hampshire
- Pennsylvania
- Washington
- Connecticut (civil liability)
- Delaware
4. Shadow AI Is Exploding
Gartner predicts 40% of data breaches will involve "shadow AI" by 2027-unauthorized AI tools used by employees without IT approval.
Meeting AI is particularly vulnerable because:
- Employees install browser extensions without oversight
- Calendar integrations automatically activate
- IT may not know which meetings are being recorded
40% of organizations have already experienced an AI-related privacy incident, and 97% of breached organizations lacked adequate AI access controls.
Cloud vs. Local: A Security Comparison
| Feature | Local AI (Meetily) | Cloud AI (Otter, Fireflies) |
|---|---|---|
| Data Location | ✅ Your device | ❌ Vendor's cloud servers |
| AI Training | ✅ Never trains on your data | ❌ Transcripts may train vendor models |
| Consent Required | ✅ You control the recording | ❌ Bot auto-joins; host-only consent common |
| Breach Exposure | ✅ None (no server copy exists) | ❌ Third-party server breach risk |
| Offline Capability | ✅ Works fully offline | ❌ Requires a cloud connection |
| Compliance Control | ✅ Complete | ❌ Limited |
| Audit Transparency | ✅ Open source (MIT) | ❌ Proprietary |
Who Should Avoid Cloud Meeting AI?
Based on the legal landscape and compliance requirements, these organizations should think twice about cloud-based meeting transcription:
Healthcare Organizations (HIPAA)
Meeting recordings often contain Protected Health Information (PHI). Otter.ai achieved HIPAA compliance in July 2025 and Fireflies launched "Fireflies for Healthcare", but these offerings require:
- Enterprise plans ($25-35+/user/month)
- Business Associate Agreements (BAAs)
- Trust in third-party data handling
Local processing eliminates third-party risk entirely-no BAA needed when PHI never leaves your device.
Legal Firms (Privilege Concerns)
Legal experts warn that allowing meeting AI vendors to access transcripts could waive attorney-client privilege:
"Allowing note-taker vendors to access or use transcripts for their own purposes could provide grounds for a waiver of privilege, which is problematic in the event of a legal dispute."
Financial Services (SOX, PCI, SEC)
Financial regulations require strict controls over where client and deal-related communications are stored and who can access them.
Cloud tools create audit complexity that local processing avoids.
Education (FERPA, Institutional Policies)
Following Chapman University's lead, many institutions are implementing AI tool review processes. Cloud meeting AI often fails these reviews.
European Organizations (GDPR)
Fireflies processes data on US servers, creating GDPR transfer issues. The GDPR requires adequate protection for data leaving the EU, and US cloud services face ongoing legal uncertainty on this point.
ℹ️ GDPR Tip: Consent Isn't Always Enough
Even with consent, GDPR's data minimization principle may conflict with meeting AI that records, stores, and analyzes everything. Local processing with user-controlled retention is the safest approach.
How to Evaluate Meeting AI Security
Before choosing a meeting assistant, ask these questions:
1. Where Does Data Go?
- Best: Stays on your device (local processing)
- Acceptable: Your company's cloud (self-hosted)
- Risky: Vendor's cloud servers
2. Is It Bot-Free?
Visible meeting bots create friction and often trigger the consent issues at the heart of current lawsuits. Bot-free options:
- Meetily - Captures system audio locally
- Granola - Mac-native, bot-free (but cloud processing)
- Platform-native - Zoom AI, Microsoft Copilot (no external bot)
3. What's the Consent Model?
- Does it require consent from ALL participants?
- Can it auto-join without explicit approval?
- What happens when someone says "don't record"?
4. Is the Code Auditable?
Open source tools let you verify exactly what happens to your data. Proprietary tools require trust. One quick spot-check is sketched after the table below.
| Tool | Open Source | Auditable |
|---|---|---|
| Meetily | Yes (MIT) | Full |
| Otter.ai | No | No |
| Fireflies | No | No |
| Whisper (model) | Yes | Full |
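One concrete way to exercise that transparency is to watch whether a transcription process ever opens a network connection while it runs. A minimal sketch using the `psutil` package (the PID argument is whatever process you want to audit; this is a point-in-time snapshot, not a full egress audit, and inspecting another process may require elevated privileges):

```python
# pip install psutil
import sys
import psutil

def outbound_connections(pid: int) -> list[str]:
    """Return the remote endpoints a process currently has open (snapshot only)."""
    proc = psutil.Process(pid)
    return [
        f"{conn.raddr.ip}:{conn.raddr.port}"
        for conn in proc.connections(kind="inet")  # TCP/UDP sockets for this process
        if conn.raddr  # keep only sockets that have a remote address
    ]

if __name__ == "__main__":
    pid = int(sys.argv[1])  # PID of the meeting/transcription process you want to audit
    remotes = outbound_connections(pid)
    print("Outbound connections:", remotes or "none observed")
```

Run it a few times while a transcription is in progress: a genuinely local tool should show no remote endpoints, while a cloud tool will show connections to its servers.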
5. What's the Data Retention Policy?
The Fireflies lawsuit specifically cites lack of published retention/destruction policies as a BIPA violation.
The Local-First Alternative
If the risks above concern you, local AI processing solves them architecturally-not through policy promises, but through technical design.
How local meeting AI works (two short code sketches follow below):
- Audio captured from your device's system audio
- Transcription runs on your CPU/GPU using models like OpenAI Whisper
- Summaries generated locally (or with your chosen API)
- All data stays on your machine
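Here is a minimal sketch of the transcription step in that pipeline, assuming the open-source `openai-whisper` package and an audio file already captured from your system audio (the file name is illustrative; Meetily's actual implementation may differ):

```python
# pip install openai-whisper  (runs entirely on-device; no API key, no upload)
# Also requires ffmpeg on the system PATH for audio decoding.
import whisper

# Load a Whisper model onto your local CPU/GPU. "base" fits most laptops;
# larger models ("small", "medium") trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a locally captured recording. The file name is illustrative;
# point it at whatever your system-audio capture produced.
result = model.transcribe("meeting_audio.wav")

print(result["text"])  # full transcript
for segment in result["segments"]:
    print(f'[{segment["start"]:.1f}s - {segment["end"]:.1f}s] {segment["text"]}')
```

Nothing in this flow touches the network; the model weights and the audio stay on the machine.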
What this means for privacy:
- No server breach can expose your meetings (data isn't there)
- No consent ambiguity (you control the recording)
- No AI training on your conversations (data never leaves)
- Complete GDPR/HIPAA compliance (no third-party processors)
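The summarization step in the pipeline above can stay local too. A minimal sketch, assuming the `ollama` Python client with a local Ollama server and a model you have already pulled via `ollama pull` (the model name and file path are illustrative):

```python
# pip install ollama  (talks to a locally running Ollama server; nothing leaves your machine)
from pathlib import Path

import ollama

# Transcript produced by the local transcription step above (path is illustrative).
transcript = Path("meeting_transcript.txt").read_text(encoding="utf-8")

# Ask a locally running model for a summary. Swap in whichever model
# you have pulled, e.g. `ollama pull llama3.1`.
response = ollama.chat(
    model="llama3.1",
    messages=[
        {
            "role": "system",
            "content": "Summarize this meeting transcript into key decisions and action items.",
        },
        {"role": "user", "content": transcript},
    ],
)

print(response["message"]["content"])
```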
Try Local Meeting AI
Meetily is free, open source, and processes everything on your device. No cloud, no bots, no compromise.
Best Practices for Any Meeting AI
Whether you choose cloud or local, follow these guidelines:
For Organizations
- Create an approved tools list - Don't let employees install random meeting AI
- Review privacy policies - Especially data training and retention clauses
- Establish consent protocols - How do you notify participants?
- Consider jurisdiction - 12 states require all-party consent
- Audit regularly - What AI tools have calendar access?
For Individuals
- Read the privacy policy - Especially the AI training section
- Check auto-join settings - Disable if possible
- Announce recordings - Even if the tool doesn't
- Review data retention - Delete recordings you don't need
- Consider local options - For sensitive conversations
For Meetings with External Parties
- Always disclose AI recording - Legal and ethical requirement
- Offer to disable for sensitive topics - Build trust
- Don't rely on bot detection - Some tools don't make bots obvious
- Get explicit consent - Verbal isn't enough in some states
Frequently Asked Questions
Q: Are AI meeting assistants safe to use?
It depends on the tool. Cloud-based AI meeting assistants like Otter.ai, Fireflies, and Read AI have faced lawsuits and institutional bans over privacy concerns. Local AI tools like Meetily that process data on your device are architecturally safer because your data never leaves your control.
Q: Is Otter AI safe?
Otter.ai is currently facing a class action lawsuit (Brewer v. Otter.ai, August 2025) alleging recording without consent and using data to train AI without permission. Otter has SOC 2 certification and achieved HIPAA compliance in July 2025, but the consent model-where the bot auto-joins and seeks host consent only-remains legally contested in all-party consent states.
Q: Is Fireflies AI safe?
Fireflies.ai faces a December 2025 lawsuit alleging illegal biometric data collection under Illinois BIPA. Users have reported the bot continuing to join meetings even after account deletion. The tool processes all data on cloud servers, creating exposure if Fireflies is breached.
Q: Why was Read AI banned by universities?
Chapman University banned Read AI in August 2025 citing security, privacy, and institutional data risks. The tool had concerning behaviors like auto-joining meetings and following users across platforms. Universities must protect student data under FERPA and institutional policies that cloud AI tools often violate.
Q: What is the safest AI meeting assistant?
The safest AI meeting assistants use local processing, meaning data never leaves your device. Meetily is the leading option-it's open source (MIT license), bot-free, works offline, and keeps all data on your machine. For cloud tools, platform-native options (Microsoft Copilot, Zoom AI) avoid third-party data exposure but still use cloud processing.
Q: Do AI meeting tools violate wiretapping laws?
Potentially. 12 US states require all-party consent before recording: California, Florida, Illinois, Maryland, Massachusetts, Michigan, Montana, New Hampshire, Pennsylvania, Washington, Delaware, and Connecticut. Cloud meeting tools that auto-join and seek only host consent may violate these laws-which is exactly what the Otter.ai lawsuit alleges.
Q: Can my employer record meetings with AI without telling me?
It depends on state law and company policy. In one-party consent states, employers can record if they participate. In two-party consent states, all participants must be informed. Either way, most workplace policies require disclosure. AI meeting tools should announce their presence-but not all do clearly.
Q: How do I protect my privacy in AI-recorded meetings?
Ask if the meeting is being recorded and by what tool. Request that recording be turned off for sensitive topics. In sensitive industries, advocate for local AI tools that don't upload data to third parties. Review your organization's approved tools list-shadow AI is a major breach risk.
Key Takeaways
- Cloud meeting AI faces serious legal challenges: Otter.ai and Fireflies.ai both face 2025 lawsuits over consent and data practices
- Chapman University banned Read AI over privacy concerns-more institutional bans are likely
- 12 US states require all-party consent for recording, creating legal risk for auto-join meeting bots
- 40% of organizations have experienced AI-related privacy incidents; 97% lacked adequate controls
- Local AI processing (like Meetily) eliminates third-party data exposure by keeping everything on your device
- For HIPAA, GDPR, and legal compliance, local processing is architecturally safer than cloud promises
- Always disclose AI recording to meeting participants-it's both legally and ethically required
Conclusion
The AI meeting assistant market is at an inflection point. Rapid adoption (62% of organizations) has outpaced privacy protections, leading to the lawsuits and bans of 2025.
The pattern is clear:
- Cloud tools that auto-join meetings are legally vulnerable
- Data used for AI training without explicit consent violates privacy expectations
- Institutions are starting to ban tools that can't prove adequate data protection
If your meetings contain sensitive information-client discussions, health data, legal strategy, financial details-cloud meeting AI creates unnecessary risk.
Local processing isn't just a privacy preference; it's becoming a compliance requirement.
Tools like Meetily that keep data on your device solve these problems architecturally. No cloud breach can expose your meetings. No ambiguous consent model. No third-party AI training on your conversations.
The lawsuits will be decided in court. But you don't have to wait to protect your organization's most sensitive conversations.
For more on meeting privacy, see our guides on GDPR-compliant meeting transcription and HIPAA meeting requirements.
Privacy-First Meeting AI
Meetily is free, open source, and keeps your meeting data where it belongs-on your device.
About the Author
Meetily Security Team - We build privacy-first AI tools for professionals who need meeting intelligence without compromising data sovereignty. Meetily is open source, HIPAA/GDPR compliant by design, and trusted by privacy-conscious organizations worldwide.