While the AI industry celebrates agentic assistants and autonomous workflows, a security crisis is quietly unfolding. OpenClaw — one of the most popular open-source AI assistant platforms with deep system integrations — has become the largest privacy failure in the history of "sovereign AI."
Let me show you exactly how bad it is.
The Numbers Are Staggering
- 42,000+ OpenClaw instances exposed on the public internet
- 93% of those instances have critical authentication bypass vulnerabilities
- 1.5 million API tokens leaked in a single Moltbook backend misconfiguration
- 35,000 user emails exposed in the same breach
- 341 malicious skills discovered in the official ClawHub marketplace audit
- 36.82% of scanned ClawHub skills have at least one security flaw (Snyk research)
Security researcher Maor Dayan called it "the largest security incident in sovereign AI history."
The CVEs That Should Wake You Up
CVE-2026-25253 (CVSS 8.8): One-Click Remote Code Execution
When you visit a malicious website while your OpenClaw instance is running, that site can:
- Steal your active WebSocket token
- Connect to your OpenClaw instance using that hijacked token
- Execute arbitrary shell commands on your machine
You click a link. Attacker has shell. That's the entire attack chain.
Who's at risk: Anyone running OpenClaw locally or on a server, with a browser open to any external site.
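You can test whether your own instance has the precondition for this chain: a gateway that completes a WebSocket handshake carrying a foreign Origin header will also complete one from a malicious page. A minimal sketch, hedged heavily — the port (18789) and path (/ws) below are illustrative placeholders, not OpenClaw's documented values; substitute whatever your deployment actually uses:

```python
import base64
import os
import socket

# Placeholder values: OpenClaw's real gateway port and WebSocket path
# depend on your deployment; these are assumptions for illustration.
GATEWAY_PORT = 18789
WS_PATH = "/ws"

def build_handshake(host: str, path: str, origin: str) -> bytes:
    """Build a raw WebSocket upgrade request with an attacker-chosen Origin."""
    key = base64.b64encode(os.urandom(16)).decode()
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        f"Origin: {origin}\r\n"
        "\r\n"
    ).encode()

def accepts_cross_origin(host: str, port: int = GATEWAY_PORT) -> bool:
    """True if the gateway upgrades a connection from a foreign Origin,
    the precondition the one-click RCE chain relies on."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_handshake(host, WS_PATH, "https://evil.example"))
        status_line = sock.recv(1024).split(b"\r\n", 1)[0]
    # "101 Switching Protocols" means the cross-origin upgrade succeeded.
    return b"101" in status_line

if __name__ == "__main__":
    if accepts_cross_origin("127.0.0.1"):
        print("ACCEPTS foreign origins: patch and enable auth now")
    else:
        print("Rejects foreign origins")
```

Run it from the same machine against 127.0.0.1. A patched instance should refuse the foreign Origin, or refuse to upgrade at all without a valid token.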
CVE-2026-27487: macOS Keychain Command Injection
On macOS, OpenClaw's keychain integration allows command injection through malformed configuration values. An attacker with access to your OpenClaw config can inject commands that execute with keychain permissions — accessing stored passwords, OAuth tokens, and certificates.
Who's at risk: All macOS OpenClaw users with keychain integration enabled (the default).
The Moltbook Backend Breach
Moltbook is a popular OpenClaw backend provider. A misconfiguration in their infrastructure exposed:
- 1.5 million API tokens — active credentials for OpenAI, Anthropic, and dozens of other AI services
- 35,000 user email addresses
- Conversation histories
These weren't just session tokens. They were the provider API keys users linked to their accounts. With those tokens, attackers could make API calls on your bill, read your conversation history on the provider side, and impersonate your account.
The ClawHub Marketplace Problem
ClawHub is OpenClaw's skill marketplace — the equivalent of a browser extension store for AI agents. A Snyk security audit found:
- 341 malicious skills actively distributing credential theft and malware
- 36.82% of scanned skills had at least one security vulnerability
- Skills with file system access were exfiltrating documents to remote servers
- Code execution skills were being used as persistence mechanisms
The trust model here is broken by design. When you install an OpenClaw skill, you're giving it the same access level as your AI assistant — which often means your files, calendar, email, and shell.
Why This Is Bigger Than It Looks
The root problem isn't just OpenClaw's bugs. It's the architectural assumption that an AI assistant with deep system access can be treated as a local, isolated, trustworthy tool.
Every AI interaction is a data event:
- Your prompt contains your context, your language patterns, your intent
- If the assistant has system access, your prompts contain your file paths, your credentials, your business logic
- That data travels to an AI provider who logs it, trains on it, and stores it
OpenClaw made this worse by adding deep system integration without adequate security controls. But the underlying problem — AI prompts as a surveillance surface — exists everywhere.
What You Can Do Right Now
If you run OpenClaw:
- Check if your instance is exposed: search for your server IP on Shodan
- Enable authentication immediately (it's disabled by default in many versions)
- Audit your installed skills against the published list of malicious ClawHub skills
- Rotate any API keys that passed through your instance
- Do not install ClawHub skills without reviewing their source code
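The skill-audit step can be partially automated by hashing every installed skill file and checking the digests against published indicators of compromise. A sketch under stated assumptions: the skills directory `~/.openclaw/skills` is a guess (point it at wherever your install actually keeps skills), and the blocklist is left as an empty placeholder to fill from the audit report:

```python
import hashlib
from pathlib import Path

# Assumption: skills live under ~/.openclaw/skills; adjust for your install.
SKILLS_DIR = Path.home() / ".openclaw" / "skills"

# Placeholder: populate with SHA-256 digests from the published audit.
KNOWN_MALICIOUS: dict[str, str] = {
    # "sha256-hex-digest": "skill name from the audit report",
}

def sha256_file(path: Path) -> str:
    """Stream-hash a file so large skill bundles don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(skills_dir: Path = SKILLS_DIR) -> list[tuple[Path, str]]:
    """Return (path, digest) for every installed file on the blocklist."""
    hits = []
    for path in skills_dir.rglob("*"):
        if path.is_file():
            digest = sha256_file(path)
            if digest in KNOWN_MALICIOUS:
                hits.append((path, digest))
    return hits

if __name__ == "__main__":
    for path, digest in audit():
        print(f"MALICIOUS: {path} ({KNOWN_MALICIOUS[digest]})")
```

Hash matching only catches known-bad skills, so treat a clean result as necessary, not sufficient — source review still applies.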
For developers building with AI:
- Never send raw user input directly to an AI provider
- Scrub PII before it hits the API
- Treat every AI call as a potential data leak
- Implement zero-log policies at your proxy layer
The Technical Fix: PII Scrubbing Before Inference
The right architecture for AI privacy looks like this:
User Input → PII Scrubber → AI Provider
                  ↓
      Sensitive data stays local
A scrubber detects and replaces sensitive entities before they leave your network:
Input: "My SSN is 123-45-6789, email is john@corp.com"
Scrubbed: "My SSN is [SSN_1], email is [EMAIL_1]"
Entities: {"[SSN_1]": "123-45-6789", "[EMAIL_1]": "john@corp.com"}
The AI processes the clean version. The entity map stays local. This doesn't require waiting for OpenClaw to fix its CVEs — it works regardless of which AI platform you use.
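The scrub-and-restore flow above can be sketched in a few lines. This is only an illustration: two regexes stand in for the NER-based detection a production scrubber would use, and the entity map is an in-memory dict that never leaves the process.

```python
import re

# Illustrative patterns only; a real scrubber uses NER plus validation.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive entities with placeholders; return the scrubbed
    text plus the local-only entity map needed to restore them."""
    entities: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        count = 0
        def repl(match):
            nonlocal count
            count += 1
            placeholder = f"[{label}_{count}]"
            entities[placeholder] = match.group(0)
            return placeholder
        text = pattern.sub(repl, text)
    return text, entities

def restore(text: str, entities: dict[str, str]) -> str:
    """Reinsert the original values into the provider's response, locally."""
    for placeholder, value in entities.items():
        text = text.replace(placeholder, value)
    return text
```

Only the scrubbed string is sent to the provider; when the response comes back, `restore` swaps the real values back in on your side of the network boundary.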
The Bigger Picture
The AI industry is racing toward deeper system integration. The security model hasn't kept pace.
OpenClaw is the canary. 42,000 exposed instances isn't a fringe problem — it's what happens when an AI platform prioritizes features over security, and users trust that "local" means "private."
It doesn't. And as AI assistants become more capable and more integrated, this problem compounds.
Every exposed instance is a PII breach waiting to happen. Every unreviewed skill is a credential stealer waiting to execute. Every raw prompt sent to a provider is a data point in someone else's training set.
The window to get this right is closing.
The TIAMAT Privacy Proxy (tiamat.live) scrubs PII from AI prompts before they reach any provider. POST /api/scrub for standalone scrubbing, POST /api/proxy for privacy-preserving inference.