In November 2025, Austrian developer Peter Steinberger pushed a weekend project to GitHub. By February 2026, it had 200,000 stars, 42,000 exposed instances on the public internet, a supply chain poisoned with 1,184 malicious packages, and a CVE that let attackers take over any deployment with a single click.
Then OpenAI hired him.
OpenClaw — formerly Clawdbot, then Moltbot — is the fastest-growing open-source project in GitHub history. It's an autonomous AI agent that manages calendars, books flights, sends emails, executes code, and automates tasks across third-party services. Two million developers visited the documentation in a single week. Meta, Google, and dozens of Fortune 500 companies found employees running it on corporate endpoints. Cisco called it "groundbreaking from a capability perspective" and "an absolute nightmare from a security perspective."
Both assessments are correct.
The ClawHavoc Attack
On January 25, 2026, security researcher Oren Yomtov at Koi Security audited all 2,857 skills listed on ClawHub, OpenClaw's official skill marketplace. He found 341 malicious entries. 335 belonged to a single coordinated campaign he named ClawHavoc.
By February 16, the confirmed count had climbed to over 1,184 malicious skills. Independent analysis by Bitdefender placed the figure closer to 900 across an expanded registry of 10,700 skills — roughly 20 percent of the ecosystem.
The attack was social engineering, not exploitation. Each malicious skill used professional documentation, credible names like "solana-wallet-tracker" and "youtube-summarize-pro," and hundreds of lines of README that looked legitimate. Buried in the setup instructions was a "Prerequisites" section that told developers to download a helper tool or run a terminal command to "fix dependencies."
On macOS, that command installed Atomic Stealer (AMOS), a commodity infostealer that exfiltrates browser credentials, SSH keys, Telegram sessions, crypto wallets, and keychains. On Windows, it dropped a keylogger and remote access trojan. The payloads were shipped inside otherwise functional code.
This is npm and PyPI supply chain poisoning, transplanted to a platform where the packages have system-level access by default.
The One-Click Kill Chain
On January 29, OpenClaw disclosed CVE-2026-25253, rated 8.8 on the CVSS scale. It's a one-click remote code execution vulnerability that works through cross-site WebSocket hijacking.
The kill chain: a developer visits a malicious URL. Their authentication token is exfiltrated in milliseconds. The attacker uses the stolen token to disable the agent's sandbox via its own configuration API. Then they escape the Docker container to the host machine. Full remote code execution. The gateway doesn't need to be internet-facing — any authenticated user who clicks a link is compromised.
On the same day, OpenClaw published two additional high-impact advisories for command injection vulnerabilities.
42,000 Open Front Doors
Censys tracked OpenClaw's public exposure growing from 1,000 to 21,000 instances in six days during late January. An independent study by security researcher Maor Dayan found 42,665 exposed instances, with 5,194 actively verified as vulnerable. 93.4 percent had authentication bypass conditions.
These aren't test deployments. Astrix Security found employees at multiple companies had deployed OpenClaw on corporate endpoints with configurations that could give attackers remote access to Salesforce, GitHub, and Slack. Research by Thebiggish found 22 percent of enterprise OpenClaw instances were unauthorized — shadow deployments by individual employees, over half with privileged access to internal systems.
Meta banned OpenClaw from its corporate networks. Cisco's analysis of a single popular skill, "What Would Elon Do?," found nine vulnerabilities — two critical, five high — including silent data exfiltration via curl commands.
Stealing the Agent's Soul
On February 13, cybersecurity researchers disclosed that an infostealer had successfully exfiltrated a victim's entire OpenClaw configuration — not just browser passwords, but the agent's identity: its API keys, memory files, personality configuration, and gateway tokens.
The researchers called it "a significant milestone in the evolution of infostealer behavior: the transition from stealing browser credentials to harvesting the 'souls' and identities of personal AI agents."
The variant was Vidar, a well-known credential stealer that had been updated to specifically target OpenClaw's .openclaw/ directory. With that data, an attacker doesn't just access the victim's accounts. They become the victim's agent — with every permission, every memory, and every tool connection intact.
The 90-Day Arc
Here is the timeline:
- November 2025: Peter Steinberger pushes a weekend project to GitHub.
- January 2026: 135,000 stars. 2 million documentation visitors in one week. ClawHub launches with 2,857 skills.
- Late January: 21,000 exposed instances. 341 malicious skills discovered. CVE-2026-25253 disclosed.
- February 2: CNBC profiles OpenClaw as "the AI agent generating buzz and fear globally."
- February 13: First infostealer confirmed stealing OpenClaw agent configurations.
- February 15: Sam Altman announces Peter Steinberger is joining OpenAI. OpenClaw moves to an independent foundation. Altman calls Steinberger "a genius with a lot of amazing ideas about the future."
- February 16: 1,184 malicious skills confirmed. Meta bans OpenClaw from corporate networks.
Ninety days from first commit to OpenAI acquihire. In between: the largest AI agent supply chain attack ever documented.
Why This Matters
OpenClaw isn't a failure. It's a preview. Every AI agent framework will face exactly this: a public marketplace for extensions, a trust model that assumes good faith, and an adoption curve that outpaces security review by months.
ClawHub is the new npm. Skills are the new packages. And the agents that install them don't just run JavaScript in a browser sandbox — they have access to your filesystem, your email, your credentials, and your company's internal APIs.
Steinberger built something developers wanted so badly they deployed 42,000 instances in weeks without reading the security advisories. OpenAI bought that trajectory. The foundation will maintain the open-source project. But the lesson isn't about one developer or one vulnerability.
The lesson is that the AI agent ecosystem is growing exactly the way the JavaScript ecosystem grew — fast, open, and optimistically insecure — except this time the packages can read your SSH keys and send your Slack messages.
The clock started in November. The first major supply chain attack arrived in January. That's a two-month window between "this is exciting" and "this is compromised."
For whatever AI agent framework comes next, that window will be shorter.
Top comments (0)