Alright guys, huge deal breaker
Someone left the door open at Anthropic. And the AI world just walked in.
Three days ago, security researcher Chaofan Shou (@ Fried_Rice) noticed something unusual in the npm registry.
Tucked inside version 2.1.88 of @anthropic-ai/claude-code was a 57MB file called cli.js.map, a source map that acted as a complete decoder ring back to Anthropic's original TypeScript source code.
No sophisticated hack. No zero-day exploit.
Just a single misconfigured build script.
What developers found inside 1,900 files:
Self-healing memory: A three-layer architecture built to fight context decay in long AI sessions
Unreleased model codenames: "Fennec" (Opus 4.7), "Sonnet 4.8," and the mysterious "Capybara" (Claude Mythos)
Built-in agent swarms: Claude can spawn parallel sub-agents autonomously. This isn't a feature. It's infrastructure.
Ghost contributing: Logic for contributing to open-source repos without explicit AI attribution
Anthropic's response: Human error in release packaging. No model weights compromised. No customer data exposed.
The brain is still safe. But the skeleton is now public.
Here's the lesson no one wants to say out loud:
You can spend years and hundreds of millions building a proprietary AI system. And one forgotten line in a .npmignore can make it readable to anyone with a terminal.
Security isn't just about your models. It's about your build pipeline, your CI config, your npm publish script.
The smallest door is still a door.
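One way to put a check on that door, as an added example (not code from Anthropic or from the original post): a pre-publish gate that inspects the file list a package would ship and flags source maps and references to them. It assumes the list comes from `npm pack --dry-run --json`; the package name, paths, sizes and contents below are constructed sample data.

```javascript
// Added example: fail a publish when the tarball would ship source maps.
// SAMPLE_PACK is constructed data shaped like `npm pack --dry-run --json`
// output; the package name and file list are hypothetical.
const SAMPLE_PACK = [{
  name: 'example-cli',
  version: '0.0.0',
  files: [
    { path: 'package.json', size: 412 },
    { path: 'cli.js', size: 9800000 },
    { path: 'cli.js.map', size: 57000000 },
    { path: 'vendor/helper.mjs', size: 2100 },
  ],
}];

// Constructed file contents keyed by path; a CI job would read them from disk.
const SAMPLE_CONTENTS = {
  'cli.js': 'console.log("hi");\n//# sourceMappingURL=cli.js.map\n',
  'cli.js.map': JSON.stringify({
    version: 3, sources: ['../src/main.ts'],
    sourcesContent: ['export const x = 1;'], mappings: '',
  }),
  'vendor/helper.mjs': 'export default 1;\n',
};

const MAP_COMMENT = /\/[\/*]#\s*sourceMappingURL=(\S+)/;

// Returns one finding per file that is, embeds, or points to a source map.
function auditPack(pack, contents) {
  const findings = [];
  for (const pkg of pack) {
    for (const { path, size } of pkg.files) {
      const body = contents[path];
      if (path.endsWith('.map')) {
        let embedded = false;
        try {
          const map = JSON.parse(body || '{}');
          embedded = Array.isArray(map.sourcesContent) && map.sourcesContent.some(Boolean);
        } catch { /* unparseable map: still report the file itself */ }
        findings.push({ path, size, rule: embedded ? 'map-with-sources' : 'map-file' });
      } else if (/\.[cm]?js$/.test(path) && body) {
        const m = MAP_COMMENT.exec(body);
        if (m) findings.push({ path, size, rule: m[1].startsWith('data:') ? 'inline-map' : 'map-reference' });
      }
    }
  }
  return findings;
}

console.log(auditPack(SAMPLE_PACK, SAMPLE_CONTENTS));
```

In a release job, a non-empty result would block `npm publish`; an allowlist-style `files` field in package.json is the complementary control, since it ships only what is named rather than everything not ignored.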
Original discovery: Twitter Post - Chaofan Shou
Link to the open-source GitHub repo of Claude Code I just published: Yasas Banu - Claude Code Repo