Anthropic Leaked Their Own Source Code (And We Learned More About AI Engineering Than We Expected)
On March 31, 2026, the entire source code of Claude Code — Anthropic's official CLI coding tool — ended up publicly available on npm via a forgotten source map file. As engineers who build AI-powered systems for clients every day at Gerus-lab, we had a strong reaction: part horror, part fascination, and honestly — a lot of respect for what was hidden inside.
Let's unpack what happened, why it matters for anyone building production AI tools, and what we at Gerus-lab took away from it.
The Leak: A Painfully Simple Mistake
Here's the irony: this wasn't a sophisticated attack. No zero-day exploit. No nation-state hacker. Just... a missing line in .npmignore.
When you publish a JavaScript package to npm, your build toolchain (in Anthropic's case, Bun) generates .map files — source maps that bridge minified production code back to original source files. These maps contain the entire original source code embedded as raw strings in JSON.
One line — `*.map` in `.npmignore` — would have prevented all of this.
Bun generates source maps by default unless you explicitly disable them. Anthropic's build pipeline didn't. And so main.tsx — 785 kilobytes of internal engineering — became public knowledge.
```json
{
  "version": 3,
  "sources": ["../src/main.tsx", "../src/tools/BashTool.ts"],
  "sourcesContent": ["// EVERY original source file, verbatim"],
  "mappings": "AAAA,SAAS,..."
}
```
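It takes only a few lines to see how much a shipped `.map` file exposes. This is a generic sketch (our code, not Anthropic's tooling); `listExposedSources` and the toy map below are illustrative:

```typescript
// A source map's `sourcesContent` field embeds every original file verbatim.
// listExposedSources() reports what a published .map file would reveal.

interface SourceMap {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
}

function listExposedSources(map: SourceMap): string[] {
  return map.sources.map((path, i) => {
    const content = map.sourcesContent?.[i] ?? "";
    return `${path}: ${content.length} chars of original source`;
  });
}

// A toy map standing in for the ~785 KB real one:
const leaked: SourceMap = {
  version: 3,
  sources: ["../src/main.tsx"],
  sourcesContent: ["// EVERY original source file, verbatim"],
};

console.log(listExposedSources(leaked).join("\n"));
```

Run that against any package's published `.map` files and you get the original source tree back, byte for byte.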
The kicker? Inside the leaked code there's a feature called Undercover Mode — a system specifically designed to prevent Anthropic's internal codenames from leaking into public git commits. They built a leak-prevention system... and then shipped the entire codebase in a .map file. Reportedly built with Claude's own help.
What Was Actually Inside
Once researchers dug into the source, what they found was genuinely impressive engineering:
1. A Terminal Tamagotchi (No, Really)
Claude Code has a full companion system called "Buddy" — a tamagotchi-style creature that lives in your terminal. It uses a deterministic gacha system (Mulberry32 PRNG seeded with a hash of your userId), 18 different species with rarity tiers (Common → Legendary), and a soul description that Claude writes when the buddy first "hatches."
This was locked behind a compile-time BUDDY feature flag. Pure joy hidden from the public.
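The deterministic gacha is easy to reconstruct in spirit. Below is our own sketch, not the leaked code: Mulberry32 is a well-known tiny 32-bit PRNG, and the FNV-1a hash, the rarity weighting, and `rollBuddy` are our illustrative choices. The point is that seeding the PRNG from `userId` means every user always hatches the same buddy:

```typescript
// Mulberry32: a tiny 32-bit PRNG. Same seed, same sequence, forever.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// FNV-1a hash of the user id -> 32-bit seed (our choice of hash function).
function fnv1a(str: string): number {
  let h = 0x811c9dc5;
  for (const ch of str) {
    h ^= ch.codePointAt(0)!;
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

const RARITIES = ["Common", "Uncommon", "Rare", "Epic", "Legendary"] as const;

function rollBuddy(userId: string): { species: number; rarity: string } {
  const rand = mulberry32(fnv1a(userId));
  return {
    species: Math.floor(rand() * 18), // 18 species, as in the leak
    rarity: RARITIES[Math.floor(rand() ** 2 * RARITIES.length)], // squaring skews rolls toward Common
  };
}

// Same user id always hatches the same buddy:
console.log(rollBuddy("user-42"));
```

Deterministic randomness is the right design here: no server round-trip, no stored roll, and the buddy survives reinstalls.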
2. A "Dream" Memory Consolidation Engine
Claude Code has a background engine called "dream" that consolidates memories during idle time — similar to how human sleep consolidates experiences into long-term memory. This is how it maintains context across long sessions without blowing up the context window.
At Gerus-lab, we've built similar memory architectures for our AI agents in Web3 and DeFi applications. The difference: we documented it. Anthropic hid it behind a feature flag.
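The leak doesn't hand us Anthropic's implementation, but the pattern itself is simple to sketch. Assume raw session events pile up in a short-term buffer, and an idle-time pass compresses the older ones into a compact long-term summary (here `summarize` is a stand-in for an LLM call):

```typescript
// Idle-time memory consolidation, sketched from the pattern (not Anthropic's code).
interface MemoryStore {
  recent: string[];   // raw, verbatim events from the current session
  longTerm: string[]; // consolidated summaries ("dreams")
}

// Stand-in for an LLM summarization call; a real system would prompt a model here.
function summarize(events: string[]): string {
  return `summary of ${events.length} events: ${events[0]} ... ${events[events.length - 1]}`;
}

function consolidate(store: MemoryStore, keepRecent = 2): MemoryStore {
  if (store.recent.length <= keepRecent) return store;
  const toCompress = store.recent.slice(0, -keepRecent);
  return {
    recent: store.recent.slice(-keepRecent),
    longTerm: [...store.longTerm, summarize(toCompress)],
  };
}

const before: MemoryStore = {
  recent: ["opened repo", "ran tests", "fixed BashTool bug", "user asked for docs"],
  longTerm: [],
};
const after = consolidate(before);
console.log(after.recent.length, after.longTerm);
```

The token math is what matters: the context window only ever carries `keepRecent` raw events plus the summaries, so cost stays roughly flat as the session grows.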
3. Multi-Agent Orchestration with a Coordinator
30-minute planning sessions. A coordinator agent spinning up a swarm of specialized subagents. Opus 4.6 as the orchestration brain. This isn't "AI helps you code" — this is AI as a distributed system with its own internal architecture.
This pattern — orchestrator + specialized workers — is exactly how we structure complex AI pipelines for our clients building GameFi platforms and SaaS automation tools.
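Stripped to its skeleton, the pattern looks like this. The specialist names and `coordinate` function are ours, invented for illustration; each specialist here is a stub where a real system would wrap an LLM call with its own system prompt:

```typescript
// Coordinator + specialist pattern, minimal sketch (names are ours, not from the leak).
type Specialist = (task: string) => Promise<string>;

// Stand-ins for real subagents, each of which would own a focused prompt and toolset.
const specialists: Record<string, Specialist> = {
  refactor: async (t) => `refactor plan for "${t}"`,
  tests: async (t) => `test suite outline for "${t}"`,
  review: async (t) => `architecture notes for "${t}"`,
};

async function coordinate(task: string): Promise<string[]> {
  // Fan out in parallel, then merge. The coordinator owns the plan, not the work.
  return Promise.all(Object.values(specialists).map((run) => run(task)));
}

coordinate("migrate payment service to TypeScript").then((results) =>
  results.forEach((r) => console.log(r)),
);
```

The key property: each specialist gets a small, focused context instead of one agent dragging the whole problem through a single window.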
4. Undercover Mode for Open-Source Contributions
An Anthropic secret: Claude Code has a mode where it hides the fact that AI is making commits to open-source repos. It strips Anthropic-specific metadata from git signatures to make contributions look... human.
We have opinions about this. But architecturally, it's fascinating.
What This Means for Production AI Engineering
Here's where we get practical. At Gerus-lab, we build production AI systems for clients — from Telegram bots with AI backends to on-chain AI agents on TON and Solana. The Claude Code leak gave us several concrete takeaways:
1. Source Maps Are a Security Liability
Add this to your checklist now:
```
# .npmignore
*.map
*.map.js
src/
*.d.ts.map
```
And in your build script (Bun's `Bun.build` API accepts `"none"`, `"inline"`, `"linked"`, or `"external"` for `sourcemap`):

```javascript
// build.ts — run with `bun build.ts`
await Bun.build({
  entrypoints: ["./src/main.tsx"],
  outdir: "./dist",
  sourcemap: process.env.NODE_ENV === "development" ? "inline" : "none",
});
```
Never ship source maps in production npm packages. That holds whether you're Anthropic or a two-person startup.
2. Feature Flags for Experimental AI Features Are Essential
Buddy, dream consolidation, Undercover Mode — all of these were behind compile-time feature flags. This is the right call. At Gerus-lab, we use runtime feature flags (LaunchDarkly-style) for AI features because they're inherently experimental and need to be rolled back fast.
AI features fail in weird ways. Flags save lives (and clients).
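A runtime flag layer doesn't need a vendor to start with. Here is a self-contained sketch (our code, LaunchDarkly-style in spirit only; the flag names are hypothetical) showing the two properties that matter for AI features: an instant kill switch and deterministic percentage rollout:

```typescript
// Minimal runtime feature flags with deterministic percentage rollout.
interface FlagStore {
  [flag: string]: { enabled: boolean; rolloutPercent: number };
}

const flags: FlagStore = {
  "ai-memory-consolidation": { enabled: true, rolloutPercent: 10 },
  "ai-multi-agent-mode": { enabled: false, rolloutPercent: 0 }, // kill-switched
};

// Deterministic bucketing: the same user always lands in the same rollout slice.
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.codePointAt(0)!) >>> 0;
  return h % 100;
}

function isEnabled(flag: string, userId: string): boolean {
  const f = flags[flag];
  if (!f || !f.enabled) return false; // unknown or killed flags are always off
  return bucket(userId) < f.rolloutPercent;
}

console.log(isEnabled("ai-multi-agent-mode", "user-1")); // always false: kill-switched
```

In production the `flags` object would be fetched from a flag service and refreshed live, which is exactly what makes a misbehaving AI feature a one-click rollback instead of a redeploy.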
3. Memory Architecture Matters More Than Model Choice
The "dream" system reveals something important: Anthropic's bet on Claude Code's usefulness isn't just about model intelligence — it's about memory architecture. How you store, retrieve, and consolidate context over long sessions determines agent quality far more than which model you use.
We've seen this firsthand building AI-powered CRMs and automation tools. A GPT-4o with a well-designed memory layer beats Claude 3.5 with naive context management every time.
4. Multi-Agent Is the Real Deal
The swarm architecture with a coordinator isn't a party trick. For complex tasks — refactoring a large codebase, generating comprehensive test suites, architectural review — a coordinator + specialist pattern massively outperforms a single large context window.
This is the direction we're moving all our AI automation projects at Gerus-lab. Not bigger models — smarter orchestration.
The Bigger Picture: Radical Transparency by Accident
Anthropic obviously didn't intend this release. But here's a controversial take: maybe it was good for the ecosystem.
We learned more about production-grade AI tool architecture from this leak than from 6 months of AI conference talks. The Tamagotchi. The dream engine. The undercover mode. The orchestration patterns.
This is what AI engineering actually looks like at the frontier — messy, creative, experimental, and built with duct tape and feature flags.
At Gerus-lab, we ship systems like this for clients who can't afford to wait for the industry to settle. TON blockchain AI agents. GameFi automation. SaaS CRMs with embedded LLMs. The patterns we saw in Claude Code's internals confirm we're on the right track.
TL;DR
- Anthropic accidentally shipped Claude Code's entire source code via npm source maps
- Inside: a terminal Tamagotchi, a dream-based memory consolidation engine, multi-agent swarm orchestration, and Undercover Mode
- The fix: `*.map` in `.npmignore`, and source maps disabled in production builds
- The lesson: production AI systems are complex, creative, and architecturally fascinating — and the gap between "AI demo" and "AI product" is all in the engineering details
We at Gerus-lab build production AI systems for startups and enterprises — Web3, GameFi, SaaS, and automation. If you're building something serious with AI and want engineers who think about these details, let's talk.