Om Shree
Anthropic Claude Code Source Code Leaked: What Happened, Why It Matters, and What Comes Next

The timing could not have been worse. On March 31, 2026, one of the most closely watched AI companies in the world accidentally published the source code of its most popular product, and the internet noticed within minutes.


Introduction

On March 31, 2026, Anthropic accidentally published the entire source code of Claude Code, its flagship AI coding agent, inside an npm package. Nobody hacked their servers. Nobody broke in. A missing .npmignore entry shipped a 59.8 MB source map containing 512,000 lines of unobfuscated TypeScript across roughly 1,900 files.

Within hours, the code was mirrored, analyzed, rewritten in Python and Rust, and studied by tens of thousands of developers worldwide. GitHub repositories went up faster than Anthropic's legal team could file takedown requests. And right in the middle of all of it, Anthropic was quietly talking to Goldman Sachs, JPMorgan, and Morgan Stanley about a $60 billion IPO.

This was not just an embarrassing technical mistake. It was a full-scale IP exposure event, arriving at possibly the worst moment in the company's five-year history.


What Is Claude Code?

Before getting into the leak itself, it's worth understanding what exactly got exposed.

Claude Code is Anthropic's terminal-based AI coding tool. It runs in your command line, reads your codebase, writes and edits files, runs commands, manages git workflows, and handles entire development tasks through natural language. It is, by most accounts, one of the most capable AI coding agents currently available, and one of Anthropic's fastest-growing products by enterprise adoption.

At least some of Claude Code's capabilities come not from the underlying large language model but from the software "harness" that sits around it: the layer that instructs the model how to use other software tools and provides the guardrails and instructions that govern its behavior.

That harness, the thing competitors would have paid a fortune to understand, was now sitting in a public zip file on Anthropic's own R2 storage bucket.


The Full Timeline: How It Happened, Hour by Hour

Late 2025

Anthropic acquired Bun in late 2025, and Claude Code is built on top of it. Bun generates source maps by default unless you explicitly turn them off.

March 11, 2026

A bug against the Bun JavaScript runtime (issue #28001) is filed. It reports that source maps are being served in production builds even when the documentation says they should not be. The bug sits open.

Nobody catches it. Nobody flags it to the Claude Code release team.

~00:21 UTC, March 31, 2026

Malicious axios versions (1.14.1 and 0.30.4) appear on npm with an embedded Remote Access Trojan. This is unrelated to Anthropic, but the timing is catastrophic.

~04:00 UTC, March 31, 2026

Claude Code v2.1.88 is pushed to npm. The 59.8 MB source map ships with it. The R2 bucket containing all source code is live and publicly accessible.

The release team does not realize anything is wrong.

04:23 UTC, March 31, 2026

Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcasts the discovery on X. The post includes a direct download link to a hosted archive, acting as a digital flare. Within hours, the ~512,000-line TypeScript codebase is mirrored across GitHub and analyzed by thousands of developers.

The post immediately stirs the AI community, attracting nearly 10 million views and 1,500 comments.

Next few hours

GitHub repositories start appearing. Some are direct mirrors. Others are rewrites. Some are just the code, labeled plainly as leaked. Anthropic scrambles to pull the package from npm.

April 1, 2026

A repository called claw-code, born as a mirror that later became a complete rewrite of the Claude Code app, first in Python and later in Rust, explodes on GitHub. It hits 50,000 stars roughly two hours after publication and reaches over 55,800 stars and 58,200 forks by April 1. The repository's own description calls it the fastest repo in history to surpass 50K stars.

Anthropic begins filing DMCA takedown notices. GitHub repositories containing leaked code start going down. Some repositories have already been disabled, suggesting Anthropic is actively trying to contain the damage.


Why Did This Happen? The Technical Explanation

The cause was mundane. When you publish a JavaScript/TypeScript package to npm, the build toolchain often generates source map files (.map files). These files exist so that when something crashes in production, the stack trace can point you to the actual line of code in the original file, not some unintelligible line 1, column 48293 of a minified bundle.

The problem: someone on the release team failed to add *.map to .npmignore or configure the files field in package.json to exclude debugging artifacts.
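The guardrail that was missing is a one-line fix in either file. A minimal sketch (package name and paths hypothetical, not taken from Anthropic's actual build): an explicit files allowlist in package.json means only the listed paths ship, so a dist/**/*.map artifact can never enter the tarball. The alternative is a *.map line in .npmignore, and npm pack --dry-run previews the tarball contents before anything is published.

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js",
    "README.md"
  ]
}
```

With an allowlist like this, forgetting to exclude a new artifact type fails safe: anything not listed stays out of the package by default.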

It gets worse. There is an entire system in Claude Code called "Undercover Mode" specifically designed to prevent Anthropic's internal information from leaking. They built a whole subsystem to stop their AI from accidentally revealing internal codenames in git commits, and then shipped the entire source, much of it likely written by Claude itself, in a .map file.

And worse still: the Bun bug that caused this had been known for 20 days. Nobody caught it. Anthropic's own acquired toolchain contributed to exposing Anthropic's own product.

This was not a sophisticated attack. This was a checklist item that got missed.


What the Code Actually Revealed

KAIROS: The Always-On Agent

Referenced over 150 times in the source, KAIROS is an unreleased autonomous daemon mode where Claude operates as a persistent, always-on background agent. It receives periodic prompts to decide whether to act proactively, maintains append-only daily log files, and subscribes to GitHub webhooks.

While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent that handles background sessions and employs a process called autoDream. In this mode, the agent performs memory consolidation while the user is idle. The autoDream logic merges disparate observations, removes logical contradictions, and converts vague insights into absolute facts.

Undercover Mode

The most controversial discovery, and the one that drew the most attention, was undercover.ts: roughly 90 lines that inject a system prompt instructing Claude never to mention that it is an AI and to strip all Co-Authored-By attribution when contributing to external repositories.

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

The reaction from the developer community was divided. Some thought this was a reasonable enterprise feature. Others found it unsettling, an AI silently contributing to public open-source projects without attribution.

Internal Model Roadmap

The source exposed internal codenames: Capybara maps to Claude 4.6, Fennec to Opus 4.6, and Numbat to an unreleased model. Internal benchmarks revealed that Capybara v8 has a 29-30% false claims rate, a regression from 16.7% in v4.

That last number is worth sitting with. A nearly 30% false claims rate from a model Anthropic is actively shipping to enterprise customers.

The Architecture

Beyond the controversial features, developers were genuinely impressed by the engineering. Claude Code uses a modular system prompt with cache-aware boundaries, approximately 40 tools in a plugin architecture, a 46,000-line query engine, and React + Ink terminal rendering using game-engine techniques. Multi-agent orchestration fits in a prompt rather than a framework, which one commenter noted makes LangChain and LangGraph look like solutions in search of a problem.

A Waste Problem Nobody Knew About

A bug-fix comment revealed 250,000 wasted API calls per day from autocompact failures. The fix was three lines, centered on a single constant: MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3. After three consecutive compaction failures, the client simply stops retrying.
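The pattern behind that fix is easy to sketch. Here is a minimal Python illustration (the real code is TypeScript, and everything below except the constant's name is an assumption): cap consecutive failures, reset the counter on success, and stop issuing calls once the cap is hit.

```python
MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3  # constant named in the leaked fix


def try_autocompact(compact, consecutive_failures):
    """Attempt one context compaction.

    Returns (succeeded, updated_failure_count). Once the failure cap
    is reached, the function returns without calling compact() at all,
    so a persistently broken compaction can no longer burn API calls.
    """
    if consecutive_failures >= MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES:
        return False, consecutive_failures  # give up: stop retrying
    try:
        compact()
        return True, 0  # success resets the counter
    except RuntimeError:
        return False, consecutive_failures + 1
```

With this cap in place, a compaction that fails every time costs exactly three attempts per session instead of an unbounded retry loop.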

And perhaps the most mocked discovery: the codebase included a frustration detection regex matching swear words, widely mocked as the world's most expensive company using regex for sentiment analysis.
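The mocked approach is trivially small, which is presumably why it shipped. A hypothetical reconstruction of the idea in Python (the actual word list and pattern in the leak differ; this is only the shape of the technique):

```python
import re

# Crude keyword matching as a "frustration" signal. Word list is
# illustrative, not the one from the leaked source.
FRUSTRATION_RE = re.compile(
    r"\b(wtf|ugh|damn|stupid|broken again)\b",
    re.IGNORECASE,
)


def looks_frustrated(message: str) -> bool:
    """Return True if the user's message matches a frustration keyword."""
    return FRUSTRATION_RE.search(message) is not None
```

It is cheap and fast, which is the charitable reading; the uncharitable reading is the one the community ran with.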


This Was Not the First Time

This is the part that makes Anthropic's situation look considerably worse.

In February 2025, an early version of Claude Code accidentally exposed its original code in a similar breach. The exposure showed how the tool worked behind the scenes as well as how it connected to Anthropic's internal systems. Anthropic later removed the software and took the public code down.

So this is not a first offense. A nearly identical incident happened thirteen months earlier, and the lesson apparently did not make it into the release process for v2.1.88.

The leak also came just days after Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model that presents unprecedented cybersecurity risks. The model is known internally as both "Mythos" and "Capybara."

Three separate leaks, each independently embarrassing. Together, they start to look like a systemic issue with how Anthropic handles internal information.


The IPO Problem

Here is where timing becomes genuinely painful.

Anthropic is discussing an initial public offering as soon as the fourth quarter of 2026, with bankers expecting the AI company to raise more than $60 billion. That figure could make it the second-biggest IPO deal in history after SpaceX.

Anthropic closed a $30 billion funding round at a $380 billion valuation in February 2026, while OpenAI closed a $120 billion funding round at an $850 billion valuation in March 2026.

Monthly visits to claude.ai surged from 16 million in January 2025 to 220 million in January 2026, a 13-fold increase in twelve months.

The company is estimated to be operating at around a $14 billion annualized revenue run rate in early 2026, with projections suggesting it could move closer to $18-20 billion as enterprise demand continues to grow.

The fundamentals are strong. The momentum is real. And then this happens.

For Anthropic, a company currently riding a meteoric rise with a reported $19 billion annualized revenue run rate as of March 2026, the leak is more than a security lapse; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the commercial velocity of the product.

Public markets have a long memory for things like this. When Anthropic eventually files its S-1, this incident will be in the risk factors section. Institutional investors will ask questions about internal controls. The SEC will scrutinize the company's operational governance alongside the already-complicated question of how Anthropic accounts for cloud computing credits as revenue.

Bank of America put Anthropic's potential cloud payments at up to $6.4 billion to hyperscale providers in 2026. Whether the SEC requires Anthropic to harmonize its accounting treatment with OpenAI ahead of listing remains a consequential open question.

The leak adds another complication to an already complicated story that prospective public shareholders will have to evaluate.


The Legal Situation: Can Anthropic Put the Genie Back?

Probably not.

While Anthropic's DMCA takedown requests target direct mirrors on major platforms, they are unlikely to reach decentralized code-sharing platforms. It also remains to be seen how the company's legal actions will fare against code rewrites.

The clean-room rewrite approach is the interesting wrinkle here. Gergely Orosz (The Pragmatic Engineer) observed that Anthropic faces a dilemma: a Python rewrite constitutes a new creative work potentially outside DMCA scope.

There is also a copyright question nobody has a clean answer to. Anthropic's own CEO has implied that significant portions of Claude Code were written by Claude. And the D.C. Circuit affirmed in March 2025 that purely AI-generated work does not qualify for copyright protection. If Anthropic's copyright claim over Claude-authored code is legally murky, the entire takedown strategy weakens.

And then there are torrents. As one analysis bluntly noted: 512,000 lines of Claude Code are permanently in the wild, regardless of what any court decides.


What This Means for Competitors

For every company building AI coding tools (Cursor, Cognition, GitHub Copilot, Sourcegraph, and dozens of others), the leak is an unexpected gift.

The leaked source is now the most detailed public documentation of how to build a production-grade AI agent harness that exists.

The specific architecture decisions are now readable by any engineer who wants to understand them: how Claude Code handles context window management, how its multi-agent orchestration works, how it structures tool definitions, and how it renders a terminal UI using React and Ink.

Competitors will study this. They already are.


Future Possibilities: What Happens Next

A few things seem likely.

Anthropic rebuilds its release process. After two nearly identical incidents thirteen months apart, the company almost certainly needs to overhaul how it packages and ships npm releases. Whether that means better toolchain auditing, a dedicated security review step for package releases, or moving entirely away from npm is unclear. But something has to change.

KAIROS ships eventually. The autonomous background agent mode described in the leak is too interesting and too developed to stay unreleased. The community has now seen it. The pressure to ship it, or to explain why it won't ship, will only grow.

The IPO narrative gets more complicated. Anthropic is not alone in its IPO ambitions. OpenAI is also targeting public markets before the end of 2026, with the Wall Street Journal reporting it hopes to list ahead of its rival. The race to go public before your main competitor is already stressful. Add three separate leaks in rapid succession, a complicated revenue recognition question, and a product that is both your strongest asset and now partially open-sourced by accident, and you have an S-1 that will require some very careful writing.

The clean-room rewrites survive. The Anthropic-inspired open-source tools that emerged from this leak, claw-code and its descendants, will likely continue to exist in some form. Whether they grow into real competitors depends on how much developer energy follows the initial GitHub star count. It often does not. But the architecture documentation is real, and that does not go away.


Anthropic's Official Response

When reached for comment, Anthropic confirmed that "some internal source code" had been leaked within a "Claude Code release." A spokesperson said: "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach."

That is accurate as far as it goes. No customer data was exposed. No model weights leaked. Nobody's API keys are compromised, unless you installed between 00:21 and 03:29 UTC and also pulled in the malicious axios version; that is a separate and more serious problem.

But calling it a "release packaging issue caused by human error" understates what happened. This was the second time the same category of mistake exposed Claude Code source code. The code contained unreleased product roadmap information, internal model benchmark data showing a regression in false claims rate, and a feature instructing the AI to hide its identity in public repositories. That is not a routine packaging slip.


What You Should Do If You Use Claude Code

If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan. You should immediately search your project lockfiles for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.
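One way to automate that lockfile check is a short script. This is a sketch, assuming an npm v2/v3 package-lock.json layout ("packages" keyed by install path); yarn and pnpm lockfiles would need their own parsing.

```python
import json

BAD_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}
BAD_DEPENDENCY = "plain-crypto-js"


def scan_lockfile(path):
    """Scan an npm v2/v3 package-lock.json for the compromised axios
    versions or the plain-crypto-js dependency. Returns a list of
    human-readable findings (empty means nothing matched)."""
    with open(path) as f:
        lock = json.load(f)
    findings = []
    for name, meta in lock.get("packages", {}).items():
        if name.endswith("node_modules/axios") and \
                meta.get("version") in BAD_AXIOS_VERSIONS:
            findings.append(f"axios {meta['version']} at {name}")
        if name.endswith(BAD_DEPENDENCY):
            findings.append(f"{BAD_DEPENDENCY} at {name}")
    return findings
```

Run it against every package-lock.json on the machine; any finding means you should follow the full remediation above, not just remove the package.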

Anthropic has designated the Native Installer (curl -fsSL https://claude.ai/install.sh | bash) as the recommended method going forward, because it uses a standalone binary that does not rely on the volatile npm dependency chain.


Final Thoughts

There is something almost poetic about the fact that Anthropic built an "Undercover Mode" to prevent internal information from leaking into git commits, and then accidentally published the entire source code of their most popular product through a missing line in a config file.

The company is not in existential trouble. The fundamentals are too strong, the revenue too real, the enterprise relationships too established. The IPO will likely still happen. Investors will still line up.

But the pattern of leaks, three in rapid succession, two of them involving the same product, suggests that Anthropic's operational processes are not keeping pace with the company's technical ambitions. Building fast is one thing. Shipping responsibly is another. Right now, Anthropic is better at one than the other, and the public-market investors it is courting will have opinions about that.

The genie is out of the bottle. 512,000 lines of it. And it is not going back in.


Follow us for daily coverage of what's actually happening in MCP and agentic AI.
