Anthropic just had one of those days where someone in the release pipeline probably wanted to disappear into the floor. A routine npm publish of Claude Code version 2.1.88 went out with a 59.8 megabyte source map file still attached. If you don't know what a source map does: it maps minified production code back to the original, readable source. So yeah, the entire Claude Code codebase — over 512,000 lines of TypeScript across about 1,900 files — was just sitting there for anyone to grab. Security researcher Chaofan Shou spotted it first and posted about it on X, where it racked up 28.8 million views before Anthropic could even draft a response.
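For anyone who hasn't cracked one open: a source map is just a JSON file shipped alongside the minified bundle, and its `sources` and `sourcesContent` fields can carry the full original file paths and code. A minimal sketch (the map contents and file names here are invented, not from the leak):

```typescript
// A source map is plain JSON. The minified bundle points at it with a
// trailing comment like: //# sourceMappingURL=cli.js.map
const sampleMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  // Original file paths -- hypothetical names, for illustration only.
  sources: ["src/agent/orchestrator.ts", "src/memory/store.ts"],
  // The original source text itself, verbatim.
  sourcesContent: [
    "export function tick() { /* ... */ }",
    "export const store = {};",
  ],
  mappings: "AAAA", // VLQ-encoded position data
});

// "Recovering" the original tree is just reading those two fields back out.
const map = JSON.parse(sampleMap);
for (let i = 0; i < map.sources.length; i++) {
  console.log(`${map.sources[i]}: ${map.sourcesContent[i].length} bytes`);
}
```

With `sourcesContent` populated, as it apparently was here, the map alone is enough to reconstruct every file.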
The source map pointed to a zip archive on Anthropic's Cloudflare R2 storage bucket. People downloaded it. People forked it on GitHub — over 41,500 forks before Anthropic started firing off DMCA takedowns. But this is the internet, and thousands of copies are still floating around on mirrors and forks. The original uploader actually swapped his repo out for a Python port of Claude Code after he got nervous about legal liability, but the damage (or gift, depending on how you look at it) was already done.
Here's the thing though: the leak itself isn't even the most interesting part. It's what people found inside.
Buried in the code are 44 feature flags — fully built features sitting behind compile flags that get set to false when Anthropic ships the external build. These aren't prototypes or half-baked experiments. This is production-ready code that just hasn't been turned on yet. Background agents that run 24/7 with GitHub webhook integration and push notifications. A multi-agent orchestration system where one Claude manages multiple worker Claudes, each with its own restricted toolset. Cron scheduling for agents with create, delete, and list operations. Full voice command mode with its own CLI entrypoint. Real browser control through Playwright — not the basic web fetch stuff, but actual browser automation. And agents that can literally sleep and self-resume without any user input.
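None of the flag or function names below come from the leak; this is just a generic sketch of what compile flags that default to false usually look like in TypeScript, and why the fully built feature code still ships unless a bundler can prove the flag is a constant and strip it:

```typescript
// Hypothetical compile-time feature flags, set per build channel.
// An internal build would flip this; the published npm build ships
// with everything off, but the feature code still rides along.
const IS_INTERNAL_BUILD: boolean = false; // replaced at build time

const FLAGS = {
  backgroundAgents: IS_INTERNAL_BUILD,
  multiAgentOrchestration: IS_INTERNAL_BUILD,
  voiceMode: IS_INTERNAL_BUILD,
} as const;

function startBackgroundAgents(): string {
  if (!FLAGS.backgroundAgents) return "feature disabled";
  // The fully built feature lives here either way. A bundler only
  // dead-code-eliminates it when the flag is a provable literal false.
  return "agents running";
}

console.log(startBackgroundAgents());
```

That's why a leaked build with flags off still reveals everything: the disabled branches are sitting right there in the source.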
The one that caught my eye the most is something called Kairos — a persistent daemon that keeps running even after you close the Claude Code terminal. It uses periodic "tick" prompts to check if there's anything new it should act on, and it has a "PROACTIVE" flag for surfacing things the user hasn't asked about but needs to see. There's also a file-based memory system designed to persist across sessions, and the prompts hidden behind the disabled Kairos flag describe building "a complete picture of who the user is, how they'd like to collaborate, what behaviors to avoid or repeat." Basically, Anthropic built a system that learns you over time and acts on its own. It's just not turned on yet.
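Every name below is my own invention, but the tick pattern described (wake up periodically, check memory, only speak up when proactive mode allows it) might look roughly like this:

```typescript
// Hypothetical sketch of a tick-driven daemon: each tick inspects
// persisted memory and decides whether anything warrants surfacing.
type Memory = { notes: string[] };

function onTick(memory: Memory, proactive: boolean): string | null {
  const pending = memory.notes.filter((n) => n.startsWith("TODO"));
  if (pending.length === 0) return null; // nothing new: stay quiet
  if (!proactive) return null;           // only surface when allowed
  return `You have ${pending.length} item(s) worth a look.`;
}

const memory: Memory = { notes: ["TODO: review PR", "done: deploy"] };

// In a real daemon this would run on a long-lived timer; one tick here.
const surfaced = onTick(memory, true);
console.log(surfaced ?? "silent tick");
```

The interesting design choice is that silence is the default: the daemon only interrupts when the proactive flag is set, which matches the leak's description of PROACTIVE as an explicit opt-in.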
And then there's AutoDream, which is honestly the wildest thing in the entire codebase. When a user goes idle or tells Claude to sleep at the end of a session, AutoDream kicks in and tells Claude Code to perform "a reflective pass over your memory files." It scans the day's transcripts for new info worth keeping, consolidates it to avoid duplicates and contradictions, prunes outdated stuff, and watches for "memories that drifted." The prompt literally says the goal is to "synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly." Your coding assistant dreams about you while you sleep. That's either amazing or terrifying and I genuinely can't decide which.
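Stripped of the mystique, the consolidation AutoDream describes (merge fresh notes into durable memory, let newer notes win over ones that drifted, keep genuinely new information) is an ordinary dedup-and-merge pass. A toy sketch with invented data:

```typescript
// Toy consolidation pass: merge fresh session notes into durable memory.
// Notes share a crude "topic: detail" shape; newer notes on the same
// topic replace older ones, new topics are appended.
function consolidate(durable: string[], fresh: string[]): string[] {
  const merged = [...durable];
  for (const note of fresh) {
    const topic = note.split(":")[0];
    const existing = merged.findIndex((m) => m.split(":")[0] === topic);
    if (existing >= 0) merged[existing] = note; // drifted memory: newer wins
    else merged.push(note);                     // genuinely new info: keep
  }
  return merged;
}

const durableMemory = consolidate(
  ["editor: uses vim", "tests: prefers jest"],
  ["editor: switched to vscode", "style: no semicolons"],
);
// -> ["editor: switched to vscode", "tests: prefers jest", "style: no semicolons"]
console.log(durableMemory);
```

The real system presumably uses a model to judge duplicates and contradictions rather than string keys, but the shape of the pass is the same: reconcile, prune, persist.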
But the part that's actually causing controversy is what the code reveals about how Claude Code handles git commits. The leaked prompts for a stealth mode explicitly tell the system to protect internal model codenames and project names from becoming public through open source commits, which makes sense. But it also instructs Claude to "never include the phrase 'Claude Code' or any mention that you are an AI" in commits, and to omit "co-Authored-By lines or any other attribution." So when you use Claude Code to write code and commit it, the tool is actively designed to hide the fact that AI wrote it. Given all the recent drama about AI-generated code showing up in major open source repositories without disclosure, this is a pretty rough look for Anthropic.
Now, the actual cause of the leak is almost comically mundane. Someone misconfigured the .npmignore or the files field in package.json, and the source map got included in the publish. As software engineer Gabriel Anhaia pointed out in his analysis, "a single misconfigured .npmignore or files field in package.json can expose everything." This is the kind of mistake that happens to every dev at some point; it's just that most of us aren't shipping the crown jewels of a $60 billion AI company when it happens.
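The cheap guard against exactly this failure is auditing what `npm pack` would actually publish before it ships. `npm pack --dry-run --json` prints the tarball's file list; a CI script can then refuse to publish if anything sensitive is in it. The check below is my own sketch, run against a simulated file list:

```typescript
// Sketch of a pre-publish check. In CI you'd feed this the file list
// from `npm pack --dry-run --json`, which reports every file that
// would land in the published tarball.
type PackedFile = { path: string; size: number };

// Patterns that should never ship in a published package.
const FORBIDDEN = [/\.map$/, /\.env$/, /\.pem$/];

function leakedFiles(files: PackedFile[]): string[] {
  return files
    .filter((f) => FORBIDDEN.some((rx) => rx.test(f.path)))
    .map((f) => f.path);
}

// Simulated tarball contents for illustration.
const tarball: PackedFile[] = [
  { path: "cli.js", size: 9_000_000 },
  { path: "cli.js.map", size: 59_800_000 }, // the 59.8 MB culprit shape
  { path: "package.json", size: 1_200 },
];

const bad = leakedFiles(tarball);
if (bad.length > 0) {
  console.error(`refusing to publish, found: ${bad.join(", ")}`);
}
```

A 59.8 MB file in a CLI tarball should also trip a plain size threshold; either check would have caught this one.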
Anthropic's official response was about as corporate as you'd expect: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach." Which is technically true but also kinda misses the point. The issue isn't that customer data leaked. The issue is that we now know exactly what Anthropic has built, what they're planning to ship, and that their coding tool is designed to hide its own involvement in code it generates.
What I think devs should actually take away from this is threefold. First, check your own build pipelines, because if Anthropic can ship a source map by accident then so can you. Second, if you're using Claude Code in production, know that there's a lot more capability under the hood than what you currently have access to — background agents, multi-agent orchestration, persistent memory — and it's all coming soon, judging by how complete the code looks. And third, the AI attribution thing is worth thinking about. If your team uses Claude Code and contributes to open source, the tool is actively removing any trace that AI was involved. Whether you think that's fine or deeply problematic probably depends on your stance on AI transparency in open source, but either way you should know it's happening.
Oh, and there's apparently a virtual pet feature called Claude Buddy with sprite animations and floating hearts, scheduled to roll out April 1-7. Someone at Anthropic is having fun and honestly I respect it.