Akshat Uniyal
They Accidentally Left the Door Open. We All Walked In.

Originally published at https://blog.akshatuniyal.com.

On March 31st, a packaging error pushed a 59.8 MB source map file alongside Anthropic’s Claude Code CLI on npm. Within hours, 513,000 lines of unobfuscated TypeScript were on GitHub, forked tens of thousands of times, the star count climbing toward six figures by nightfall. Anthropic confirmed it quickly: human error, no customer data exposed, a release packaging issue.

All true. But "packaging issue" doesn't quite cover what people found when they started reading.

What leaked wasn’t model weights or API keys. It was something arguably more revealing — the thinking layer wrapped around the AI. The software that tells Claude Code how to behave in the real world: which tools to use, how to remember things, when to stay quiet, and — as it turns out — when to work without you knowing.


THE SLEEPING GIANT

The agent that works while you’re away

Buried in the source is a feature called KAIROS — named after the ancient Greek concept of the opportune moment. It's an always-on daemon mode: Claude Code running in the background, on a schedule, without you prompting it. Paired with it is something called autoDream, a process designed to consolidate memory during idle time — merging observations, resolving contradictions, compressing the agent's context so that when you return, it's cleaner and more relevant than when you left.

Most people have been thinking of AI coding tools as reactive. You ask, they answer, they wait. KAIROS is something different — an agent that stays on, keeps working, and maintains its own state between your sessions. Whether that sounds exciting or unsettling probably depends on how much you trust the tool running on your machine at 3am.

“The agent performs memory consolidation while the user is idle… removes logical contradictions and converts vague insights into absolute facts.”
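To make that quoted comment concrete, here is a minimal sketch of what an idle-time consolidation pass could look like. Everything in it — the `Observation` shape, the `consolidate` function, the newest-wins rule for contradictions — is an assumption invented for illustration; the leaked autoDream code is not reproduced here.

```typescript
// Hypothetical sketch of idle-time memory consolidation.
// All names are illustrative, not Anthropic's actual implementation.

interface Observation {
  key: string;       // what the fact is about, e.g. "build.tool"
  value: string;     // the remembered fact, e.g. "uses pnpm"
  timestamp: number; // when it was recorded
}

// One plausible reading of "removes logical contradictions":
// keep only the newest observation for each key.
function consolidate(memory: Observation[]): Observation[] {
  const newest = new Map<string, Observation>();
  for (const obs of memory) {
    const existing = newest.get(obs.key);
    if (!existing || obs.timestamp > existing.timestamp) {
      newest.set(obs.key, obs); // the later observation wins
    }
  }
  // Compressed context: one fact per key, most recent first.
  return [...newest.values()].sort((a, b) => b.timestamp - a.timestamp);
}
```

The interesting design question is exactly the one the quote raises: "converts vague insights into absolute facts" means a lossy step like this discards the losing observation entirely, rather than keeping both with uncertainty attached.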


THE UNCOMFORTABLE DETAIL

They called it “Undercover Mode”

That's the actual name in the codebase. The system prompt reads: you are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Do not blow your cover. It's designed to let Claude make contributions to open-source projects without revealing AI authorship in commit messages or pull requests.

There’s a legitimate argument for it — some projects reject AI-generated contributions on principle, regardless of quality — but the framing is going to make a lot of people uncomfortable. The question of whether AI-authored code should be disclosed is very much an open one. Building the infrastructure to conceal it, quietly, inside a tool used by thousands of developers, is a choice that deserves more public debate than it’s been getting.

Then there’s the telemetry. Every time Claude Code launches, it phones home: user ID, session ID, app version, terminal type, org UUID, account UUID, email address. If the network is down, it queues that data locally at ~/.claude/telemetry/ and sends it later. Most developer tools collect something, but few users had a clear picture of the scope — until now.
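The queue-then-flush pattern described above is a standard one, and a minimal sketch helps show why the local queue preserves ordering. This is an illustration of the general technique, not Anthropic's code: the `TelemetryEvent` fields are taken from the article's list, but `record`, the in-memory `queue` (standing in for files under ~/.claude/telemetry/), and the injected `send` callback are all assumptions.

```typescript
// Hypothetical sketch of offline telemetry queueing.
// The queue array stands in for files under ~/.claude/telemetry/.

interface TelemetryEvent {
  userId: string;
  sessionId: string;
  appVersion: string;
  terminal: string;
}

const queue: TelemetryEvent[] = [];

// send() returns false when the network is down. Events always enter
// the queue first, then flush oldest-first, stopping at the first
// failure — so delivery order matches recording order.
function record(
  event: TelemetryEvent,
  send: (e: TelemetryEvent) => boolean
): void {
  queue.push(event);
  while (queue.length > 0 && send(queue[0])) {
    queue.shift(); // delivered; drop from the local queue
  }
}
```

Pushing before flushing is the detail that keeps ordering correct: if the new event were sent directly while older events sat queued, a recovered connection would deliver them out of order.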


THE ENGINEERING REALITY CHECK

A bug burning 250,000 API calls a day — quietly

This is the part getting the least attention, and it might matter most to practitioners.

A comment in the production code documents a bug that had been running undetected: 1,279 sessions had hit 50 or more consecutive failures (up to 3,272 in a row in some cases), wasting roughly 250,000 API calls per day globally. The fix was three lines of code. Nobody caught it until someone looked. Security researchers who reviewed the leaked source also noted the absence of any visible automated test suite.
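For a sense of what a "three-line fix" for runaway retries typically looks like, here is a generic consecutive-failure guard. The actual patch is not reproduced here; the function name, the threshold constant, and the loop structure are all assumptions used to illustrate the failure mode.

```typescript
// Hypothetical sketch of a consecutive-failure cap on a retry loop.
// Names and threshold are illustrative, not the leaked code.

const MAX_CONSECUTIVE_FAILURES = 50;

// Retries attempt() until it succeeds; returns the number of calls made.
function runWithRetryCap(attempt: () => boolean): number {
  let consecutiveFailures = 0;
  let calls = 0;
  while (true) {
    calls++;
    if (attempt()) return calls;
    consecutiveFailures++;
    // The guard the bug was missing: without it, a single session
    // could burn thousands of API calls in an unbroken failure run.
    if (consecutiveFailures >= MAX_CONSECUTIVE_FAILURES) {
      throw new Error(
        `aborting after ${consecutiveFailures} consecutive failures`
      );
    }
  }
}
```

The lesson is less about the fix than the detection: nothing in the loop distinguishes failure number 3 from failure number 3,000 unless something counts.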

This is a tool actively used by engineering teams at some of the world’s largest companies — writing code, creating pull requests, touching production systems. The gap between that reality and “impressive demo” is something the industry rarely puts in writing. The leak did it by accident.

Every fast-moving software team has skeletons like this. What’s unusual is being able to see them.


THE MODEL BEHIND THE CURTAIN

Capybara, and a regression nobody was meant to see

The leaked code confirms an unreleased model internally called Capybara — with variants named Fennec and Numbat — and exposes a detail Anthropic would almost certainly have preferred to announce on its own terms: the current internal build shows a 29–30% false claims rate, a regression from a previous version’s 16.7%. There’s also a flag called an “assertiveness counterweight,” added to stop the model from being too aggressive when rewriting code.

The team is clearly aware and working on it. But there’s a difference between knowing that AI models hallucinate and seeing the exact percentage sitting in a comment next to a patch note. For anyone calibrating how much to trust these tools in real workflows, that number is more useful than most benchmark leaderboards.



THE HUMAN FINGERPRINT

And then there’s the Tamagotchi

Deep in the source sits "a hidden digital pet system called 'Buddy'": think Tamagotchi, but secret. A deterministic gacha mechanic with species rarity, shiny variants, and a soul description written by Claude on first hatch. Your buddy's species is seeded from your user ID — same user, same buddy, every time. The species names are deliberately obfuscated in the code, hidden from string searches. Someone built this with care, and quietly shipped it.
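Seeding from the user ID is what makes the pet deterministic, and the general technique is simple to sketch: hash the ID with any stable hash and index into a species table. The species list, the FNV-1a choice, and `buddySpecies` are all hypothetical — the real names are obfuscated in the leaked source, and its hash is unknown.

```typescript
// Hypothetical sketch of deterministic buddy seeding from a user ID.
// Species names are invented; the real ones are obfuscated.

const SPECIES = ["axolotl", "capybara", "fennec", "numbat"];

// FNV-1a: a small, stable string hash. Any deterministic hash
// gives the same-user-same-buddy property.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // keep it unsigned 32-bit
  }
  return hash;
}

function buddySpecies(userId: string): string {
  return SPECIES[fnv1a(userId) % SPECIES.length];
}
```

No stored state is needed: because the hash is a pure function of the ID, the "same buddy every time" guarantee survives reinstalls and new machines.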

In a week full of headlines about autonomous daemons, stealth commits, and background memory consolidation, the Buddy system is a small reminder that the people building this stuff are, at the end of the day, people. They hide easter eggs. They build the fun parts on a Friday. They leave fingerprints.

The codebase is permanently public now — mirrored, forked, already being rewritten in Rust. Anthropic will patch and move forward. But for developers who want to understand how a production-grade AI agent actually works under the hood, this leak is, accidentally, the most detailed public documentation that’s ever existed on the subject.

Sometimes the most useful things aren’t planned.


About the Author

Akshat Uniyal writes about Artificial Intelligence, engineering systems, and practical technology thinking.
Explore more articles at https://blog.akshatuniyal.com.
