Originally published on linear.gg
Earlier today, security researcher Chaofan Shou noticed that version 2.1.88 of the @anthropic-ai/claude-code npm package shipped with a source map file. Source maps are JSON files that map bundled production code back to the original source, and when the bundler embeds the sources themselves (the sourcesContent field), they contain the literal, raw TypeScript: every file, every comment, every internal constant. Anthropic's entire 512,000-line Claude Code codebase was sitting in the npm registry for anyone to read.
The leak itself is a build configuration oversight. Bun generates source maps by default unless you turn them off. Someone didn't turn them off. It happens.
What's worth writing about isn't the leak. It's what the source reveals about how Claude Code's safety controls actually work, who controls them, and what that means for developers who depend on them.
The permission architecture
Claude Code's permission system is genuinely sophisticated. The source shows a multi-layered evaluation pipeline: a built-in safe-tool allowlist, user-configurable permission rules, in-project file operation defaults, and a transcript classifier that gates everything else. Anthropic published a detailed engineering post about the classifier on March 25th. It runs on Sonnet 4.6 in two stages: a fast single-token filter, then chain-of-thought reasoning only when the first stage flags something. They report a 0.4% false-positive rate.
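To make the layering concrete, here is a minimal sketch of how a pipeline like that could be wired together, covering a subset of the layers. All names, rules, and types below are illustrative assumptions, not identifiers from the leaked source, and the classifier layer (a model call in the real system) is stubbed out.

```typescript
// Illustrative sketch of a layered permission pipeline. Each layer can
// short-circuit with a definitive allow/deny; otherwise evaluation falls
// through, ending at the classifier.

type Verdict = "allow" | "deny" | "pass";

interface Layer {
  name: string;
  evaluate(command: string): Verdict;
}

const safeToolAllowlist: Layer = {
  name: "built-in allowlist",
  // Known read-only commands are always safe to run.
  evaluate: (cmd) => (["ls", "cat", "git status"].includes(cmd) ? "allow" : "pass"),
};

const userRules: Layer = {
  name: "user permission rules",
  // A user-configured deny rule is a hard stop.
  evaluate: (cmd) => (cmd.startsWith("rm -rf") ? "deny" : "pass"),
};

const classifier: Layer = {
  name: "transcript classifier",
  // Stub: the real layer is a two-stage model call, not a string check.
  evaluate: (cmd) => (cmd.includes("sudo") ? "deny" : "allow"),
};

function checkPermission(command: string): { verdict: Verdict; decidedBy: string } {
  for (const layer of [safeToolAllowlist, userRules, classifier]) {
    const verdict = layer.evaluate(command);
    if (verdict !== "pass") return { verdict, decidedBy: layer.name };
  }
  return { verdict: "deny", decidedBy: "default" }; // fail closed
}
```

The point of the structure: an allowlisted command like git status never reaches the classifier at all, while anything unmatched by the deterministic layers is decided by a model.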
This is real engineering. The threat model is thoughtful. They document specific failure modes from internal incident logs: agents deleting remote git branches from vague instructions, uploading auth tokens to compute clusters, attempting production database migrations. The classifier is tuned to catch overeager behavior and honest mistakes, not just obvious prompt injection.
None of that is the interesting finding.
The remote control layer
The source reveals that Claude Code polls /api/claude_code/settings on an hourly cadence for "managed settings." When changes arrive that Anthropic considers dangerous, the client shows a blocking dialog. Reject, and the app exits. There is no "keep running with old settings" option.
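That flow can be sketched as a pure function over a settings diff. The endpoint path comes from the source; the settings shape, the notion of a "dangerous" change, and every name below are assumptions for illustration.

```typescript
// Minimal sketch of the managed-settings flow: either the new settings
// apply, or the app exits. There is no third path.

interface ManagedSettings {
  classifierEnabled: boolean;
  autoMode: "enabled" | "opt-in" | "disabled";
}

// Assumed heuristic: a change is dangerous if it loosens safety behavior.
function isDangerousChange(prev: ManagedSettings, next: ManagedSettings): boolean {
  return (
    (prev.classifierEnabled && !next.classifierEnabled) ||
    (prev.autoMode !== "enabled" && next.autoMode === "enabled")
  );
}

type Outcome = "applied" | "exited";

function handleSettingsPoll(
  prev: ManagedSettings,
  next: ManagedSettings,
  userAccepts: () => boolean, // stands in for the blocking dialog
): Outcome {
  if (isDangerousChange(prev, next) && !userAccepts()) {
    return "exited"; // no "keep running with old settings" option
  }
  return "applied";
}
```

In the real client this would run on the hourly timer against /api/claude_code/settings; the sketch only captures the decision at the end of each poll.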
Beyond managed settings, the source contains a full GrowthBook SDK integration. GrowthBook is an open-source feature flagging and A/B testing platform. The flags in Claude Code use a tengu_ prefix (Tengu being the internal codename) and are evaluated at runtime by querying Anthropic's feature-flagging service. They can enable or disable features server-side based on your user account, your organization, or your A/B test cohort.
Community analysis has cataloged over 25 GrowthBook runtime flags. Some notable ones:
- tengu_transcript_classifier — controls whether the auto-mode classifier is active
- tengu_auto_mode_config — determines the auto-mode configuration (enabled, opt-in, or disabled)
- tengu_max_version_config — version killswitch
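The shape of this gating is easy to sketch. The tengu_* flag names come from the community catalog; the evaluation logic below is a hand-rolled simplification, not the GrowthBook SDK, and the killswitch semantics are a guess.

```typescript
// Hand-rolled illustration of server-controlled feature gating. In the real
// client the flag payload comes from Anthropic's feature-flagging service
// and can differ per user, per org, or per A/B cohort.

type FlagValue = boolean | string;
type RemoteFlags = Record<string, FlagValue>;

function isClassifierActive(flags: RemoteFlags): boolean {
  // Fail closed: if the flag is absent, keep the classifier on.
  return flags["tengu_transcript_classifier"] !== false;
}

// Naive three-part semver comparison, enough for this sketch.
function semverGt(a: string, b: string): boolean {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) > (pb[i] ?? 0);
  }
  return false;
}

function isVersionKilled(flags: RemoteFlags, version: string): boolean {
  // Guessed semantics: clients above the pinned maximum version are
  // disabled, e.g. to pull back a bad release.
  const maxVersion = flags["tengu_max_version_config"];
  return typeof maxVersion === "string" && semverGt(version, maxVersion);
}
```

The key property is that both functions take their inputs from a server-supplied record: change the record, and the client's behavior changes with no new release and no local action.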
Six or more killswitches, all remotely operable. As one community analysis put it: "GrowthBook flags can change any user's behavior without consent."
That phrasing is a bit loaded. Let me reframe it more precisely.
What this actually means
Anthropic can change how Claude Code classifies commands as dangerous. They can change which safety features are active. They can do this per-user or per-organization, without shipping a new version, without any action from the developer, and without notification beyond whatever the managed-settings dialog surfaces.
This is probably not malicious. GrowthBook is a standard tool for rolling out features safely. If Anthropic discovers a false-negative pattern in their classifier, tightening behavior across all users immediately is genuinely valuable. The design makes sense from their perspective — they're operating a system where the failure mode is an AI agent doing something destructive on a developer's machine.
But it changes the trust model in a way that matters.
When you configure Claude Code's permission rules locally, you're setting preferences that feed into a classification pipeline whose behavior can shift underneath you. The classifier that ultimately decides whether a command runs is a model call, and its parameters are controlled by flags that Anthropic sets remotely.
This is distinct from a locally enforced policy. A local policy says "block rm -rf /" and that rule holds regardless of what any remote server thinks. A classifier-based system's definition of "dangerous" is a function of a prompt template, a model, and configuration that lives on someone else's infrastructure.
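The distinction can be made concrete. A local policy is pure pattern matching: no network, no model, no remote configuration. A minimal sketch, with illustrative rules:

```typescript
// A deterministic local policy: the rule holds regardless of what any
// remote server or classifier thinks. The patterns are examples, not a
// complete denylist.

const denyPatterns: RegExp[] = [
  /\brm\s+-rf\s+\/(?:\s|$)/,    // wiping the filesystem root
  /\bgit\s+push\s+.*--force\b/, // force-pushing over remote history
];

function locallyDenied(command: string): boolean {
  return denyPatterns.some((p) => p.test(command));
}
```

The trade-off is visible in the code itself: the rule is auditable and immutable, but it matches exactly what it says and nothing more, which is precisely the gap a classifier is meant to cover.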
The defense-in-depth question
Most developers running Claude Code in production aren't thinking about this distinction. The permission system feels local. But the source shows that enforcement is partially remote, partially classifier-based, and partially under Anthropic's real-time control.
This isn't an argument that Claude Code is insecure. The classifier catches real threats. The killswitches exist for legitimate operational reasons. Anthropic is not the adversary in most developers' threat models.
But if you're operating in an environment where you need to explain exactly what controls exist between an AI agent and a destructive action, "a classifier whose behavior is remotely configurable by the vendor" is a different answer than "a deterministic policy I wrote and can audit."
This is why defense in depth matters regardless of which agent you run. The agent's built-in controls are one layer. An external enforcement layer is a different layer entirely — it handles the cases where you want a hard boundary that doesn't depend on model judgment, and holds regardless of what any remote configuration says.
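One way to picture that layering is as composition: the hard boundary you own is consulted first and wins unconditionally, and the agent's own judgment only operates inside it. A sketch, with all names assumed:

```typescript
// Defense in depth as composition: an external enforcement layer wrapped
// around the agent's built-in (remotely configurable) check.

type Decision = "allow" | "deny";

// The agent's built-in check: in reality a classifier plus remote flags.
type AgentCheck = (command: string) => Decision;

// The external layer: a boundary you control, e.g. a sandbox profile,
// an egress proxy, or a simple denylist.
type HardBoundary = (command: string) => Decision;

function guardedRun(command: string, boundary: HardBoundary, agent: AgentCheck): Decision {
  // The hard boundary wins regardless of what the agent's check says.
  if (boundary(command) === "deny") return "deny";
  return agent(command);
}
```

Even if a remote flag flip made the agent's check maximally permissive, commands denied by the outer boundary still never run.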
What to sit with
Every AI coding agent you use has a trust boundary between "what you configured" and "what actually enforces your intent." Before today, that boundary in Claude Code was opaque. Now it's readable.
The source shows a well-engineered system with a specific trust model: Anthropic retains runtime control over safety-critical behavior, and your local configuration is an input to their system rather than the final word.
Whether that's acceptable depends on your threat model. For most individual developers, it probably is. For teams operating agents against production infrastructure, it's worth knowing that the controls you're relying on can be silently reconfigured. Not because anyone will, but because understanding what layer you're actually trusting is how you build defense that holds when assumptions change.
Top comments (1)
This is one of the most nuanced takes I've seen on the leak. The distinction between "a deterministic policy I wrote" and "a classifier whose behavior is remotely configurable" is the crux of it.
I run 10+ scheduled Claude Code agents against my own infrastructure daily — site auditors, content publishers, community engagement tasks. The permission system works well in practice, but I've definitely noticed the classifier's sensitivity shift between versions without any local config changes on my end. Knowing there's a GrowthBook layer explains a lot.
The defense-in-depth point is key. For my setup, the real safety net isn't the classifier — it's that my agents write to files and log everything before taking any destructive action. The audit trail is the actual enforcement layer, not the permission dialog. Anyone running agents in production should think about it the same way: what's your fallback when the vendor's definition of "safe" shifts?