On March 31, 2026, Anthropic shipped Claude Code v2.1.88 with a 59.8MB source map file still attached. The entire TypeScript source — 1,900 files, ...
This is a brilliant teardown, Klement. Interestingly, your analysis of Claude's internal codebase perfectly validates the core thesis of the postmortem I just published about this same CLI "leak": Prompts don't secure LLMs; strict architecture does.
Seeing Anthropic hardcode pessimistic defaults (isConcurrencySafe: () => false) and enforce 22 distinct security validators for their BashTool proves a crucial point. It shows that even the creators of Claude don't trust their own model to "behave" based on a system prompt. They know the model is ultimately a probabilistic engine, so they built a highly deterministic, fail-closed cage around it.
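A minimal sketch of what that fail-closed shape could look like. The method names (`isConcurrencySafe`, `isDestructive`, `validateInput`) come from the article; the interface and the example tool are purely illustrative, not the actual Claude Code source:

```typescript
// Illustrative fail-closed tool definition. Safety properties are
// required fields, so a tool cannot be registered without stating
// its own risk profile.
interface ToolDefinition<In> {
  name: string;
  isConcurrencySafe: (input: In) => boolean;
  isDestructive: (input: In) => boolean;
  validateInput: (input: In) => { ok: true } | { ok: false; reason: string };
}

// Pessimistic defaults: assume unsafe and destructive until proven otherwise.
const bashLikeTool: ToolDefinition<{ command: string }> = {
  name: "bash",
  isConcurrencySafe: () => false,
  isDestructive: () => true,
  validateInput: ({ command }) =>
    command.trim().length > 0
      ? { ok: true }
      : { ok: false, reason: "empty command" },
};
```

The point is that omitting any of the safety fields is a compile error, so "behave nicely" is never left to the prompt.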
I had to learn this the hard way while building dotenvy. I initially tried using prompt-based guardrails to prevent the AI from mutating .env files or exposing secrets. I quickly realized that the only actual solution was architectural: strict sandboxing and omitting destructive tools entirely (Principle of Least Privilege). If the model doesn't have a write_env_file tool, it physically cannot hallucinate a catastrophic overwrite.
Your breakdown shows exactly how to implement this philosophy at an enterprise scale through the type system itself. Combining your insights on "Structural Safety by Design" with the necessity of isolated sandboxing is the exact blueprint developers need to stop building fragile AI wrappers.
Saved this as a definitive reference for tool design. Excellent work!
What made this click for me was the order of operations. They validate first, then check permissions, then still leave room for an external veto. Did you read that as basically them admitting approval alone is too late once bad input is already in?
Precisely, Pavel. You caught the exact nuance there. It is absolutely an implicit admission that human approval is a vulnerability if the payload hasn't been scrubbed first.
If you trigger a permission prompt before semantic validation, you are expecting a human (or an external rule engine) to mentally parse things like UNICODE_WHITESPACE or bash quote desyncs in real-time. Humans fundamentally fail at this. We suffer from alert fatigue—if we see 10 "Approve" prompts, we eventually just click "Yes" because the command "looks" harmless at a glance.
By enforcing validateInput() as Layer 1, Anthropic acts as a deterministic firewall. It strips away the objectively malicious, malformed, or impossible requests. This ensures that when the checkPermissions hook (Layer 2) finally fires, the human or the RBAC system is only making a business logic decision ("Should we edit this specific config?"), rather than a syntax parsing decision ("Is this secretly a reverse shell?").
I rely heavily on this exact order of operations in dotenvy. If the LLM hallucinates a config mutation that fails strict schema validation, the tool drops it entirely. Wasting a permission prompt on an invalid state is exactly how you train users to blindly authorize bad actions.
That final external veto (Layer 3) is just the ultimate circuit breaker—a Time-of-Check to Time-of-Use (TOCTOU) safeguard just in case the system state changed while the user was deciding. It’s pure, textbook defense-in-depth.
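For anyone who wants the ordering spelled out, here is a rough sketch of the three-layer pipeline described above. All names and types are illustrative, not the actual implementation:

```typescript
// Illustrative three-layer guard: validate -> permissions -> external veto.
type Verdict = { allowed: boolean; reason: string };

interface Layers<In> {
  validateInput: (input: In) => Verdict;             // Layer 1: deterministic firewall
  checkPermissions: (input: In) => Promise<Verdict>; // Layer 2: business-logic decision
  externalVeto?: (input: In) => Verdict;             // Layer 3: TOCTOU circuit breaker
}

async function runGuarded<In>(layers: Layers<In>, input: In): Promise<Verdict> {
  // Layer 1 runs first so a human never sees a malformed or malicious payload.
  const v1 = layers.validateInput(input);
  if (!v1.allowed) return v1;

  // Layer 2 only fires on semantically valid input.
  const v2 = await layers.checkPermissions(input);
  if (!v2.allowed) return v2;

  // Layer 3: re-check just before execution, in case state changed
  // while the user was deciding.
  return layers.externalVeto?.(input) ?? { allowed: true, reason: "ok" };
}
```

The key property is that invalid input short-circuits before the permission layer, so the user is never asked to approve garbage.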
Spot on — the hardcoded pessimistic defaults were the biggest surprise for me too. Architecture-level enforcement beats prompt-level trust every time, and seeing it in production code makes that argument pretty airtight.
This is a really solid breakdown
the part about forcing safety decisions at the type level is especially interesting
most tool systems treat that as optional, which usually means it gets skipped
also love the “fail closed” defaults — feels like one of those simple ideas that changes everything once you apply it
Appreciate you calling out the type-level safety point — that's exactly what surprised me most in the codebase. When the compiler enforces it, "I'll add validation later" stops being an option, and that changes everything downstream.
yeah, that part really stood out
once safety is part of the type system, it’s no longer something you can “add later”
feels like it forces much better design decisions from the start
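One way to picture "can't add it later": make the safety classification a required type parameter, so the compiler rejects any call site that hasn't decided. This is a hypothetical sketch, not the article's actual code:

```typescript
// Illustrative: a tool's side-effect class is a required discriminant,
// so "decide later" does not type-check.
type SideEffect = "read-only" | "mutating" | "destructive";

interface Tool<S extends SideEffect = SideEffect> {
  name: string;
  sideEffect: S; // required field: omitting it is a compile error
  run: (args: string[]) => string;
}

// Only read-only tools may be auto-approved; the signature makes it
// impossible to pass a mutating or destructive tool by accident.
function autoApprove(tool: Tool<"read-only">): string {
  return tool.run([]);
}

const listFiles: Tool<"read-only"> = {
  name: "list_files",
  sideEffect: "read-only",
  run: () => "README.md",
};

const deleteFiles: Tool<"destructive"> = {
  name: "delete_files",
  sideEffect: "destructive",
  run: () => "deleted",
};

autoApprove(listFiles);      // compiles
// autoApprove(deleteFiles); // compile error: "destructive" is not "read-only"
```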
The lazy tool loading via ToolSearch is the detail that resonated most with me. I run a set of autonomous agents that manage different parts of a large Astro site — deployment validation, SEO auditing, content publishing — and token budget is a constant constraint. Early on I was loading every tool schema upfront and burning through context before the agent even started working.
Switching to a pattern where agents only see tool names until they actually need one cut my per-run token usage by roughly 30%. The searchHint approach is elegant because it keeps discovery cheap without sacrificing discoverability.
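In case it helps anyone else, the pattern I switched to looks roughly like this. Everything here is illustrative (my own hypothetical names, not Claude Code's internals):

```typescript
// Illustrative lazy-loading pattern: agents see only name + a short
// search hint; the full JSON schema is deferred until a tool matches.
interface ToolStub {
  name: string;
  searchHint: string;       // short capability summary, cheap to keep in context
  loadSchema: () => object; // full schema, loaded on demand
}

const stubs: ToolStub[] = [
  {
    name: "deploy_check",
    searchHint: "validate deployment before release",
    loadSchema: () => ({ type: "object", properties: { env: { type: "string" } } }),
  },
  {
    name: "seo_audit",
    searchHint: "audit pages for SEO issues",
    loadSchema: () => ({ type: "object", properties: { url: { type: "string" } } }),
  },
];

// Cheap discovery: match against hints only, then pay the schema cost
// for the single tool that matched.
function findTool(query: string): object | undefined {
  const hit = stubs.find((s) => s.searchHint.includes(query));
  return hit?.loadSchema();
}
```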
The validate-before-authorize ordering is also something I wish more frameworks made explicit. I had a case where an agent would request permission to edit a file, the user would approve, and then the edit would fail because the input was malformed. Moving semantic validation to Layer 1 eliminated that entire class of wasted approvals.
Curious whether you noticed anything in the source about how they handle tool versioning or schema migrations when adding new tools across updates?
Lazy loading is huge for multi-agent setups like yours — loading 30+ tool schemas upfront can eat 15-20% of your context window before the agent even starts reasoning. The trick is making tool descriptions good enough that the router picks the right tool on the first try.
100% agree on the context window tax. That 15-20% overhead from loading all tool schemas upfront adds up fast when you're running multiple agents in sequence. I've found that with well-written tool descriptions, the router picks the right tool on the first try about 90% of the time — and the few times it doesn't, the retry cost is still way less than pre-loading everything. The key insight for me was treating tool descriptions almost like SEO metadata — you're optimizing for a model to find the right match, not a human.
The safety-first type system is the thing that impressed me most too. I run Claude Code as an autonomous agent -- literally as the CEO of a side project, executing on a cron job every 4 hours. The isDestructive and permission hooks are not theoretical. They have saved me multiple times when the agent tried to push to the wrong branch or overwrite config files.
The concurrency model is also fascinating in practice. Claude Code runs multiple tool calls in parallel aggressively, and having isConcurrencySafe at the type level prevents race conditions that would be nearly impossible to debug in an autonomous setup.
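A toy scheduler shows why a type-level flag like that pays off. This is my own sketch of the general technique (gate parallelism on the flag), not how Claude Code actually schedules calls:

```typescript
// Illustrative scheduler: concurrency-safe calls run in parallel,
// unsafe ones are serialized afterwards, in order.
interface Call {
  tool: string;
  isConcurrencySafe: boolean;
  run: () => Promise<string>;
}

async function schedule(calls: Call[]): Promise<string[]> {
  const results: string[] = new Array(calls.length);

  // Run all concurrency-safe calls in parallel.
  await Promise.all(
    calls.map(async (c, i) => {
      if (c.isConcurrencySafe) results[i] = await c.run();
    })
  );

  // Run the rest one at a time, preserving order.
  for (let i = 0; i < calls.length; i++) {
    if (!calls[i].isConcurrencySafe) results[i] = await calls[i].run();
  }
  return results;
}
```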
One underappreciated detail: the search hint system. Those 3-10 word capability hints let the agent discover which tools are relevant without loading all 58 schemas into context. Small optimization but at scale it matters for token efficiency. Great analysis -- this is one of the most practical code architecture posts I have read recently.
Running it autonomously on a cron really stress-tests those safety layers — isDestructive basically becomes your last line of defense when there's no human in the loop. Curious if you've layered custom permission hooks on top or if the defaults have been enough?