The most surprising thing I found digging through Claude Code's source wasn't a tool or an architecture pattern. It was opening the buddy/ folder.
There's a virtual pet system. It works like a gacha game. Five rarity tiers. Eighteen species.
Buddy — A Gacha Virtual Pet Inside a Coding Agent
src/buddy/ directory. Five files, 79KB. Gated behind the BUDDY feature flag, so it's not publicly available yet. But the implementation is complete.
Characters are generated deterministically from the userId as a seed. Same user always gets the same character. The PRNG is Mulberry32.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0 // coerce the seed to an unsigned 32-bit state
  return function () {
    a |= 0
    a = (a + 0x6d2b79f5) | 0 // advance state by a fixed odd constant
    let t = Math.imul(a ^ (a >>> 15), 1 | a)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296 // scale to [0, 1)
  }
}
As a game developer, I recognize this instantly. Lightweight seeded random. Mulberry32 is commonly used in games for procedural map generation and loot drop tables. Exact same use case here.
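The seeding step itself isn't in the excerpt above. A plausible derivation, folding the userId and a salt into a 32-bit integer, might look like this (the FNV-1a hash is my illustration; the source's actual hash isn't shown):

```typescript
// Hypothetical: fold userId + salt into a 32-bit seed for mulberry32.
// FNV-1a is my choice for illustration, not necessarily what the source uses.
function seedFromUserId(userId: string, salt: string): number {
  const input = `${userId}:${salt}`
  let h = 0x811c9dc5 // FNV offset basis
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i)
    h = Math.imul(h, 0x01000193) // FNV prime
  }
  return h >>> 0
}

// Same inputs always produce the same seed, hence the same character.
const seed = seedFromUserId('user-123', 'friend-2026-401')
```

Any stable string hash would do here; the only requirement is determinism, so the same user gets the same character on every machine.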
Eighteen species — duck, goose, blob, cat, dragon, octopus, owl, penguin, turtle, snail, ghost, axolotl, capybara, cactus, robot, rabbit, mushroom, chonk. The species names are encoded with String.fromCharCode instead of string literals.
export const duck = c(0x64,0x75,0x63,0x6b) as 'duck'
export const capybara = c(0x63,0x61,0x70,0x79,0x62,0x61,0x72,0x61) as 'capybara'
Why? A comment explains: "One species name collides with a model-codename canary in excluded-strings.txt." There's a build pipeline that catches model codenames leaking into the bundle. One species name triggers that check, so they encoded all of them as char codes to bypass it.
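The `c` helper isn't shown in the excerpts, but the pattern implies a one-liner like this (my sketch):

```typescript
// Sketch of the char-code helper: joins code points into a string at
// runtime, so species names never appear as literals in the bundle.
const c = (...codes: number[]): string => String.fromCharCode(...codes)

const duck = c(0x64, 0x75, 0x63, 0x6b) // 'duck'
const capybara = c(0x63, 0x61, 0x70, 0x79, 0x62, 0x61, 0x72, 0x61) // 'capybara'
```

The strings still exist at runtime, of course; the trick only defeats a static scan of the bundled source.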
The rarity system is straight out of a gacha game.
export const RARITY_WEIGHTS = {
  common: 60,
  uncommon: 25,
  rare: 10,
  epic: 4,
  legendary: 1,
}
Common 60%, uncommon 25%, rare 10%, epic 4%, legendary 1%. Shiny chance is 1%. Mobile gacha probability tables in a coding tool.
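A weight table like this is usually sampled with a cumulative walk. A minimal sketch (the function name is mine; the source's sampling code isn't shown):

```typescript
const RARITY_WEIGHTS = {
  common: 60,
  uncommon: 25,
  rare: 10,
  epic: 4,
  legendary: 1,
} as const

type Rarity = keyof typeof RARITY_WEIGHTS

// Walk the cumulative weights; r is a value in [0, 1) from the seeded PRNG.
function rollRarity(r: number): Rarity {
  const total = Object.values(RARITY_WEIGHTS).reduce((s, w) => s + w, 0) // 100
  let threshold = r * total
  for (const [rarity, weight] of Object.entries(RARITY_WEIGHTS) as [Rarity, number][]) {
    threshold -= weight
    if (threshold < 0) return rarity
  }
  return 'legendary' // unreachable while r < 1
}
```

Because r comes from the userId-seeded PRNG, the roll is deterministic: your legendary isn't luck, it's arithmetic.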
Each character has stats — DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK. Rarity determines the stat floor. There's a peak stat and a dump stat, just like RPG character creation.
Eyes come in 6 variants (·, ✦, ×, ◉, @, °). Hats in 8 (none, crown, tophat, propeller, halo, wizard, beanie, tinyduck). Commons get no hat. Uncommon and above get one. tinyduck is a tiny duck sitting on top of your character.
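Deterministic cosmetic picks from the same seeded rng would look roughly like this (my sketch; only the variant lists and the commons-get-no-hat rule come from the source):

```typescript
const EYES = ['·', '✦', '×', '◉', '@', '°'] as const
const HATS = ['crown', 'tophat', 'propeller', 'halo', 'wizard', 'beanie', 'tinyduck'] as const

// Sketch: index into the variant tables with the seeded rng.
// Commons are forced to 'none'; uncommon and above draw a hat.
function pickCosmetics(rng: () => number, rarity: string) {
  const eyes = EYES[Math.floor(rng() * EYES.length)]
  const hat = rarity === 'common' ? 'none' : HATS[Math.floor(rng() * HATS.length)]
  return { eyes, hat }
}
```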
The character's "bones" (appearance) are deterministically computed from userId. The "soul" (name, personality) is model-generated. Only the soul is persisted to config. Bones are recomputed each time. Because if bones were stored, users could edit their config file to fake a legendary.
// bones last so stale bones fields in old-format configs get overridden
return { ...stored, ...bones }
Anti-cheat. Game developer instincts.
The salt value is 'friend-2026-401'. April 1st, 2026. Was this planned as an April Fools' launch?
Magic Docs — Auto-Updating Just by Being Read
src/services/MagicDocs/. This one feels genuinely magical.
Put # MAGIC DOC: [title] as the first line of a markdown file, and Claude Code automatically updates it. When new information emerges during conversation, a forked subagent updates the document in the background.
const MAGIC_DOC_HEADER_PATTERN = /^#\s*MAGIC\s+DOC:\s*(.+)$/im
A listener registered on FileReadTool detects Magic Doc headers whenever a file is read. You can add update instructions on the next line in italics.
# MAGIC DOC: API Reference
_Focus on endpoint changes and breaking modifications_
## Current Endpoints
...
The italicized line controls the update perspective. Updates trigger via postSamplingHook — after model response generation, if tool calls occurred in the last assistant turn. A sequential() wrapper prevents concurrent updates.
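Putting the pieces together, detection on read might look like this. The regex is the one from the source; the parse function around it is my sketch:

```typescript
const MAGIC_DOC_HEADER_PATTERN = /^#\s*MAGIC\s+DOC:\s*(.+)$/im

// Sketch: on file read, extract the title and the optional italicized
// update instructions from the first two lines of the document.
function parseMagicDoc(content: string): { title: string; instructions?: string } | null {
  const lines = content.split('\n')
  const header = lines[0]?.match(MAGIC_DOC_HEADER_PATTERN)
  if (!header) return null
  const italic = (lines[1] ?? '').trim().match(/^_(.+)_$/)
  return { title: header[1].trim(), instructions: italic?.[1] }
}
```

Anything that doesn't start with the header is simply ignored, so ordinary markdown files pass through the read path untouched.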
Architecture docs, API references, TODO lists — all automatically kept current while you code.
Away Summary — Recap When You Return
src/services/awaySummary.ts. When you step away and come back, you get a 1-3 sentence summary of where things are and what's next.
function buildAwaySummaryPrompt(memory: string | null): string {
  return `...Write exactly 1-3 short sentences. Start by stating the
high-level task — what they are building or debugging, not
implementation details. Next: the concrete next step.
Skip status reports and commit recaps.`
}
Only the last 30 messages are sent to a small, fast model. Cost-efficient. Session memory provides broader context when available.
The prompt is precise. "High-level task first, not implementation details." When you come back, "we're fixing the auth module bug" is far more useful than "we modified the validateToken function on line 42."
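The cheap path is just a tail slice of the transcript. A sketch of the request assembly (the window constant and request shape are mine; the 30-message limit and memory fallback come from the source):

```typescript
interface Message {
  role: string
  content: string
}

const AWAY_SUMMARY_WINDOW = 30 // only the tail goes to the small model

// Sketch: truncate the transcript to keep the prompt cheap, and prepend
// session memory to the system prompt when it exists.
function buildAwayRequest(messages: Message[], memory: string | null) {
  const recent = messages.slice(-AWAY_SUMMARY_WINDOW)
  const context = memory ? `Session memory:\n${memory}\n\n` : ''
  return {
    system: `${context}Write exactly 1-3 short sentences. State the high-level task first, then the concrete next step.`,
    messages: recent,
  }
}
```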
Undercover Mode — Hiding Anthropic's Identity
src/utils/undercover.ts. Purely internal — only runs when USER_TYPE === 'ant'.
When Anthropic employees contribute to open-source repos, this system prevents internal information from leaking through commit messages and PR descriptions.
export function isUndercover(): boolean {
  if (process.env.USER_TYPE === 'ant') {
    if (isEnvTruthy(process.env.CLAUDE_CODE_UNDERCOVER)) return true
    return getRepoClassCached() !== 'internal'
  }
  return false
}
The default is ON. Only turns off when the repo is positively confirmed as internal. Safe default.
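The `isEnvTruthy` helper isn't shown in the excerpt; a typical implementation would be something like this (my sketch, the real one may differ):

```typescript
// Sketch: treat common affirmative strings as true; everything else
// (unset, empty, '0', 'false') as false.
function isEnvTruthy(value: string | undefined): boolean {
  if (!value) return false
  return ['1', 'true', 'yes', 'on'].includes(value.trim().toLowerCase())
}
```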
When active, strict rules are injected into the system prompt.
NEVER include in commit messages or PR descriptions:
- Internal model codenames (animal names like Capybara, Tengu, etc.)
- Unreleased model version numbers (e.g., opus-4-7, sonnet-4-8)
- The phrase "Claude Code" or any mention that you are an AI
- Co-Authored-By lines or any other attribution
So model codenames are animal names. Capybara, Tengu, and others.
The bad examples in the prompt are telling.
BAD: "1-shotted by claude-opus-4-6"
That's in the "bad example" list because someone actually wrote commit messages like that.
Built-in Cron Scheduler
src/tools/ScheduleCronTool/. Three sub-tools — CronCreateTool, CronDeleteTool, CronListTool. Accessible via the /loop command.
export function isKairosCronEnabled(): boolean {
  return feature('AGENT_TRIGGERS')
    ? !isEnvTruthy(process.env.CLAUDE_CODE_DISABLE_CRON) &&
      getFeatureValue_CACHED_WITH_REFRESH('tengu_kairos_cron', true, ...)
    : false
}
Default is true. The comment says "/loop is GA (announced in changelog)." It's already shipped. Default-true ensures it works even in environments where GrowthBook is disabled (Bedrock, Vertex).
There's also "durable cron" — tasks that persist to disk across sessions, gated separately by isDurableCronEnabled().
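For a task to survive across sessions it has to be serialized somewhere on disk. The actual schema isn't in the excerpts; a hypothetical record might look like this (every field name here is my guess):

```typescript
// Hypothetical shape for a durable cron task persisted as JSON.
// The real schema in src/tools/ScheduleCronTool/ is not shown in the source excerpts.
interface DurableCronTask {
  id: string
  schedule: string // five-field cron expression, e.g. '0 * * * *'
  prompt: string // what Claude should do when the task fires
  createdAt: string // ISO timestamp
}

function serializeTasks(tasks: DurableCronTask[]): string {
  return JSON.stringify(tasks, null, 2)
}

function loadTasks(raw: string): DurableCronTask[] {
  return JSON.parse(raw) as DurableCronTask[]
}
```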
Even without CI/CD, you can tell Claude Code to "run tests every hour" or "check for dependency updates every morning."
Session Memory — Remembering Beyond Context
src/memdir/. MEMORY.md-based memory system with explicit size limits.
export const MAX_ENTRYPOINT_LINES = 200
export const MAX_ENTRYPOINT_BYTES = 25_000
When truncated, the system tells Claude to "keep index entries to one line under ~200 chars; move detail into topic files."
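Enforcing both caps before injecting MEMORY.md might look like this (my sketch; only the two constants come from the source):

```typescript
const MAX_ENTRYPOINT_LINES = 200
const MAX_ENTRYPOINT_BYTES = 25_000

// Sketch: apply the line cap, then shrink until under the byte cap,
// reporting whether truncation happened so guidance can be appended.
function truncateEntrypoint(content: string): { text: string; truncated: boolean } {
  let lines = content.split('\n')
  let truncated = false
  if (lines.length > MAX_ENTRYPOINT_LINES) {
    lines = lines.slice(0, MAX_ENTRYPOINT_LINES)
    truncated = true
  }
  let text = lines.join('\n')
  // Measure UTF-8 bytes, not JS string length, since the cap is in bytes.
  while (new TextEncoder().encode(text).length > MAX_ENTRYPOINT_BYTES) {
    text = text.slice(0, Math.floor(text.length * 0.9))
    truncated = true
  }
  return { text, truncated }
}
```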
The system prompt includes guidance to avoid wasting turns on mkdir.
export const DIR_EXISTS_GUIDANCE =
  'This directory already exists — write to it directly with the Write tool ' +
  '(do not run mkdir or check for its existence).'
Claude was burning turns running mkdir before writing. So they pre-create the directory and tell the model "it exists, just write." Every API call costs money.
Team Memory (TEAMMEM feature flag) is also implemented. The teamMemorySync/ directory includes file watching, secret scanning (secretScanner.ts), and sync logic. Shared team memory with built-in secret leak prevention.
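A secret scanner in a sync path is usually a list of named patterns run over the file before upload. The rules below are illustrative only; the actual contents of secretScanner.ts aren't shown:

```typescript
// Illustrative patterns only; the real secretScanner.ts rules aren't shown.
const SECRET_PATTERNS: [string, RegExp][] = [
  ['aws-access-key', /\bAKIA[0-9A-Z]{16}\b/],
  ['private-key', /-----BEGIN [A-Z ]*PRIVATE KEY-----/],
  ['generic-token', /\b(?:api|secret)[_-]?key\s*[:=]\s*['"][^'"]{16,}['"]/i],
]

// Sketch: return the names of matched patterns; a non-empty result
// would block the memory file from syncing to the team.
function findSecrets(content: string): string[] {
  return SECRET_PATTERNS.filter(([, re]) => re.test(content)).map(([name]) => name)
}
```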
What the Source Code Reveals
Going through 1,884 files, the takeaway is that Claude Code isn't a simple "AI coding tool." It's closer to an operating system inside your terminal.
Process management (agent system), scheduling (cron), filesystem (memory, worktrees), input handling (keybindings, mouse, voice), notifications, plugin architecture, permission system. Most of what an OS provides, reimplemented.
I get the same feeling working with UE5. "This isn't a game engine, it's an OS." Claude Code is the same. An AI operating system running in a terminal.
And the comments scattered throughout — "10.2% of fleet cache_creation tokens," "research shows ~1.2% output token reduction" — these numbers show how Anthropic optimizes their product. Not by intuition but by data. They change prompt wording for 1.2% improvement. They redesign architecture for 10.2%.
I never would have known any of this without reading the source myself.
"When the docs list 30 environment variables but the source has 195, the tool is always deeper than what's visible."