On March 31, 2026, security researcher Chaofan Shou discovered something Anthropic probably didn't want the world to see: the entire source code of Claude Code — Anthropic's official AI coding CLI — sitting in plain sight on the npm registry via a .map file bundled into the published package.
The model wasn't leaked. The weights are safe. But everything else — the agent architecture, the multi-agent orchestration, the memory system, the internal feature flags — all of it was exposed.
And what it reveals is fascinating: the real competitive advantage in AI agents isn't the engine. It's the harness.
## The Car Analogy
Think of an AI agent like a car:
- Engine = The LLM (Claude, GPT, Gemini). Raw power. Expensive to build.
- Harness = The agent framework (Claude Code, OpenClaw, Cursor). How the engine connects to the world.
- Driver's Manual = The behavioral specification. How the agent should drive.
The Claude Code leak exposed the harness — and Anthropic has been building exactly what the open-source community has been building independently.
## What's Inside
### Dream — Memory Consolidation
Claude Code has a background system called autoDream — a memory consolidation engine with four phases: Orient (read MEMORY.md), Gather (find new info), Consolidate (write/update), Prune (keep under 200 lines).
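The leaked implementation isn't reproduced here, but the four-phase loop is simple enough to sketch. A minimal, hypothetical version in Python — the `MEMORY.md` filename and the 200-line cap come from the article; the dedup logic and oldest-first pruning are illustrative assumptions, not Anthropic's actual code:

```python
from pathlib import Path

MEMORY = Path("MEMORY.md")  # long-term memory file (per the article)
MAX_LINES = 200             # prune budget (per the article)

def dream(new_facts: list[str]) -> None:
    # Orient: read the current long-term memory file.
    lines = MEMORY.read_text().splitlines() if MEMORY.exists() else []

    # Gather: collect new information worth remembering.
    gathered = [f"- {fact}" for fact in new_facts if fact.strip()]

    # Consolidate: append new entries, skipping exact duplicates.
    existing = set(lines)
    lines += [g for g in gathered if g not in existing]

    # Prune: keep the file under the line budget (drop oldest first —
    # a real system would summarize rather than simply truncate).
    if len(lines) > MAX_LINES:
        lines = lines[-MAX_LINES:]

    MEMORY.write_text("\n".join(lines) + "\n")
```

The interesting design choice is that the whole thing operates on a plain markdown file, so the memory survives outside the harness.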
This is the same MEMORY.md pattern that OpenClaw and Soul Spec have been using. The convergence isn't coincidence.
### Buddy — Agent Personality
Claude Code has a hidden Tamagotchi companion with procedurally generated stats and a "soul" — personality generated by Claude on first hatch. They're solving the same problem Soul Spec solves: how do you give an agent a consistent identity?
### Undercover Mode — Safety Through Obscurity
Anthropic employees use Claude Code on public repos, and "Undercover Mode" hides the fact that AI is involved. This is the opposite of what users actually want — the 81k Interviews study found that people prefer transparency over hidden AI identities.
### Coordinator Mode — Multi-Agent Orchestration
A full multi-agent system with parallel workers and shared scratchpads. This maps directly to what AGENTS.md defines in Soul Spec.
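"Parallel workers with a shared scratchpad" is a classic blackboard pattern, and a toy version fits in a few lines. This is a hypothetical sketch of the pattern, not the leaked code — worker logic, names, and the lock-protected dict are all illustrative assumptions:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

scratchpad: dict[str, str] = {}  # shared blackboard all workers write to
lock = threading.Lock()

def worker(task: str) -> None:
    # Each worker handles one subtask (a real agent would call the LLM here).
    result = f"done: {task}"
    with lock:
        scratchpad[task] = result  # publish the result for the coordinator

def coordinate(tasks: list[str]) -> dict[str, str]:
    # Fan tasks out to parallel workers, then collect the shared state.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(worker, tasks))
    return dict(scratchpad)
```

The point of the shared scratchpad is that workers never talk to each other directly — coordination state lives in one place the orchestrator owns.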
## Same Problems, Different Layers
| Problem | Claude Code (Internal) | Soul Spec (Open Standard) |
|---|---|---|
| Agent memory | Dream + MEMORY.md | MEMORY.md + Swarm Memory |
| Agent identity | Buddy "soul" | SOUL.md + IDENTITY.md |
| Safety rules | Undercover Mode (hidden) | safety.laws (transparent) |
| Multi-agent behavior | Coordinator Mode | AGENTS.md |
| Behavioral consistency | Hardcoded in harness | Portable config files |
Anthropic is solving these problems inside their harness. But the solutions are locked to Claude Code. Switch frameworks, lose everything.
## The Portable Layer
What happens when you switch from Claude Code to Cursor? You need a portable, harness-agnostic standard for agent identity:
```
my-agent/
├── soul.json      # Metadata
├── SOUL.md        # Personality (portable)
├── IDENTITY.md    # Role and context
├── AGENTS.md      # Behavioral rules
├── MEMORY.md      # Persistent knowledge
└── safety.laws    # Safety rules (transparent)
```
Every file is human-readable. Every file works across Claude Code, OpenClaw, Cursor, Windsurf, or any future harness.
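Because everything is plain files, a harness needs almost no code to adopt the layout. A minimal, hypothetical loader (the `load_agent` function and its return shape are my own illustration; only the file names come from the spec above):

```python
import json
from pathlib import Path

def load_agent(root: str) -> dict:
    """Assemble a harness-agnostic agent profile from plain files."""
    base = Path(root)
    # soul.json is the only structured file; the rest are read as-is.
    profile = {"meta": json.loads((base / "soul.json").read_text())}
    for name in ("SOUL.md", "IDENTITY.md", "AGENTS.md", "MEMORY.md", "safety.laws"):
        path = base / name
        if path.exists():  # every file is optional except the metadata
            profile[name] = path.read_text()
    return profile
```

Any harness that can read files can consume this — which is the whole argument for keeping the behavioral layer out of the framework.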
## The Harness Era
The model is becoming a commodity. The harness is the product. And the behavioral specification is the soul.
The code is out. The patterns are visible. The question is whether agent behavior stays locked inside proprietary harnesses, or becomes an open standard that users own.
Originally published at blog.clawsouls.ai