# Claude Code Deep Dive (Part 1): Architecture Overview and the Core Agent Loop
Claude Code's leaked source code weighs in at over 510,000 lines of TypeScript, far too large to analyze directly.
Interestingly, a community-driven Rust rewrite reduced that complexity to around 20,000 lines, while still preserving the core functionality.
Starting from this simplified version makes one thing much clearer:
What does an AI agent system actually need to work?
## Why Start with the Rust Rewrite?
On March 31, 2026, Claude Code's full source was unintentionally exposed due to an npm packaging mistake.
The package @anthropic-ai/claude-code v2.1.88 included a 59.8MB source map file, which allowed anyone to reconstruct the original TypeScript codebase.
To clarify:
- The official GitHub repo always existed
- But it only contained compiled bundles and documentation
- The readable source code was not normally accessible
### The Problem with the Original Codebase
Most analyses focused on the leaked TypeScript code:
- 510K+ lines
- QueryEngine alone: ~46K lines
- 40+ tools
- Complex plugin system
The result: too much detail, not enough clarity.
### Why the Rust Version Is More Useful
Shortly after the leak:
- Developer Sigrid Jin (instructkr community) first built a Python clean-room version
- Then pushed a Rust implementation (`claw-code`)

Project overview: claw-code
This version:
- ~20K lines of Rust
- Retains core functionality:
- Agent loop
- Tool system
- Permission control
- Prompt system
- Session management
- MCP protocol
- Sub-agents
The key benefit:
Rewriting forces simplification. What remains is what actually matters.
## Architecture Overview: A 6-Module System
The Rust implementation is structured into six modules:
```plaintext
claw-code/
├── runtime/          # Core runtime: loop, permissions, config, session, prompt
├── api/              # LLM client, SSE streaming, OAuth
├── tools/            # Tool registry and execution
├── commands/         # Slash commands (/help, /cost)
├── compat-harness/   # TS ↔ Rust compatibility layer
└── rusty-claude-cli/ # CLI, REPL, terminal rendering
```
These modules form a layered architecture:
```plaintext
CLI / REPL (User Interaction)
─────────────────────────────
MCP Protocol · Sub-agents (Extension Layer)
─────────────────────────────
API Client · Session Management (Communication Layer)
─────────────────────────────
System Prompt · Config (Context Layer)
─────────────────────────────
Agent Loop · Tools · Permissions (Core Layer)
```
### A Key Design Decision
The `runtime` module defines interfaces, not implementations:

- `ApiClient` → LLM communication
- `ToolExecutor` → tool execution
Concrete implementations live at the top (CLI layer).
This enables:
- Mock implementations for testing
- Real implementations for production
- Zero changes to core logic
Testability is built into the architecture, not added later.
## The Core: An 88-Line Agent Loop
If you only read one file, read this: `conversation.rs`.
The entire agent loop is implemented in ~88 lines.
### Runtime State: Simpler Than Expected
```plaintext
AgentRuntime {
    session            # message array (the only state)
    api_client         # LLM interface
    tool_executor      # tool execution
    permission_policy  # access control
    system_prompt
    max_iterations
    usage_tracker
}
```
The surprising part:
The only state is a message array.
No explicit state machine. No workflow graph.
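That shape can be sketched as plain Python dataclasses; the names here (`Message`, `Session`) are illustrative stand-ins, and the client/executor fields are omitted:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # "user", "assistant", or "tool"
    content: str

@dataclass
class Session:
    # The message array is the only mutable state the loop touches.
    messages: list[Message] = field(default_factory=list)

@dataclass
class AgentRuntime:
    session: Session
    system_prompt: str
    max_iterations: int = 25

runtime = AgentRuntime(session=Session(), system_prompt="You are a coding agent.")
runtime.session.messages.append(Message(role="user", content="What is 2 + 2?"))
```

Everything else on the runtime is configuration or an interface; only `session.messages` changes as the agent runs.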
### The Core Loop: `run_turn()`
Hereβs the simplified logic:
```python
def run_turn(user_input):
    session.messages.append(UserMessage(user_input))
    iterations = 0
    while True:
        iterations += 1
        if iterations > max_iterations:
            raise Error("Max iterations exceeded")
        response = api_client.stream(system_prompt, session.messages)
        assistant_message = parse_response(response)
        session.messages.append(assistant_message)
        tool_calls = extract_tool_uses(assistant_message)
        if not tool_calls:
            break
        for tool_name, tool_input in tool_calls:
            permission = authorize(tool_name, tool_input)
            if permission == Allow:
                result = tool_executor.execute(tool_name, tool_input)
                session.messages.append(ToolResult(result))
            else:
                session.messages.append(
                    ToolResult(deny_reason, is_error=True)
                )
```
---
## A Concrete Example
User asks:
> "What is 2 + 2?"
Execution flow:
| Step | Message State | Description |
| ------ | -------------------------- | ------------------------ |
| Start | `[User("2+2")]` | User input |
| API #1 | + Assistant (calls tool) | Model decides to compute |
| Tool | + ToolResult("4") | Tool executes |
| API #2 | + Assistant("Answer is 4") | Final answer |
| End | Loop exits | No more tool calls |
Termination condition:
> The model decides to stop calling tools.
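The flow in the table can be simulated end to end with a stubbed model. Everything here (`fake_model`, the message dicts, `eval` as a stand-in calculator) is illustrative rather than claw-code's actual types:

```python
def fake_model(messages):
    # First call: request the calculator tool; after a tool result, answer.
    if any(m.get("type") == "tool_result" for m in messages):
        return {"type": "text", "text": "The answer is 4"}
    return {"type": "tool_use", "name": "calc", "input": "2+2"}

def run_turn(user_input, max_iterations=5):
    messages = [{"type": "user", "text": user_input}]
    for _ in range(max_iterations):
        reply = fake_model(messages)
        messages.append(reply)
        if reply["type"] != "tool_use":
            break  # no tool call: the model decided to stop
        # Stand-in tool execution (a real runtime dispatches by name).
        result = str(eval(reply["input"]))
        messages.append({"type": "tool_result", "content": result})
    return messages

history = run_turn("What is 2 + 2?")
# history holds four messages: user, tool_use, tool_result, final text
```

Note that termination is driven entirely by the model's output: the loop only exits when a reply contains no tool call (or the iteration cap is hit).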
---
## Key Design Insight #1: Messages = State
Instead of managing state explicitly:
* The system stores everything as messages
* The full state is reconstructible from history
Benefits:
* Easy persistence (save session)
* Easy replay (debugging)
* Easy compression (context trimming)
> One append-only structure solves multiple problems.
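A minimal sketch of why an append-only message list makes all three cheap, assuming a plain JSON-serializable message shape (not claw-code's actual on-disk format):

```python
import json

# Because the session is just a message list, persistence, replay, and
# trimming are ordinary list and serialization operations.
session = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "The answer is 4"},
]

saved = json.dumps(session)       # persistence: save the session
restored = json.loads(saved)      # replay: reload it later, byte-for-byte
trimmed = restored[-1:]           # naive context trimming: keep the tail
```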
---
## Key Design Insight #2: Errors Are Feedback
When a tool is denied:
* The system does **not** crash
* It returns an error as a `ToolResult`
This is fed back to the model.
Result:
* The model adapts
* Chooses alternative strategies
> Failure becomes part of the reasoning loop.
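A sketch of the deny path, with illustrative names; the key point is that a denial appends an error-flagged result for the model to read instead of raising:

```python
def handle_tool_call(messages, allowed, name, tool_input):
    # Denied calls do not crash the loop: they become error ToolResults
    # that the model sees on its next turn and can adapt to.
    if name not in allowed:
        messages.append({
            "type": "tool_result",
            "content": f"Permission denied for tool '{name}'",
            "is_error": True,
        })
        return
    messages.append({"type": "tool_result",
                     "content": run(name, tool_input),
                     "is_error": False})

def run(name, tool_input):
    return "ok"  # stand-in for real tool execution

msgs = []
handle_tool_call(msgs, allowed={"read_file"}, name="bash", tool_input="rm -rf /")
```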
---
## Tool System: 18 Tools, One Pattern
The Rust version implements 18 built-in tools in a unified structure.
---
### Three Layers
```plaintext
1. Tool Registry   → defines schema and permissions
2. Dispatcher      → routes tool calls
3. Implementation  → executes logic
```

### Tool Specification
```json
{
  "name": "bash",
  "description": "Execute shell commands",
  "input_schema": {
    "command": "string",
    "timeout": "number?"
  },
  "required_permission": "DangerFullAccess"
}
```
This schema is passed directly to the LLM.
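As a sketch, validating tool input against this shorthand notation (`"type"` vs `"type?"` for optional) could look like the following; the notation itself is this article's simplification, not standard JSON Schema:

```python
# Map the shorthand type names to Python types.
TYPES = {"string": str, "number": (int, float)}

def validate(schema, tool_input):
    # A field spec ending in "?" is optional; otherwise it is required.
    for key, spec in schema.items():
        optional = spec.endswith("?")
        expected = TYPES[spec.rstrip("?")]
        if key not in tool_input:
            if not optional:
                return False
            continue
        if not isinstance(tool_input[key], expected):
            return False
    return True

bash_schema = {"command": "string", "timeout": "number?"}
```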
---
### Why JSON Schema Matters
* Decouples LLM from implementation
* Enables language-agnostic tools
* Standardizes interfaces
> Schema = contract
---
### Dispatcher Pattern
```python
def execute_tool(name, tool_input):
    match name:
        case "bash":
            return run_bash(tool_input)
        case "read_file":
            return run_read(tool_input)
        # ... one arm per tool
        case _:
            raise UnknownToolError(name)
```
Adding a tool:
- Define input struct
- Implement logic
- Add one dispatch line
### Sub-Agent Design
Sub-agents reuse the same runtime:
```python
runtime = AgentRuntime(
    session=new_session,
    tool_executor=restricted_tools,
    permission=high,
    prompter=None,
)
```
Key constraint:
* Sub-agents cannot spawn sub-agents
This prevents recursion loops.
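One way to sketch that guard: hand the sub-agent a tool set with the spawning tool removed (tool names here are illustrative):

```python
# The parent's tool set, including the ability to spawn sub-agents.
PARENT_TOOLS = {"bash", "read_file", "write_file", "spawn_subagent"}

def restricted_tools(parent_tools):
    # Sub-agents get everything except the spawning tool, so the
    # agent tree is at most one level deep.
    return parent_tools - {"spawn_subagent"}

sub_tools = restricted_tools(PARENT_TOOLS)
```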
---
## Permission System: Minimal but Complete
The system uses **5 permission levels**:
* ReadOnly
* WorkspaceWrite
* DangerFullAccess
* Prompt
* Allow
---
### Core Logic
```python
def authorize(current, required):
    if current >= required:
        return Allow
    elif is_one_level_gap(current, required):
        return AskUser
    else:
        return Deny
```
### Design Insight: Gradual Escalation

Instead of all-or-nothing access, it uses controlled escalation:

- Small gap → ask user
- Large gap → deny
### Sub-Agent Safety Model
Sub-agents:
- Have high permission
- But no user prompt interface
Result:
- Allowed within scope
- Automatically blocked outside
Two mechanisms combine into precise control.
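A sketch of how the two mechanisms might compose: ordered levels with a one-level escalation rule, where a missing prompter turns "ask" into "deny". The level ordering and gap rule here are my illustration, not verified against the source:

```python
from enum import IntEnum

class Level(IntEnum):
    READ_ONLY = 0
    WORKSPACE_WRITE = 1
    DANGER_FULL_ACCESS = 2

def authorize(current, required, prompter=None):
    if current >= required:
        return "allow"
    if required - current == 1:     # small gap: escalate to the user
        if prompter is None:        # sub-agents have no prompt interface
            return "deny"
        return "allow" if prompter() else "deny"
    return "deny"                   # large gap: hard deny

# A sub-agent at WORKSPACE_WRITE asking for DANGER_FULL_ACCESS is
# auto-denied because it cannot prompt the user.
decision = authorize(Level.WORKSPACE_WRITE, Level.DANGER_FULL_ACCESS, prompter=None)
```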
## Part 1 Summary

Claude Code's core reduces to three components:

- Agent Loop → execution engine
- Tool System → action layer
- Permissions → safety control
Key principles:
- Messages are the only state
- LLM decides when to stop
- Tools are schema-driven
- Errors are part of reasoning
- Permissions are incremental
## Final Thought
After stripping away 500K lines of code, what remains is surprisingly small:
A loop, a tool interface, and a permission system.
That's enough to build a functional AI agent.
But making it robust, scalable, and safe is where the real complexity begins.
## Next Part
Claude Code Deep Dive (Part 2): Context Engineering and Design Patterns
- Prompt construction
- Config merging
- Context compression
- Practical design takeaways
## References
- Claw Code (Rust rewrite): https://github.com/instructkr/claw-code
- Project site: https://claw-code.codes/
- Claude Code official repo: https://github.com/anthropics/claude-code