DEV Community

Building a 24/7 Claude Code Wrapper? Here's Why Each Subprocess Burns 50K Tokens

Jaehoon Jung on February 22, 2026

If you're building a wrapper around Claude Code — spawning claude CLI as a subprocess for automation, bots, or multi-agent orchestration — you migh...
 
Matthew Hou

50K tokens per subagent turn is painful. The root cause — each subprocess loads the full system prompt plus conversation history — is a known issue with most agent frameworks, not just Claude Code.

The fix you describe (context windowing + summarization) is the standard approach, but there's a tradeoff: if you summarize too aggressively, the agent loses important context. I've found that keeping the last 3-5 tool call/response pairs intact and summarizing everything older than that hits a good balance.
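The windowing described above can be sketched in a few lines. This is a minimal illustration, not any framework's API: `window_history` and its `summarize` hook are hypothetical names, and a tool "pair" is assumed to be two messages (the call and its result).

```python
def window_history(messages, keep_pairs=4, summarize=None):
    """Keep the most recent tool call/response pairs verbatim and
    collapse everything older into one summary message.

    messages:  list of {"role": ..., "content": ...} dicts
    summarize: any callable that turns a list of messages into a string
    """
    keep = keep_pairs * 2  # each pair = the tool call + its response
    if len(messages) <= keep:
        return messages  # nothing old enough to summarize yet
    old, recent = messages[:-keep], messages[-keep:]
    text = summarize(old) if summarize else f"[summary of {len(old)} older messages]"
    return [{"role": "system", "content": text}] + recent
```

The key knob is `keep_pairs`: too small and the agent loses the context it needs for the current task; too large and you're back to paying for the full history every turn.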

What's your summarization strategy?

 
Jaehoon Jung

Thanks! To clarify — this isn't about context windowing or summarization. The problem is repeated injection.

When you spawn a CLI subprocess, the system prompt (CLAUDE.md, plugin skills, MCP tool descriptions) gets injected on the first turn — that's fine, the agent needs it. But without isolation, that same config gets re-injected every turn because the CLI re-reads global settings each time. Turn 5 = 5x the same system prompt loaded.

The 4-layer isolation ensures the subprocess only loads what you explicitly provide via --system-prompt on the first turn, and doesn't pick up global config repeatedly. Combined with a persistent process (stream-json mode), the agent keeps its context in one continuous session — no re-injection, no summarization needed. Claude Code handles its own compaction internally.
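A rough sketch of that setup, assuming the flag names above and the stream-json input/output flags from current CLI versions (check `claude --help` for your version; the per-line message schema may also differ):

```python
import json
import subprocess

def build_argv(system_prompt: str):
    # Provide the system prompt explicitly on startup and keep one
    # process alive in stream-json mode, per the approach in the thread.
    return [
        "claude", "-p",
        "--system-prompt", system_prompt,
        "--input-format", "stream-json",
        "--output-format", "stream-json",
    ]

def user_message(text: str) -> str:
    # One JSON line per turn on stdin. The live process already holds
    # the system prompt and history, so only the new message is sent.
    return json.dumps({
        "type": "user",
        "message": {"role": "user", "content": text},
    }) + "\n"

def spawn(system_prompt: str) -> subprocess.Popen:
    # Long-lived subprocess: config is injected once at startup instead
    # of being re-read and re-injected on every turn.
    return subprocess.Popen(
        build_argv(system_prompt),
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )
```

Each subsequent turn is then just `proc.stdin.write(user_message("..."))` plus reading response lines from stdout, rather than a fresh process that reloads everything.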

 
Jaehoon Jung

One thing worth clarifying: this isn't an inherent limitation of LLM agents. It's a side effect of how the ecosystem is designed — partly by architecture, partly by incentive.

The technical part: The API is stateless — every HTTP request needs full context. But a CLI process can be stateful. In stream-json mode, the process stays alive and holds the conversation in memory. New messages go through stdin; the agent already knows its system prompt, tools, and history. No re-injection needed.

The incentive part: Providers design stateless APIs because it simplifies their infrastructure — no server-side session management. The side effect? Clients re-send system prompts + tool definitions every turn, which means more billable tokens. The "fix" they offer is prompt caching (90% discount on cache hits), but that still assumes you're re-sending everything — it just costs less. There's no push toward persistent sessions because the current design already works in their favor.
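The difference compounds fast. Here's the arithmetic with illustrative numbers only (a 50K-token system prompt and 2K tokens of new content per turn), counting what the client re-injects in each model:

```python
def stateless_tokens(prompt: int, per_turn: int, turns: int) -> int:
    # Stateless API: every request re-sends the system prompt
    # plus the entire history up to that turn.
    return sum(prompt + per_turn * t for t in range(1, turns + 1))

def persistent_tokens(prompt: int, per_turn: int, turns: int) -> int:
    # Persistent session: the system prompt is injected once,
    # then each turn adds only the new message.
    return prompt + per_turn * turns

# stateless_tokens(50_000, 2_000, 5)  -> 280_000 tokens re-injected
# persistent_tokens(50_000, 2_000, 5) ->  60_000 tokens in one session
```

Prompt caching discounts the 280K, but the persistent model removes most of it. (Numbers here are hypothetical, just to show the shape of the curve: quadratic growth vs. linear.)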

That's why most people accept "every turn re-injects everything" as a law of nature. It's not — it's just the default path of least resistance that happens to align with the provider's revenue model.

 
signalstack

This hits a real problem that doesn't get talked about enough. The MCP tool description overhead is particularly nasty in multi-agent setups — you might have 20+ tools registered for the full system, but any given subagent only needs 3-4 of them for its specific task. Loading all the descriptions anyway burns context budget before the first useful token gets written.

The git boundary trick for blocking upward CLAUDE.md traversal is clever. I've hit the same issue from a different angle — a CLAUDE.md that made sense for interactive dev work was completely wrong context for a tightly scoped automation task. Separating the working directory solves both problems at once.

One thing I'd add: if you're connecting to a real MCP server rather than using Claude's built-in tools, selectively filtering which tools you expose to each subprocess can make another significant dent in overhead. Less a CLI trick, more an MCP server design choice — expose exactly what the task needs, nothing more. Combined with your 4-layer isolation this gets you to a point where each subprocess starts genuinely lean.
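As a concrete sketch of that server-side filtering (all names hypothetical; tool dicts assumed to carry `name` and `description`, roughly the shape MCP tool listings take):

```python
def filter_tools(all_tools, allowed):
    """Expose only the tools a given subagent actually needs.

    Returns (kept_tools, approx_tokens_saved). The ~4 chars/token
    figure is a common rough heuristic, not an exact count.
    """
    allowed = set(allowed)
    kept = [t for t in all_tools if t["name"] in allowed]
    dropped = [t for t in all_tools if t["name"] not in allowed]
    saved = sum(len(t["description"]) for t in dropped) // 4
    return kept, saved
```

With 20+ registered tools and a subagent that needs 3-4, the `dropped` descriptions are pure overhead the agent never had to pay for.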

 
Jaehoon Jung

Great point — this is the next layer of optimization I haven't tackled yet.

Right now MAMA uses a tier-based permission system: Tier 1 agents get full tool access, Tier 2/3 are restricted to read-only tools. But the restriction is soft — enforced via prompt instructions, while the full tool descriptions from each MCP server still get injected into every agent's system prompt. So even an agent that only needs search and save from the MAMA MCP server still pays the token cost of every tool it exposes.

Your suggestion to filter at the MCP server level makes sense — if a single MCP server exposes 20 tools, each subagent should only see the 3-4 it actually needs. I'm planning to add an allowed_tools field per agent config so the system prompt only includes relevant tool descriptions. This would:

  1. Cut token waste further (tool descriptions alone can be thousands of tokens)
  2. Reduce hallucinated tool calls — if the model doesn't see a tool definition, it won't try to use it
  3. Complement the 4-layer isolation from this article — that prevents repeated injection, while this prevents unnecessary injection

Clean distinction: "don't re-inject what's already loaded" vs. "don't inject what's not needed in the first place." Thanks for surfacing this.
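A sketch of what that per-agent `allowed_tools` filtering could look like at system-prompt build time (the allow-list structure and function names are hypothetical, not MAMA's actual config):

```python
def render_tool_section(agent, server_tools, agent_allowlist):
    """Build the tool-description section of an agent's system prompt
    from its allow-list. Hard filter: a tool absent from the list never
    reaches the prompt, so the model can't hallucinate a call to it.
    """
    allowed = set(agent_allowlist.get(agent, []))
    lines = [
        f"- {tool['name']}: {tool['description']}"
        for tool in server_tools
        if tool["name"] in allowed
    ]
    return "\n".join(lines)
```

Unlike a prompt-level "please only use these tools" instruction, this makes the restriction structural: the unlisted tool definitions simply don't exist from the agent's point of view.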