
Sandy Shen
Reducing bootstrap memory cost in LLM agents

LLM agents are stateless by default. To get continuity, the standard approach is to load everything into the system prompt: logs, past decisions, project state.

It works, but it is wasteful. We were spending 3,500+ tokens on memory before the agent even started doing anything useful. If you load nothing, you get the opposite problem. The agent forgets preferences and repeats the same mistakes every session.

We stopped trying to tune the context window and changed how memory is handled.

Instead of loading everything at once, we split memory into three tiers:

- **Hot:** a small set of curated facts that are always loaded, around 625 tokens.
- **Warm:** recent logs from the last 7 days, only pulled in when needed.
- **Cold:** older history stored externally and not loaded by default.

Most of the time, the agent only needs one or two specific pieces of context.
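To make the tiering concrete, here is a minimal sketch of the idea in Python. The class and method names are my own for illustration, not the actual OpenClaw implementation: only the hot tier goes into the bootstrap prompt, the warm tier is filtered by a 7-day window on demand, and everything older lands in cold storage.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class TieredMemory:
    # Hypothetical structure, not the actual OpenClaw internals.
    hot: list[str] = field(default_factory=list)   # curated facts, always loaded
    warm: list[tuple[datetime, str]] = field(default_factory=list)  # recent logs
    cold: list[str] = field(default_factory=list)  # archived history, external in practice

    def bootstrap(self) -> str:
        # Only the hot tier is injected into the system prompt at startup.
        return "\n".join(self.hot)

    def recall_recent(self, days: int = 7) -> list[str]:
        # Warm tier: pulled in only when the agent asks for recent context.
        cutoff = datetime.now() - timedelta(days=days)
        return [entry for ts, entry in self.warm if ts >= cutoff]

    def remember(self, entry: str, ts: datetime) -> None:
        # Anything older than the warm window goes straight to cold storage.
        if ts < datetime.now() - timedelta(days=7):
            self.cold.append(entry)
        else:
            self.warm.append((ts, entry))
```

The bootstrap cost is then bounded by the size of the hot tier alone, regardless of how much history accumulates in warm and cold.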

That simple change made a big difference.
In our setup, bootstrap memory cost dropped from around 3,500 tokens to about 125 tokens, roughly a 96 percent reduction.

We are preparing the open-source release of the OpenClaw Auto Memory Manager. Full write-up here:
https://zflow.ai/zflow_ai_insights_article_4.html
