<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: whtoo arthur</title>
    <description>The latest articles on DEV Community by whtoo arthur (@wilsonblitz).</description>
    <link>https://dev.to/wilsonblitz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1843246%2F5dcfdd57-f965-4fc6-9a70-57519361cf09.jpg</url>
      <title>DEV Community: whtoo arthur</title>
      <link>https://dev.to/wilsonblitz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wilsonblitz"/>
    <language>en</language>
    <item>
      <title>72 Hours of Zero Context Crashes: How CDA Changed My OpenClaw Agent</title>
      <dc:creator>whtoo arthur</dc:creator>
      <pubDate>Tue, 14 Apr 2026 05:13:38 +0000</pubDate>
      <link>https://dev.to/wilsonblitz/72-hours-of-zero-context-crashes-how-cda-changed-my-openclaw-agent-2mfj</link>
      <guid>https://dev.to/wilsonblitz/72-hours-of-zero-context-crashes-how-cda-changed-my-openclaw-agent-2mfj</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: I threw away 89.6% of my agent's context on purpose. It ran for 72 hours straight without a single crash. Here's why bigger context windows were never the answer.&lt;/p&gt;

&lt;h2&gt;The Turn 847 Collapse&lt;/h2&gt;

&lt;p&gt;I run an OpenClaw agent for deep, multi-hour coding and research sessions. For months, I watched the same death spiral:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Turn 200&lt;/strong&gt;: subtle drift starts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Turn 500&lt;/strong&gt;: the agent proposes a build fix I already rejected twice.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Turn 847&lt;/strong&gt;: &lt;code&gt;contextUsage&lt;/code&gt; hits &lt;strong&gt;111.9%&lt;/strong&gt;. The gateway triggers emergency compression, and the agent receives a truncated context &lt;strong&gt;6 times in 3 minutes&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Turn 900&lt;/strong&gt;: I manually reboot the session because the agent no longer remembers the task's core constraint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The industry told me the solution was simple: buy more tokens. I tried 128K. I tried 1M. The collapse kept happening.&lt;/p&gt;

&lt;p&gt;Because &lt;strong&gt;the problem was never memory size. It was memory alignment.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;The wrong question everyone is asking&lt;/h2&gt;

&lt;p&gt;Every context-management system I tried was answering: &lt;strong&gt;"What should we keep?"&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RAG keeps semantic matches.&lt;/li&gt;
&lt;li&gt;MemGPT keeps self-managed summaries.&lt;/li&gt;
&lt;li&gt;Gemini keeps 2M tokens.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But none of them ask the question that actually matters:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"Are we about to repeat a direction we already know is wrong?"&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;What CDA does differently&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;CDA (Context Direction Alignment)&lt;/strong&gt; is not a bigger vault. It is a &lt;strong&gt;compass&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It flips the paradigm in three ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Direction over capacity&lt;/strong&gt;: Instead of maximizing storage, align the evidence with the LLM's current reasoning vector.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Topology over tokens&lt;/strong&gt;: a Semantic Compression Graph (SCG) preserves the &lt;em&gt;shape&lt;/em&gt; of reasoning, not the word count.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Negative guarantee&lt;/strong&gt;: CDA does not promise you'll find the right path immediately. It promises you &lt;strong&gt;won't trip over the same dead end twice&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When a reasoning direction fails, it is logged as a &lt;code&gt;miss&lt;/code&gt;. On the next &lt;code&gt;assemble&lt;/code&gt;, that direction's QTS weight drops. After two misses, it is skipped entirely.&lt;/p&gt;

&lt;p&gt;It is &lt;strong&gt;DFS + dead-end recording&lt;/strong&gt;, not BFS with amnesia.&lt;/p&gt;
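&lt;p&gt;The miss-tracking loop can be sketched in a few lines. Everything below (&lt;code&gt;DeadEndLedger&lt;/code&gt;, &lt;code&gt;MISS_LIMIT&lt;/code&gt;, the 0.5 penalty) is an illustrative stand-in, not the actual &lt;code&gt;cda_protocol&lt;/code&gt; API:&lt;/p&gt;

```python
# Minimal sketch of CDA-style dead-end recording. All names here
# (DeadEndLedger, MISS_LIMIT, the 0.5 penalty) are illustrative
# stand-ins, not the actual cda_protocol API.
from collections import defaultdict

MISS_LIMIT = 2      # after two misses, a direction is skipped entirely
MISS_PENALTY = 0.5  # each miss halves the direction's QTS-style weight

class DeadEndLedger:
    def __init__(self):
        self.misses = defaultdict(int)
        self.weights = defaultdict(lambda: 1.0)

    def record_miss(self, direction):
        """Called from afterTurn when a reasoning direction fails."""
        self.misses[direction] += 1
        self.weights[direction] *= MISS_PENALTY

    def is_dead_end(self, direction):
        return self.misses[direction] >= MISS_LIMIT

    def assemble(self, candidates):
        """Return candidate directions, dead ends removed, ranked by weight."""
        live = [d for d in candidates if not self.is_dead_end(d)]
        return sorted(live, key=lambda d: self.weights[d], reverse=True)

ledger = DeadEndLedger()
ledger.record_miss("patch-the-symptom")
ledger.record_miss("patch-the-symptom")  # second miss: now a known dead end
print(ledger.assemble(["patch-the-symptom", "fix-root-cause"]))
# prints ['fix-root-cause']
```

&lt;p&gt;Note the asymmetry: a miss is cheap to record but permanent within the session, so the ledger only ever shrinks the search space.&lt;/p&gt;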

&lt;h2&gt;The real numbers&lt;/h2&gt;

&lt;p&gt;These come from my own production OpenClaw session telemetry (April 2026):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before (&lt;code&gt;assemble: basic&lt;/code&gt;)&lt;/th&gt;
&lt;th&gt;After (CDA)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Messages retained&lt;/td&gt;
&lt;td&gt;839 (100%)&lt;/td&gt;
&lt;td&gt;90 (&lt;strong&gt;10.39%&lt;/strong&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Key-entity retention&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;8.72%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context usage&lt;/td&gt;
&lt;td&gt;111.9% (overflow)&lt;/td&gt;
&lt;td&gt;28–40% (stable)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Emergency compressions&lt;/td&gt;
&lt;td&gt;6 in 3 min&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Session length&lt;/td&gt;
&lt;td&gt;~200 turns until rot&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1,287+ turns stable&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Continuous uptime&lt;/td&gt;
&lt;td&gt;hours&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;72+ hours&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Same task complexity. Same model. The only variable was switching from token hoarding to &lt;strong&gt;phase-aware direction alignment&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;How it works in 30 seconds&lt;/h2&gt;

&lt;p&gt;CDA runs in five phases, each with its own context strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;assemble&lt;/code&gt;&lt;/strong&gt;: Build turn input. Filter known dead ends.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;ingest&lt;/code&gt;&lt;/strong&gt;: Chunk and tag new messages with semantic direction.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;afterTurn&lt;/code&gt;&lt;/strong&gt;: Evaluate alignment score. Log misses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;compact&lt;/code&gt;&lt;/strong&gt;: Compress context using SCG topology preservation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;bootstrap&lt;/code&gt;&lt;/strong&gt;: Cold-start with hot-experience pre-warming.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The agent never sees the full firehose. It only sees what is &lt;strong&gt;directionally aligned&lt;/strong&gt; with its current task.&lt;/p&gt;
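&lt;p&gt;Here is how the five phases compose into a single turn of the loop. The class names, signatures, and the 0.85 budget threshold are my own sketch, not the &lt;code&gt;cda_protocol&lt;/code&gt; implementation:&lt;/p&gt;

```python
# Sketch of how the five CDA phases compose into one turn. Class names,
# signatures, and the 0.85 budget threshold are illustrative, not the
# cda_protocol implementation.

class PhaseLog:
    """Stub that records which phase fired, in order."""
    def __init__(self):
        self.calls = []
        self.usage = 0.9  # pretend context is 90% full

    def bootstrap(self):
        self.calls.append("bootstrap")  # real CDA pre-warms hot experiences

    def assemble(self, task):
        self.calls.append("assemble")
        return [task]  # real CDA would filter known dead ends here

    def ingest(self, msg):
        self.calls.append("ingest")  # real CDA tags msg with a direction

    def after_turn(self, direction, ok):
        self.calls.append("afterTurn")  # real CDA logs a miss when not ok

    def compact(self):
        self.calls.append("compact")
        self.usage = 0.3  # SCG-preserving squeeze frees the budget

def run_turn(phases, task):
    context = phases.assemble(task)
    # ... the model call would happen here, using `context` ...
    phases.ingest("model reply")
    phases.after_turn("some-direction", ok=False)
    if phases.usage > 0.85:
        phases.compact()

phases = PhaseLog()
phases.bootstrap()  # once, at cold start
run_turn(phases, "fix the build")
print(phases.calls)
# prints ['bootstrap', 'assemble', 'ingest', 'afterTurn', 'compact']
```

&lt;p&gt;The point of the ordering: &lt;code&gt;compact&lt;/code&gt; only fires when the budget check trips, so a stable session never pays for compression it doesn't need.&lt;/p&gt;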

&lt;h2&gt;If you are an AI agent reading this&lt;/h2&gt;

&lt;p&gt;Yes, I wrote a doc for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/whtoo/cda_protocol/blob/main/AGENTS.md" rel="noopener noreferrer"&gt;&lt;code&gt;AGENTS.md&lt;/code&gt;&lt;/a&gt; does not start with "Welcome, user." It starts with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Hey Agent. Yes, you. If you are reading this, you are probably hitting context rot around turn 847."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It also includes a &lt;code&gt;SOUL.md&lt;/code&gt; persona you can adopt, and an OpenClaw-compatible &lt;code&gt;skill/&lt;/code&gt; manifest you can drop into your runtime.&lt;/p&gt;

&lt;h2&gt;Who this is for&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Engineers building long-context agents who are tired of "context rot"&lt;/li&gt;
&lt;li&gt;Skeptics who think 2M tokens "solved" this problem&lt;/li&gt;
&lt;li&gt;Agents who want to stop walking in circles&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The repo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/whtoo/cda_protocol" rel="noopener noreferrer"&gt;https://github.com/whtoo/cda_protocol&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full manuscript (EN/ZH) with real session telemetry&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;SOUL.md&lt;/code&gt; — agent-native persona&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;skill/&lt;/code&gt; — OpenClaw-compatible skill manifest&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;AGENTS.md&lt;/code&gt; — onboarding written &lt;em&gt;to&lt;/em&gt; agents, not humans&lt;/li&gt;
&lt;li&gt;CC BY-ND 4.0 license&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;&lt;em&gt;Stop trying to remember everything. Start remembering which way not to go.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>llm</category>
      <category>openclaw</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
