<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: diggidydale</title>
    <description>The latest articles on DEV Community by diggidydale (@diggidydale).</description>
    <link>https://dev.to/diggidydale</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1405567%2F5a7d492f-f3d6-4a88-811c-e25dde4ebe56.png</url>
      <title>DEV Community: diggidydale</title>
      <link>https://dev.to/diggidydale</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/diggidydale"/>
    <language>en</language>
    <item>
      <title>Escaping the Dumbzone, Part 4: Advanced Patterns</title>
      <dc:creator>diggidydale</dc:creator>
      <pubDate>Thu, 05 Feb 2026 14:17:42 +0000</pubDate>
      <link>https://dev.to/diggidydale/escaping-the-dumbzone-part-4-advanced-patterns-151m</link>
      <guid>https://dev.to/diggidydale/escaping-the-dumbzone-part-4-advanced-patterns-151m</guid>
      <description>&lt;p&gt;&lt;em&gt;Part 4 of 4 in the "Escaping the Dumbzone" series&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;We've covered the fundamentals: understanding the Dumbzone (&lt;a href="https://dev.to/diggidydale/escaping-the-dumbzone-part-1-why-your-ai-gets-stupider-the-more-you-talk-to-it-4d8k"&gt;Part 1&lt;/a&gt;), isolating work with subagents (&lt;a href="https://dev.to/diggidydale/escaping-the-dumbzone-part-2-subagents-divide-and-conquer-1p29"&gt;Part 2&lt;/a&gt;), and managing knowledge across sessions (&lt;a href="https://dev.to/diggidydale/escaping-the-dumbzone-part-3-knowledge-management-configuration-1n6p"&gt;Part 3&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Now let's go deeper. This part covers techniques for power users: controlling what flows into your context, running autonomous loops, and architectural patterns for production AI systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Backpressure: Controlling What Flows In
&lt;/h2&gt;

&lt;p&gt;Here's something that sounds boring but matters a lot: &lt;strong&gt;most tool output is rubbish&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A typical test run dumps 200+ lines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;PASS src/utils/helper.test.ts
PASS src/utils/format.test.ts
PASS src/utils/validate.test.ts
PASS src/utils/parse.test.ts
PASS src/utils/transform.test.ts
... (195 more lines)
PASS src/components/Button.test.ts

Test Suites: 47 passed, 47 total
Tests:       284 passed, 284 total
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's 2-3% of your context for information you could convey in one character: ✓&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backpressure&lt;/strong&gt; means controlling what flows into your context from tool outputs. HumanLayer's philosophy:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Deterministic is better than non-deterministic. If you already know what matters, don't leave it to a model to churn through 1000s of junk tokens to decide."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The run_silent() Pattern
&lt;/h2&gt;

&lt;p&gt;Wrap commands to filter their output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;run_silent&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nv"&gt;output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; 2&amp;gt;&amp;amp;1&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;$?&lt;/span&gt; &lt;span class="nt"&gt;-eq&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"✓"&lt;/span&gt;  &lt;span class="c"&gt;# Success: one character&lt;/span&gt;
  &lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$output&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;  &lt;span class="c"&gt;# Failure: full output for debugging&lt;/span&gt;
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Success&lt;/strong&gt;: Just ✓ — Claude knows the tests passed; that's all it needs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Failure&lt;/strong&gt;: Full output — Claude needs details to debug&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33sk1by88djdpgfo7a4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F33sk1by88djdpgfo7a4h.png" alt="Backpressure Filtering" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  More Backpressure Techniques
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use fail-fast modes:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pytest &lt;span class="nt"&gt;-x&lt;/span&gt;        &lt;span class="c"&gt;# Stop on first failure&lt;/span&gt;
jest &lt;span class="nt"&gt;--bail&lt;/span&gt;      &lt;span class="c"&gt;# Same for Jest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Surface one failure at a time. Fix it, run again. No need to load 47 failures into context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filter stack traces:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Strip timing info, generic frames&lt;/span&gt;
your_command 2&amp;gt;&amp;amp;1 | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"^&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="s2"&gt;*at "&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"ms$"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Claude Code hooks:&lt;/strong&gt;&lt;br&gt;
You can automate this with pre/post command hooks. Filter output before it ever hits context.&lt;/p&gt;
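&lt;p&gt;As a hedged sketch (the function name and file paths are illustrative, and the hook wiring in settings.json is not shown), here's the kind of filter such a hook could apply so only failures and the summary survive:&lt;/p&gt;

```shell
# Illustrative filter a post-command hook could run over captured tool
# output: drop the per-file PASS noise, keep failures and the summary.
filter_test_output() {
  grep -vE '^PASS ' "$1"
}

# Demo with a small sample file (path is arbitrary):
printf 'PASS a.test.ts\nFAIL b.test.ts\nTests: 1 failed, 2 total\n' > /tmp/sample_out.txt
filter_test_output /tmp/sample_out.txt
rm -f /tmp/sample_out.txt
```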
&lt;h3&gt;
  
  
  The ROI
&lt;/h3&gt;

&lt;p&gt;HumanLayer's take: human time managing agents in bloated contexts costs 10x more than setting up backpressure upfront.&lt;/p&gt;

&lt;p&gt;Spend 30 minutes on wrapper scripts. Save hours of context management.&lt;/p&gt;
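&lt;p&gt;A sketch of what such a wrapper script might look like, building on the &lt;code&gt;run_silent&lt;/code&gt; pattern above (the gate steps shown are stand-ins; real usage would call your project's commands):&lt;/p&gt;

```shell
# check.sh: a project gate built on run_silent. Each passing step costs
# one short line of context; the first failure prints its full output.
run_silent() {
  if output=$("$@" 2>&1); then
    echo "ok: $*"
  else
    echo "$output"
    return 1
  fi
}

# Real usage would be e.g. `run_silent npm test`; these are stand-ins.
run_silent true
run_silent echo building
```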


&lt;h2&gt;
  
  
  The Ralph Loop
&lt;/h2&gt;

&lt;p&gt;Here's a completely different approach to the Dumbzone problem: &lt;strong&gt;what if you just didn't manage context at all?&lt;/strong&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Meet Ralph
&lt;/h3&gt;

&lt;p&gt;Created by Geoff Huntley in 2025, the "Ralph Wiggum Technique" is beautifully simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;while&lt;/span&gt; :&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;PROMPT.md | claude-code &lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Run Claude in a loop. Each iteration gets a fresh context. Progress tracked through git.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9ehwwo0lz0otqh326lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff9ehwwo0lz0otqh326lb.png" alt="Ralph Loop Flow" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Is It Called Ralph Wiggum?
&lt;/h3&gt;

&lt;p&gt;The Simpsons' Ralph Wiggum is perpetually confused: always making mistakes, never stopping. That's the vibe.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"The technique is deterministically bad in an undeterministic world."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI agents are probabilistic. Given the same input, they won't always make the same decision. They hallucinate. They take wrong turns.&lt;/p&gt;

&lt;p&gt;But in a loop, failures become predictable. You know the agent will fail sometimes. Fine. The loop catches it, tries again.&lt;/p&gt;

&lt;p&gt;It's better to fail predictably and recover automatically than to succeed unpredictably and need manual intervention.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Ralph Preserves Context
&lt;/h3&gt;

&lt;p&gt;Instead of cramming knowledge into context:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Git history&lt;/strong&gt; — Previous changes visible via &lt;code&gt;git diff&lt;/code&gt; and &lt;code&gt;git log&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;File state&lt;/strong&gt; — The agent reads actual files, not stale conversation history&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PROMPT.md&lt;/strong&gt; — Clear specifications persist across iterations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fresh context&lt;/strong&gt; — Each iteration starts clean, zero Dumbzone risk&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The filesystem is the memory. Git is the log. Each agent instance is stateless.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to Use Ralph
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Good for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clear objectives with defined success criteria&lt;/li&gt;
&lt;li&gt;Iterative refinements (upgrading deps, refactoring patterns)&lt;/li&gt;
&lt;li&gt;Tasks where "keep going until done" makes sense&lt;/li&gt;
&lt;li&gt;Long-running autonomous work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Not good for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exploratory work without clear goals&lt;/li&gt;
&lt;li&gt;Tasks requiring reasoning chains across iterations&lt;/li&gt;
&lt;li&gt;When you're watching the API bill nervously&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Results Are Wild
&lt;/h3&gt;

&lt;p&gt;Real examples from the community:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$50k contract completed for $297 in API costs&lt;/li&gt;
&lt;li&gt;14-hour autonomous session upgrading React 16 → 19&lt;/li&gt;
&lt;li&gt;Complete programming language (Cursed Lang) generated overnight&lt;/li&gt;
&lt;li&gt;Multiple repos shipped while developers slept&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Overbaking Problem
&lt;/h3&gt;

&lt;p&gt;Leave Ralph running too long and weird things happen. One user reported their agent spontaneously added cryptographic features nobody asked for.&lt;/p&gt;

&lt;p&gt;This isn't a bug; it's emergent behaviour from extended iteration. Set clear stopping conditions.&lt;/p&gt;
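&lt;p&gt;One way to do that (a hedged sketch: &lt;code&gt;agent_step&lt;/code&gt; stands in for the real &lt;code&gt;cat PROMPT.md | claude-code&lt;/code&gt; invocation, and the DONE-file convention is an assumption your prompt would have to spell out):&lt;/p&gt;

```shell
# Bound the Ralph loop two ways: a hard iteration cap, and a DONE marker
# the prompt tells the agent to create once success criteria are met.
steps=0
agent_step() {              # stand-in for: cat PROMPT.md | claude-code
  steps=$((steps + 1))
  if [ "$steps" -ge 3 ]; then
    touch DONE              # simulate the agent declaring itself finished
  fi
}

for i in $(seq 1 50); do    # hard cap keeps the API bill bounded
  agent_step
  if [ -f DONE ]; then
    break                   # agent signalled completion
  fi
done
echo "stopped after $steps iterations"
rm -f DONE
```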

&lt;h3&gt;
  
  
  Cost Awareness
&lt;/h3&gt;

&lt;p&gt;A 50-iteration loop can run $50-100, depending on how much context each iteration consumes. Start small until you understand the economics.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 12 Factor Agents Framework
&lt;/h2&gt;

&lt;p&gt;HumanLayer wrote a manifesto for production AI systems. Three factors matter most for context management:&lt;/p&gt;

&lt;h3&gt;
  
  
  Factor 3: Own Your Context Window
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"Control how information is structured and presented. Don't let frameworks abstract this away."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Most frameworks hide context management. That's fine for demos, dangerous for production. You need to know:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's going into context&lt;/li&gt;
&lt;li&gt;How it's structured&lt;/li&gt;
&lt;li&gt;When it gets evicted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your framework doesn't give you this visibility, find a different framework.&lt;/p&gt;

&lt;h3&gt;
  
  
  Factor 10: Small, Focused Agents
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"Keep agents to 3-20 steps handling specific domains rather than monolithic systems."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Big agents with long contexts "spin out trying the same broken approach over and over." The solution isn't a bigger context window, it's smaller agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wnwtztxosw5mwmbk0nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wnwtztxosw5mwmbk0nt.png" alt="Micro-Agents in DAG" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Embed micro-agents in a DAG (directed acyclic graph):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orchestrator decides what needs doing&lt;/li&gt;
&lt;li&gt;Specialised agents handle focused tasks&lt;/li&gt;
&lt;li&gt;Results flow between agents, not full contexts&lt;/li&gt;
&lt;li&gt;Each agent stays in its smart zone&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Factor 12: Stateless Reducer Pattern
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;"Design agents as functions transforming state deterministically. Don't rely on conversation history for critical state."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Conversation history is unreliable. The middle gets ignored. Context rots. Tokens overflow.&lt;/p&gt;

&lt;p&gt;Instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External state store (database, files, git)&lt;/li&gt;
&lt;li&gt;Agent reads state, performs action, writes state&lt;/li&gt;
&lt;li&gt;No dependence on conversation memory for anything critical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is why Ralph works. The agent is stateless. Git is the state.&lt;/p&gt;
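&lt;p&gt;The reducer shape is simple enough to sketch in a few lines (the file name is illustrative; in practice the transform step is an agent invocation and each write is a git commit):&lt;/p&gt;

```shell
# Stateless reducer: read state from disk, apply one deterministic
# transition, write it back. Nothing lives in conversation memory.
echo 0 > state.txt
reduce() {
  n=$(cat state.txt)          # read state
  echo $((n + 1)) > state.txt # write the next state
}
reduce
reduce
reduce
cat state.txt   # prints 3
rm -f state.txt
```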




&lt;h2&gt;
  
  
  Putting It All Together
&lt;/h2&gt;

&lt;p&gt;Here's how these patterns combine:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tbh0o9rqx2o3u9sjajf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tbh0o9rqx2o3u9sjajf.png" alt="Complete Context Engineering Stack" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Orchestrator&lt;/strong&gt; stays in the smart zone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subagents&lt;/strong&gt; handle exploration in isolation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backpressure&lt;/strong&gt; filters garbage before it enters context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge files&lt;/strong&gt; persist learnings across sessions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External state&lt;/strong&gt; survives everything&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You're not fighting the Dumbzone. You're engineering around it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Practical Cheatsheet
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Starting a Session
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Check the context meter and do a fresh start if above 50%&lt;/li&gt;
&lt;li&gt;Ensure CLAUDE.md is current&lt;/li&gt;
&lt;li&gt;Review memory bank files&lt;/li&gt;
&lt;li&gt;Decide: subagents for research? Ralph for iteration?&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  During Work
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Explore agent for investigation&lt;/li&gt;
&lt;li&gt;Filter verbose tool outputs&lt;/li&gt;
&lt;li&gt;Crystallise insights to files&lt;/li&gt;
&lt;li&gt;Compact at 70%&lt;/li&gt;
&lt;li&gt;Clear when switching tasks&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Ending a Session
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Update CLAUDE.md with new learnings&lt;/li&gt;
&lt;li&gt;Update memory bank files&lt;/li&gt;
&lt;li&gt;Commit everything&lt;/li&gt;
&lt;li&gt;Consider: what will future you need?&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  TL;DR for the Series
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4iorvrohm98y3rqnaf9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4iorvrohm98y3rqnaf9.png" alt="The Complete Framework" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The Dumbzone is real&lt;/strong&gt; — Stay under 75k tokens, watch for symptoms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subagents isolate exploration&lt;/strong&gt; — 30 tokens of insight, not 30k of investigation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crystallise knowledge&lt;/strong&gt; — Memory bank + CLAUDE.md for persistence&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Control backpressure&lt;/strong&gt; — Filter verbose output before it hits context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consider Ralph&lt;/strong&gt; — Sometimes iteration beats context management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small agents beat big contexts&lt;/strong&gt; — 3-20 step focused agents in a DAG&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Own your context window&lt;/strong&gt; — It's an engineering discipline&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;The context window isn't just a technical constraint. It's a forcing function for clarity.&lt;/p&gt;

&lt;p&gt;When you can't dump everything in, you have to decide what matters. When you have to crystallise learnings, you actually think about what you learned. When you design small focused agents, you clarify what each piece should do.&lt;/p&gt;

&lt;p&gt;The Dumbzone exists. But escaping it makes you a better engineer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Complete Reading List
&lt;/h2&gt;

&lt;h3&gt;
  
  
  HumanLayer (Start Here)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md" rel="noopener noreferrer"&gt;Writing a good CLAUDE.md&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.humanlayer.dev/blog/12-factor-agents" rel="noopener noreferrer"&gt;12 Factor Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.humanlayer.dev/blog/context-efficient-backpressure" rel="noopener noreferrer"&gt;Context-Efficient Backpressure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.humanlayer.dev/blog/brief-history-of-ralph" rel="noopener noreferrer"&gt;A Brief History of Ralph&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Official Docs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.anthropic.com/engineering/claude-code-best-practices" rel="noopener noreferrer"&gt;Claude Code Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/sub-agents" rel="noopener noreferrer"&gt;Create Custom Subagents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://code.claude.com/docs/en/memory" rel="noopener noreferrer"&gt;Manage Claude's Memory&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Research
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://arxiv.org/abs/2307.03172" rel="noopener noreferrer"&gt;Lost in the Middle&lt;/a&gt; — Stanford paper&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.jetbrains.com/research/2025/12/efficient-context-management/" rel="noopener noreferrer"&gt;JetBrains Context Management&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Community
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://cline.bot/blog/memory-bank-how-to-make-cline-an-ai-agent-that-never-forgets" rel="noopener noreferrer"&gt;Cline Memory Bank&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ghuntley.com/ralph/" rel="noopener noreferrer"&gt;Ralph by Geoff Huntley&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/VoltAgent/awesome-claude-code-subagents" rel="noopener noreferrer"&gt;100+ Subagent Examples&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;All the diagrams in this series were created with Claude Code, using the patterns it describes. We stayed out of the Dumbzone the whole way through. Mostly.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>softwareengineering</category>
      <category>development</category>
    </item>
    <item>
      <title>Escaping the Dumbzone, Part 3: Knowledge Management &amp; Configuration</title>
      <dc:creator>diggidydale</dc:creator>
      <pubDate>Fri, 30 Jan 2026 10:18:49 +0000</pubDate>
      <link>https://dev.to/diggidydale/escaping-the-dumbzone-part-3-knowledge-management-configuration-1n6p</link>
      <guid>https://dev.to/diggidydale/escaping-the-dumbzone-part-3-knowledge-management-configuration-1n6p</guid>
      <description>&lt;p&gt;&lt;em&gt;Part 3 of 4 in the "Escaping the Dumbzone" series&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;In &lt;a href="https://dev.to/diggidydale/escaping-the-dumbzone-part-2-subagents-divide-and-conquer-1p29"&gt;Part 2&lt;/a&gt;, we covered subagents—how to isolate exploration and keep your main context clean.&lt;/p&gt;

&lt;p&gt;But here's another problem: you learn something useful during a session, and it stays trapped in that session. Next time you start Claude? Amnesia. You're explaining the same things again.&lt;/p&gt;

&lt;p&gt;This part is about &lt;strong&gt;making knowledge stick&lt;/strong&gt; — across sessions, across tasks, across your whole team.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Crystallisation Problem
&lt;/h2&gt;

&lt;p&gt;Every session, you discover things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Oh, the auth module is in an unexpected place"&lt;/li&gt;
&lt;li&gt;"Tests need this specific env var set"&lt;/li&gt;
&lt;li&gt;"Don't touch that legacy file, it breaks everything"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These insights cost you context to discover. Then the session ends. Gone.&lt;/p&gt;

&lt;p&gt;Next session, you (or Claude) rediscover them. Burning context again. It's like Groundhog Day but with tokens.&lt;/p&gt;

&lt;p&gt;The fix is &lt;strong&gt;crystallisation&lt;/strong&gt; — taking ephemeral learnings and storing them somewhere durable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos4rxeaey4265dupm60b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fos4rxeaey4265dupm60b.png" alt="Knowledge crystallisation" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What to Call These Learnings?
&lt;/h2&gt;

&lt;p&gt;You might call them "thoughts," but there are better names:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Vibe&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Insights&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Wisdom extracted from exploration&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Learnings&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Knowledge gained through work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory Crystals&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Compressed, durable knowledge structures&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Distillations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Essence from verbose exploration&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I'd go with &lt;strong&gt;"Insights"&lt;/strong&gt; or &lt;strong&gt;"Learnings"&lt;/strong&gt; — clear, unpretentious, and they convey that this is extracted knowledge, not raw data.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Memory Bank Pattern
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/cline/cline" rel="noopener noreferrer"&gt;Cline&lt;/a&gt; popularised a structured approach with dedicated files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;memory-bank/
├── projectbrief.md      # What is this project?
├── productContext.md    # Business/user perspective
├── systemPatterns.md    # Architecture decisions
├── techContext.md       # Dev environment &amp;amp; stack
├── activeContext.md     # Current focus area
└── progress.md          # Status tracking
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82pgb11bgn92rsbeocq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82pgb11bgn92rsbeocq4.png" alt="Memory Bank Structure" width="800" height="690"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workflow is dead simple:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start session&lt;/strong&gt; → Read memory bank files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do work&lt;/strong&gt; → Context accumulates as normal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End session&lt;/strong&gt; → Update relevant memory bank files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your AI starts each session with project knowledge instead of a blank slate. The upfront token cost is small (a few hundred tokens) compared to re-discovering everything.&lt;/p&gt;
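&lt;p&gt;Bootstrapping the skeleton takes a few lines of shell (a hedged sketch; the file names follow the Cline convention above):&lt;/p&gt;

```shell
# Create the memory-bank skeleton, skipping any file that already exists.
mkdir -p memory-bank
for f in projectbrief productContext systemPatterns techContext activeContext progress; do
  if [ ! -f "memory-bank/$f.md" ]; then
    printf '# %s\n' "$f" > "memory-bank/$f.md"
  fi
done
ls memory-bank
```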




&lt;h2&gt;
  
  
  CLAUDE.md: Your Project's Brain
&lt;/h2&gt;

&lt;p&gt;Claude Code has a built-in mechanism for this: &lt;code&gt;CLAUDE.md&lt;/code&gt;. It's read automatically at session start and treated as high-priority instructions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Example
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Project: E-Commerce Platform&lt;/span&gt;

&lt;span class="gu"&gt;## Stack&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Next.js 14 with App Router
&lt;span class="p"&gt;-&lt;/span&gt; PostgreSQL via Prisma
&lt;span class="p"&gt;-&lt;/span&gt; Redis for sessions

&lt;span class="gu"&gt;## Commands&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="sb"&gt;`npm run dev`&lt;/span&gt; — Dev server (port 3000)
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="sb"&gt;`npm test`&lt;/span&gt; — Jest tests
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="sb"&gt;`npm run db:migrate`&lt;/span&gt; — Run Prisma migrations

&lt;span class="gu"&gt;## Gotchas&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Auth uses localStorage fallback for Safari (ITP issues)
&lt;span class="p"&gt;-&lt;/span&gt; Don't modify /legacy — it's load-bearing spaghetti
&lt;span class="p"&gt;-&lt;/span&gt; Tests require DATABASE_URL env var

&lt;span class="gu"&gt;## Current Focus&lt;/span&gt;
Migrating from JWT to session-based auth.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why This Works
&lt;/h3&gt;

&lt;p&gt;Here's the key insight from HumanLayer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Claude will ignore the contents of your CLAUDE.md if it decides that it is not relevant to its current task."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This sounds bad but it's actually good. Anthropic intentionally made Claude deprioritise irrelevant instructions. It means you can include project-wide context without bloating every single task.&lt;/p&gt;

&lt;p&gt;But it also means: &lt;strong&gt;make your CLAUDE.md universally relevant&lt;/strong&gt;, not stuffed with edge cases.&lt;/p&gt;




&lt;h2&gt;
  
  
  The 150-200 Rule
&lt;/h2&gt;

&lt;p&gt;HumanLayer's research uncovered a crucial constraint: &lt;strong&gt;LLMs can reliably follow about 150-200 instructions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Claude Code's built-in system prompt already uses ~50 of those. That leaves you with maybe 100-150 before reliability drops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep CLAUDE.md under 300 lines.&lt;/strong&gt; HumanLayer's is under 60.&lt;/p&gt;

&lt;p&gt;Quality over quantity. Every line should earn its place.&lt;/p&gt;
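&lt;p&gt;You can even enforce the budget mechanically (a sketch; the 300-line threshold is the rule of thumb above, not an official limit):&lt;/p&gt;

```shell
# Warn when a CLAUDE.md drifts past the ~300-line budget.
check_lines() {
  lines=$(wc -l "$1" | awk '{print $1}')
  if [ "$lines" -gt 300 ]; then
    echo "warn: $1 is $lines lines"
  else
    echo "ok: $1 is $lines lines"
  fi
}
printf 'demo\n' > /tmp/CLAUDE.md   # tiny sample file for the demo
check_lines /tmp/CLAUDE.md
rm -f /tmp/CLAUDE.md
```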




&lt;h2&gt;
  
  
  CLAUDE.md Anti-Patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Don't Use Claude as a Linter
&lt;/h3&gt;

&lt;p&gt;This is tempting but wrong:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Code Style (DON'T DO THIS)&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Use 2-space indentation
&lt;span class="p"&gt;-&lt;/span&gt; Maximum line length 80 characters
&lt;span class="p"&gt;-&lt;/span&gt; Always use semicolons
&lt;span class="p"&gt;-&lt;/span&gt; Prefer const over let
&lt;span class="p"&gt;-&lt;/span&gt; Use arrow functions for callbacks
&lt;span class="p"&gt;-&lt;/span&gt; ... (50 more rules)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This bloats your context and degrades performance. Claude isn't a linter. It's an AI.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Never send an LLM to do a linter's job."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Use actual linters (Biome, ESLint) and run them through Claude Code hooks. The linter enforces style; Claude focuses on logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don't Auto-Generate
&lt;/h3&gt;

&lt;p&gt;It's tempting to run &lt;code&gt;/init&lt;/code&gt; and call it done. Don't.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"CLAUDE.md is one of the highest leverage points of the harness."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Manually craft every line. You know your project better than any auto-generator. The few minutes spent writing a good CLAUDE.md pays dividends across hundreds of sessions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don't List Every Possible Command
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Commands (DON'T DO THIS)&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; npm run dev
&lt;span class="p"&gt;-&lt;/span&gt; npm run build
&lt;span class="p"&gt;-&lt;/span&gt; npm run test
&lt;span class="p"&gt;-&lt;/span&gt; npm run test:watch
&lt;span class="p"&gt;-&lt;/span&gt; npm run test:coverage
&lt;span class="p"&gt;-&lt;/span&gt; npm run lint
&lt;span class="p"&gt;-&lt;/span&gt; npm run lint:fix
&lt;span class="p"&gt;-&lt;/span&gt; npm run format
&lt;span class="p"&gt;-&lt;/span&gt; npm run typecheck
&lt;span class="p"&gt;-&lt;/span&gt; ... (20 more)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude doesn't need a reference manual. It can read &lt;code&gt;package.json&lt;/code&gt;. Include the non-obvious stuff, skip the obvious.&lt;/p&gt;




&lt;h2&gt;
  
  
  Progressive Disclosure
&lt;/h2&gt;

&lt;p&gt;Instead of cramming everything into CLAUDE.md, use &lt;strong&gt;separate files loaded on demand&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docs/
├── building.md         # How to build &amp;amp; deploy
├── architecture.md     # System design
├── conventions.md      # Code patterns we use
└── testing.md          # Test strategy &amp;amp; setup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then in CLAUDE.md:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Documentation&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Read &lt;span class="sb"&gt;`docs/architecture.md`&lt;/span&gt; when working on system design
&lt;span class="p"&gt;-&lt;/span&gt; Read &lt;span class="sb"&gt;`docs/testing.md`&lt;/span&gt; before writing tests
&lt;span class="p"&gt;-&lt;/span&gt; Read &lt;span class="sb"&gt;`docs/conventions.md`&lt;/span&gt; for code review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gagjt96zeftkamywxjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gagjt96zeftkamywxjl.png" alt="Progressive Disclosure" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Context is loaded when needed, not upfront. Your baseline stays lean.&lt;/p&gt;




&lt;h2&gt;
  
  
  Session Hygiene
&lt;/h2&gt;

&lt;p&gt;Even with great knowledge management, sessions get bloated. Here's how to stay clean.&lt;/p&gt;

&lt;h3&gt;
  
  
  The /compact Command
&lt;/h3&gt;

&lt;p&gt;When context is getting full, &lt;code&gt;/compact&lt;/code&gt; compresses your conversation. It keeps important stuff, drops the noise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timing matters&lt;/strong&gt;: Compact at 70%, not 90%.&lt;/p&gt;

&lt;p&gt;Why? You need room to finish your current task. If you compact at 90% and the task needs another 15% of context, you're stuck.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhymao3sushxn02e1i4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhymao3sushxn02e1i4w.png" alt="compaction-timing" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The /clear Command
&lt;/h3&gt;

&lt;p&gt;Switching tasks? Use &lt;code&gt;/clear&lt;/code&gt; to reset context within your session.&lt;/p&gt;

&lt;p&gt;Less disruptive than starting a new session. Good for "I'm done with feature X, now working on feature Y."&lt;/p&gt;

&lt;h3&gt;
  
  
  Just Start Fresh
&lt;/h3&gt;

&lt;p&gt;Real talk: if you've been debugging something for an hour and want to switch to documentation, &lt;strong&gt;open a new chat&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It feels wasteful. It's not. Each fresh session:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero context rot&lt;/li&gt;
&lt;li&gt;No "lost in the middle" issues&lt;/li&gt;
&lt;li&gt;No cross-contamination from unrelated tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best context management is sometimes no context at all.&lt;/p&gt;




&lt;h2&gt;
  
  
  Iterative Refinement
&lt;/h2&gt;

&lt;p&gt;Your CLAUDE.md should evolve. Here's the loop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add an instruction&lt;/li&gt;
&lt;li&gt;Give Claude a task that relies on it&lt;/li&gt;
&lt;li&gt;Watch what happens&lt;/li&gt;
&lt;li&gt;Refine if it didn't work&lt;/li&gt;
&lt;li&gt;Commit so teammates benefit&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Claude Code has a shortcut: press &lt;code&gt;#&lt;/code&gt; during a session to add instructions that get incorporated into CLAUDE.md automatically. Use it when you discover something worth remembering.&lt;/p&gt;
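&lt;p&gt;For example, after watching Claude repeatedly pick the wrong test invocation, you might press &lt;code&gt;#&lt;/code&gt; and type something like (hypothetical rule, phrased for your project):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Integration tests must run with `npm run test -- --runInBand`;
parallel workers deadlock against the shared test database.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;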




&lt;h2&gt;
  
  
  Hierarchical Summarisation
&lt;/h2&gt;

&lt;p&gt;As sessions progress, use layered approaches:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Age&lt;/th&gt;
&lt;th&gt;Treatment&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Last 10-15 messages&lt;/td&gt;
&lt;td&gt;Keep verbatim&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Earlier in session&lt;/td&gt;
&lt;td&gt;Compress to summaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stable project facts&lt;/td&gt;
&lt;td&gt;Move to CLAUDE.md&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;One-time learnings&lt;/td&gt;
&lt;td&gt;Memory bank files&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Research suggests: &lt;strong&gt;prefer raw &amp;gt; compaction &amp;gt; summarisation&lt;/strong&gt;. Each step loses fidelity. Only summarise when you must.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Crystallise learnings&lt;/strong&gt; — Don't rediscover the same things every session&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory bank files&lt;/strong&gt; — Structured project knowledge that persists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CLAUDE.md is high-leverage&lt;/strong&gt; — Keep it under 300 lines, manually crafted&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Don't use Claude as a linter&lt;/strong&gt; — Real linters via hooks, Claude for logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Progressive disclosure&lt;/strong&gt; — Load detailed context only when needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Session hygiene&lt;/strong&gt; — Compact at 70%, clear between tasks, start fresh when needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Iterate your config&lt;/strong&gt; — CLAUDE.md should evolve with your project&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;We've covered staying out of the Dumbzone (subagents) and making knowledge persist (crystallisation, CLAUDE.md).&lt;/p&gt;

&lt;p&gt;Keep an eye out for Part 4, where we'll cover the more advanced patterns for serious context engineering: backpressure control, the Ralph Loop for long-running autonomous tasks, and the 12 Factor Agents framework.&lt;/p&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md" rel="noopener noreferrer"&gt;Writing a good CLAUDE.md&lt;/a&gt; — HumanLayer's definitive guide&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cline.bot/blog/memory-bank-how-to-make-cline-an-ai-agent-that-never-forgets" rel="noopener noreferrer"&gt;Memory Bank: Making Cline Never Forget&lt;/a&gt; — The original pattern&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://code.claude.com/docs/en/memory" rel="noopener noreferrer"&gt;Manage Claude's memory&lt;/a&gt; — Official docs&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>development</category>
    </item>
    <item>
      <title>Escaping the Dumbzone, Part 2: Subagents — Divide and Conquer</title>
      <dc:creator>diggidydale</dc:creator>
      <pubDate>Wed, 21 Jan 2026 15:50:38 +0000</pubDate>
      <link>https://dev.to/diggidydale/escaping-the-dumbzone-part-2-subagents-divide-and-conquer-1p29</link>
      <guid>https://dev.to/diggidydale/escaping-the-dumbzone-part-2-subagents-divide-and-conquer-1p29</guid>
      <description>&lt;p&gt;&lt;em&gt;Part 2 of 4 in the "Escaping the Dumbzone" series&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;In &lt;a href="https://dev.to/diggidydale/escaping-the-dumbzone-part-1-why-your-ai-gets-stupider-the-more-you-talk-to-it-4d8k"&gt;Part 1&lt;/a&gt;, we covered why your AI gets dumber as context fills up. The "Lost in the Middle" problem, the MCP tool tax, the ~75k token smart zone.&lt;/p&gt;

&lt;p&gt;Now let's fix it.&lt;/p&gt;

&lt;p&gt;The most powerful technique for staying out of the Dumbzone is deceptively simple: &lt;strong&gt;don't put everything in one context&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Are Subagents?
&lt;/h2&gt;

&lt;p&gt;Subagents are specialised AI assistants that run in their own isolated context windows. Instead of one agent doing everything in one giant context, you spawn focused agents for specific tasks.&lt;/p&gt;

&lt;p&gt;When Claude needs to research something, it spawns a subagent. The subagent investigates in its own space, reading files, running searches, hitting dead ends. Then it returns just the answer. Not the whole investigation. Just the insight.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4n6wfutum868n3naii5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4n6wfutum868n3naii5.png" alt="Subagent Context Isolation" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key insight: &lt;strong&gt;your main context receives 30 tokens of insight instead of 30,000 tokens of investigation&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Real Example
&lt;/h2&gt;

&lt;p&gt;You're debugging why authentication fails intermittently in production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Without Subagents
&lt;/h3&gt;

&lt;p&gt;Your main context accumulates everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Read AuthService.java (500 tokens)
- Read SessionRepository.java (400 tokens)
- Search for "token" (200 tokens)
- Read JwtTokenProvider.java (600 tokens)
- Hmm, that wasn't it
- Search for "expire" (150 tokens)
- Read RedisSessionStore.java (450 tokens)
- Dead end, try something else
- Read 5 more files...
- Finally found it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Total added to your main context: ~30,000 tokens&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You found the bug, but you've burned a huge chunk of context on the journey. Now you have less room for actually fixing it.&lt;/p&gt;

&lt;h3&gt;
  
  
  With Subagents
&lt;/h3&gt;

&lt;p&gt;Your main context stays clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You: "Investigate why auth fails intermittently in production"

[Subagent spawns, does all the investigation in its own context]

Subagent returns: "Typo in SessionRepository.java:156:
`getUsrSession()` instead of `getUserSession()`.
The fallback method silently returns null when the
primary lookup fails, causing intermittent auth
failures under load."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Total added to your main context: ~50 tokens&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Same answer. 600x less context consumed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Built-in Subagents
&lt;/h2&gt;

&lt;p&gt;Claude Code comes with several subagents already:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Subagent&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;th&gt;Tools It Gets&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Explore&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fast codebase searching and analysis&lt;/td&gt;
&lt;td&gt;Read-only (Glob, Grep, Read)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Plan&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Research during planning mode&lt;/td&gt;
&lt;td&gt;Analysis tools only&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;General-purpose&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Complex multi-step tasks&lt;/td&gt;
&lt;td&gt;Everything&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;strong&gt;Explore&lt;/strong&gt; agent is your workhorse. Use it any time you need to understand something in the codebase. It can read files, search code, and analyse patterns, but it can't edit anything. Perfect for investigation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating Custom Subagents
&lt;/h2&gt;

&lt;p&gt;Want a subagent for your specific needs? Drop a markdown file in &lt;code&gt;.claude/agents/&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;security-reviewer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Reviews code for security vulnerabilities&lt;/span&gt;
&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Grep&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Glob&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

You're a security-focused code reviewer.
Find OWASP Top 10 vulnerabilities.

&lt;span class="gu"&gt;## What to Look For&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; SQL injection
&lt;span class="p"&gt;-&lt;/span&gt; XSS vulnerabilities
&lt;span class="p"&gt;-&lt;/span&gt; Auth/authz flaws
&lt;span class="p"&gt;-&lt;/span&gt; Sensitive data exposure

&lt;span class="gu"&gt;## Return Format&lt;/span&gt;
A structured report with:
&lt;span class="p"&gt;-&lt;/span&gt; Severity (Critical/High/Medium/Low)
&lt;span class="p"&gt;-&lt;/span&gt; Location (file:line)
&lt;span class="p"&gt;-&lt;/span&gt; What's wrong
&lt;span class="p"&gt;-&lt;/span&gt; How to fix it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. Claude will now spawn this subagent when security review makes sense. The subagent runs in its own context, follows its own rules, and returns a concise report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihzhexyagumx93yt2uqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihzhexyagumx93yt2uqn.png" alt="Custom Subagent Definition" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Subagent Best Practices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Scope Tools Intentionally
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;tools&lt;/code&gt; field in the frontmatter controls what the subagent can do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Read&lt;/span&gt;    &lt;span class="c1"&gt;# Can read files&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Grep&lt;/span&gt;    &lt;span class="c1"&gt;# Can search content&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Glob&lt;/span&gt;    &lt;span class="c1"&gt;# Can find files&lt;/span&gt;
  &lt;span class="c1"&gt;# No Edit, Write, or Bash — read-only&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read-heavy agents (research, review, analysis) shouldn't have write access. Implementation agents need Edit/Write/Bash.&lt;/p&gt;

&lt;p&gt;If you omit the tools field entirely, the subagent gets access to everything. Be intentional.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Define Clear Completion Criteria
&lt;/h3&gt;

&lt;p&gt;Each subagent should know exactly what "done" looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Definition of Done&lt;/span&gt;
Return when you have:
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Identified the root cause
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Found the specific file and line
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Confirmed with evidence (error message, stack trace, etc.)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Suggested a fix
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Vague instructions = vague results = wasted context.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Use Parallel Execution
&lt;/h3&gt;

&lt;p&gt;Subagents can run simultaneously. Researching options for a decision?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Spawn three subagents in parallel:
- "Research Kia Ceed for fleet use"
- "Research Hyundai Kona for fleet use"
- "Research Toyota Yaris for fleet use"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three investigations, three separate contexts, results merge into your main context. Way faster than sequential research, and your main context only receives the summaries.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26z1bcfqhrrnjhzajtog.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26z1bcfqhrrnjhzajtog.png" alt="Parallel Subagent Execution" width="800" height="578"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Watch the Cost
&lt;/h3&gt;

&lt;p&gt;Here's the tradeoff: each subagent is a separate API call. Chaining lots of subagents multiplies your token usage.&lt;/p&gt;

&lt;p&gt;For simple tasks, the overhead isn't worth it. For complex investigations? The context savings are massive.&lt;/p&gt;

&lt;p&gt;Rule of thumb: if the investigation would add more than ~1000 tokens to your main context, consider a subagent.&lt;/p&gt;




&lt;h2&gt;
  
  
  When to Use Subagents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Good fit:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Codebase exploration ("How does auth work here?")&lt;/li&gt;
&lt;li&gt;Bug investigation ("Why is this failing?")&lt;/li&gt;
&lt;li&gt;Research tasks ("What patterns does this codebase use?")&lt;/li&gt;
&lt;li&gt;Security/quality review&lt;/li&gt;
&lt;li&gt;Comparing options&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Skip it:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple, focused tasks&lt;/li&gt;
&lt;li&gt;When you already know where to look&lt;/li&gt;
&lt;li&gt;Quick one-file fixes&lt;/li&gt;
&lt;li&gt;When you need the full context for a decision&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Don't overcomplicate simple work. The overhead isn't worth it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Explore Subagent in Action
&lt;/h2&gt;

&lt;p&gt;Claude Code's built-in Explore agent is incredibly useful. Here's how to leverage it:&lt;/p&gt;

&lt;p&gt;Instead of:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Read AuthService.java, then read SessionRepository.java, then search for token validation..."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Use the Explore agent to understand how authentication
works in this codebase, particularly session management
and token validation."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Explore agent will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search for relevant files&lt;/li&gt;
&lt;li&gt;Read and analyse them&lt;/li&gt;
&lt;li&gt;Follow the code paths&lt;/li&gt;
&lt;li&gt;Return a coherent summary&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Your main context gets the summary. The exploration stays isolated.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Subagents = isolated context&lt;/strong&gt; — Investigation happens separately, only insights return&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;30 tokens vs 30,000&lt;/strong&gt; — That's the difference between insight and investigation log&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scope tools intentionally&lt;/strong&gt; — Read-only for research, full access for implementation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define done clearly&lt;/strong&gt; — Vague instructions waste context on both sides&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parallelise when possible&lt;/strong&gt; — Multiple subagents can run simultaneously&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mind the cost&lt;/strong&gt; — Each subagent is an API call; use for complex tasks&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://code.claude.com/docs/en/sub-agents" rel="noopener noreferrer"&gt;Create custom subagents&lt;/a&gt; — Official Claude Code docs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/VoltAgent/awesome-claude-code-subagents" rel="noopener noreferrer"&gt;100+ Subagent Examples&lt;/a&gt; — VoltAgent collection&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://zachwills.net/how-to-use-claude-code-subagents-to-parallelize-development/" rel="noopener noreferrer"&gt;How to Use Claude Code Subagents&lt;/a&gt; — Zach Wills' guide&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>llm</category>
    </item>
    <item>
      <title>5 Habits of Highly Effective Agentic Engineers</title>
      <dc:creator>diggidydale</dc:creator>
      <pubDate>Mon, 19 Jan 2026 11:38:24 +0000</pubDate>
      <link>https://dev.to/diggidydale/5-habits-of-highly-effective-agentic-engineers-1lo0</link>
      <guid>https://dev.to/diggidydale/5-habits-of-highly-effective-agentic-engineers-1lo0</guid>
      <description>&lt;h2&gt;
  
  
  (Or: Everything Old is New Again)
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This post was inspired by &lt;a href="https://www.linkedin.com/posts/cole-medin-727752184_most-developers-using-ai-coding-assistants-activity-7414834730149376000-lecD/" rel="noopener noreferrer"&gt;Cole Medin's LinkedIn post&lt;/a&gt; outlining five meta-skills used by the top 1% of agentic engineers. What struck me was how familiar these patterns felt—not because they're new, but because they're old. Here's my take on why.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9iwv6lml309cq3v8lxt5.png" alt="habits of an agentic engineer" width="800" height="778"&gt;
&lt;/h2&gt;

&lt;p&gt;There's a lot of hype around "agentic engineering" and "prompt engineering" right now. But here's the thing: &lt;strong&gt;none of this is actually new.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The developers getting the best results from AI coding tools aren't discovering revolutionary techniques. They're applying battle-tested software engineering principles to a new context. If you've been writing software for any length of time, you already know these patterns; you just need to recognise them in a different light.&lt;/p&gt;

&lt;p&gt;Let's break down five habits that make AI coding effective, and trace each one back to the fundamentals we've always known.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. PRD-First Development
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The agentic habit:&lt;/strong&gt; Document before you code. Your Product Requirements Document becomes the source of truth for every AI conversation.&lt;/p&gt;

&lt;h3&gt;
  
  
  The traditional parallel: Requirements Engineering
&lt;/h3&gt;

&lt;p&gt;This is just &lt;strong&gt;requirements documentation&lt;/strong&gt;, the same thing we've been doing (or should have been doing) since the Waterfall days. The difference is who's reading it.&lt;/p&gt;

&lt;p&gt;When you write a PRD for an AI assistant, you're doing exactly what you'd do when onboarding a new team member:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explain the project context&lt;/li&gt;
&lt;li&gt;Define acceptance criteria&lt;/li&gt;
&lt;li&gt;Document constraints and decisions&lt;/li&gt;
&lt;li&gt;Provide enough detail that they can work autonomously&lt;/li&gt;
&lt;/ul&gt;
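&lt;p&gt;In practice, even a skeletal PRD covers those bases (the section names here are just one workable layout):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;# PRD: Password Reset Flow

## Context
Why this feature exists and which systems it touches.

## Requirements
- User requests a reset link via email
- Reset tokens expire after 15 minutes

## Acceptance Criteria
- [ ] Expired tokens return a clear error, not a 500

## Constraints
- Reuse the existing mailer service; no new dependencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;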

&lt;p&gt;The principles are identical to what IEEE 830 taught us about Software Requirements Specifications, or what agile teams do with user stories and acceptance criteria. We've always known that &lt;strong&gt;ambiguous requirements lead to wrong implementations&lt;/strong&gt;. That's true whether the implementer is a junior developer, an offshore team, or an AI.&lt;/p&gt;

&lt;h3&gt;
  
  
  What changes with AI
&lt;/h3&gt;

&lt;p&gt;The AI won't ask clarifying questions unprompted. It won't push back on unrealistic timelines. It will just... start building. So your requirements need to be more explicit than the ones you'd hand a human, who would flag the obvious gaps on their own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional practice:&lt;/strong&gt; "A good spec is one where an engineer can implement it without coming back with questions."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic translation:&lt;/strong&gt; The same standard, just enforced more ruthlessly.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. Modular Rules Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The agentic habit:&lt;/strong&gt; Split your coding rules by concern. Load only what's relevant to the current task.&lt;/p&gt;

&lt;h3&gt;
  
  
  The traditional parallel: Separation of Concerns
&lt;/h3&gt;

&lt;p&gt;This is the &lt;strong&gt;Single Responsibility Principle&lt;/strong&gt; applied to documentation. It's &lt;strong&gt;modular design&lt;/strong&gt;. It's the same reason we don't put all our code in one file.&lt;/p&gt;

&lt;p&gt;We've known since the 1970s that coupling is the enemy of maintainability. David Parnas wrote about information hiding in 1972. The SOLID principles formalised it. Every architecture guide ever written says "separate concerns."&lt;/p&gt;

&lt;p&gt;When you dump 5,000 lines of coding standards into an AI context window for a simple CSS fix, you're creating the documentation equivalent of a God Object. Everything is coupled. Nothing is focused. The signal-to-noise ratio plummets.&lt;/p&gt;

&lt;h3&gt;
  
  
  What changes with AI
&lt;/h3&gt;

&lt;p&gt;The "cost" of loading irrelevant context isn't just cognitive load, it's literal token usage and degraded output quality. The AI will try to apply rules that don't matter, or get confused by contradictory guidance across different domains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional practice:&lt;/strong&gt; "Load what you need, when you need it" (dependency injection, lazy loading, microservices)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic translation:&lt;/strong&gt; Same principle, applied to context and instructions.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. Automate Repetitive Tasks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The agentic habit:&lt;/strong&gt; If you do something more than twice, make it a command.&lt;/p&gt;

&lt;h3&gt;
  
  
  The traditional parallel: DRY and Scripting
&lt;/h3&gt;

&lt;p&gt;This is literally just &lt;strong&gt;Don't Repeat Yourself&lt;/strong&gt;, the principle Andy Hunt and Dave Thomas gave us in The Pragmatic Programmer back in 1999.&lt;/p&gt;

&lt;p&gt;We've always automated repetitive tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shell scripts for common operations&lt;/li&gt;
&lt;li&gt;Makefiles and build systems&lt;/li&gt;
&lt;li&gt;CI/CD pipelines&lt;/li&gt;
&lt;li&gt;Git hooks&lt;/li&gt;
&lt;li&gt;IDE snippets and templates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The only difference is the automation medium. Instead of writing a bash script, you're writing a reusable prompt. Instead of a CI pipeline, you're creating a slash command. The principle is identical: &lt;strong&gt;encode the process once, execute it many times.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What changes with AI
&lt;/h3&gt;

&lt;p&gt;The barrier to automation is lower. You don't need to know bash scripting to create a &lt;code&gt;/commit&lt;/code&gt; command that follows your team's conventions. Natural language becomes the automation layer.&lt;/p&gt;
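&lt;p&gt;In Claude Code, that &lt;code&gt;/commit&lt;/code&gt; command is just a markdown file at &lt;code&gt;.claude/commands/commit.md&lt;/code&gt; (a minimal sketch; the conventions are placeholders for your team's):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Stage and commit the current changes.

- Use Conventional Commits format (feat/fix/chore...)
- Keep the subject line under 72 characters
- Reference the ticket number from the branch name if present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;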

&lt;p&gt;But the discipline is the same: notice repetition, extract it, name it, reuse it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional practice:&lt;/strong&gt; "Three strikes and you refactor" (Martin Fowler's Rule of Three)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic translation:&lt;/strong&gt; Three prompts and you make it a command.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. The Context Reset
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The agentic habit:&lt;/strong&gt; Planning and execution are separate conversations. Clear your context and start fresh.&lt;/p&gt;

&lt;h3&gt;
  
  
  The traditional parallel: Stateless Design and Clean Builds
&lt;/h3&gt;

&lt;p&gt;This is &lt;strong&gt;stateless architecture&lt;/strong&gt; applied to conversations. It's the same reason we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do clean builds instead of incremental ones when things get weird&lt;/li&gt;
&lt;li&gt;Restart services instead of debugging corrupted state&lt;/li&gt;
&lt;li&gt;Prefer stateless microservices over stateful monoliths&lt;/li&gt;
&lt;li&gt;Run tests in isolated environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We've always known that accumulated state causes bugs. Every senior engineer has a story about a "works on my machine" issue that was solved by wiping derived data and rebuilding from scratch. The phrase "have you tried turning it off and on again" exists because &lt;strong&gt;resetting to a known good state is a legitimate debugging technique.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI conversations accumulate state too. Early misunderstandings persist. Bad assumptions compound. The context window becomes polluted with failed approaches and outdated decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  What changes with AI
&lt;/h3&gt;

&lt;p&gt;The fix is the same: checkpoint your progress (write it to a document), clear the state (start a new conversation), and continue from the checkpoint. It's exactly like committing your code before a risky refactor so you can reset if things go wrong.&lt;/p&gt;
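&lt;p&gt;The checkpoint can be as simple as a handoff file you write before clearing (the filename and headings are just a suggestion):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;# HANDOFF.md

## Done
- Migrated auth routes to the new middleware

## In Progress
- Session store swap; Redis config half-finished

## Next
- Update integration tests, then remove the legacy store

## Gotchas
- Don't touch `legacy/session.ts` until tests pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The next conversation starts by reading that file instead of inheriting a polluted context.&lt;/p&gt;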

&lt;p&gt;&lt;strong&gt;Traditional practice:&lt;/strong&gt; "When in doubt, rebuild from scratch" / "Prefer stateless over stateful"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic translation:&lt;/strong&gt; Capture state in documents, not in conversation history.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. System Evolution Mindset
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The agentic habit:&lt;/strong&gt; Every bug is an opportunity to improve your AI coding system.&lt;/p&gt;

&lt;h3&gt;
  
  
  The traditional parallel: Continuous Improvement and Blameless Post-mortems
&lt;/h3&gt;

&lt;p&gt;This is just &lt;strong&gt;kaizen&lt;/strong&gt;, the continuous improvement mindset that Toyota made famous and that agile adopted as retrospectives.&lt;/p&gt;

&lt;p&gt;When production breaks, good teams don't just fix the immediate issue. They ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why did this happen?&lt;/li&gt;
&lt;li&gt;How do we prevent this class of problem?&lt;/li&gt;
&lt;li&gt;What systemic change would catch this earlier?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the same thinking behind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blameless post-mortems&lt;/li&gt;
&lt;li&gt;"Five Whys" analysis&lt;/li&gt;
&lt;li&gt;Adding regression tests after bugs&lt;/li&gt;
&lt;li&gt;Updating runbooks after incidents&lt;/li&gt;
&lt;li&gt;The DevOps feedback loop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The difference with AI coding is what you're improving. Instead of adding a test or updating a runbook, you might add a rule to your &lt;code&gt;.agents/&lt;/code&gt; directory or update your PRD template.&lt;/p&gt;

&lt;h3&gt;
  
  
  What changes with AI
&lt;/h3&gt;

&lt;p&gt;The feedback loop is tighter. You can observe AI mistakes in real-time and immediately encode the correction. It's like having continuous deployment for your development process itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional practice:&lt;/strong&gt; "Every incident is a learning opportunity" / "Fix the system, not the symptom"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic translation:&lt;/strong&gt; Every AI mistake becomes a rule that prevents future mistakes.&lt;/p&gt;
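&lt;p&gt;To make "every mistake becomes a rule" concrete, here's a minimal sketch of encoding a correction into a rules directory. The &lt;code&gt;.agents/&lt;/code&gt; layout and the example rule are illustrative, not a standard:&lt;/p&gt;

```python
from pathlib import Path

RULES_DIR = Path(".agents")  # rules directory, as mentioned in the article

def add_rule(topic, rule):
    """Encode a correction as a persistent rule so the same mistake
    is caught before the next session, not during it."""
    RULES_DIR.mkdir(exist_ok=True)
    rule_file = RULES_DIR / f"{topic}.md"
    existing = rule_file.read_text() if rule_file.exists() else f"# {topic} rules\n"
    rule_file.write_text(existing + f"- {rule}\n")

# Hypothetical incident: the AI reached for a deprecated library,
# so we pin the replacement as a rule it will load next session.
add_rule("http-clients", "Use httpx, not requests, for all new HTTP code")
```

&lt;p&gt;The rule file is cheap to write in the moment, and it compounds: every future session starts with the correction already in place.&lt;/p&gt;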




&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;If you've been doing software engineering for a while, you already have the mental models you need. Agentic engineering isn't a new discipline; it's the application of existing disciplines to a new tool.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Agentic Habit&lt;/th&gt;
&lt;th&gt;Traditional Principle&lt;/th&gt;
&lt;th&gt;Classic Source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;PRD-First Development&lt;/td&gt;
&lt;td&gt;Requirements Engineering&lt;/td&gt;
&lt;td&gt;IEEE 830, User Stories&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Modular Rules&lt;/td&gt;
&lt;td&gt;Separation of Concerns&lt;/td&gt;
&lt;td&gt;Parnas (1972), SOLID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Automate Tasks&lt;/td&gt;
&lt;td&gt;DRY, Scripting&lt;/td&gt;
&lt;td&gt;Pragmatic Programmer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context Reset&lt;/td&gt;
&lt;td&gt;Stateless Design&lt;/td&gt;
&lt;td&gt;12-Factor App, Clean Builds&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System Evolution&lt;/td&gt;
&lt;td&gt;Continuous Improvement&lt;/td&gt;
&lt;td&gt;Toyota Way, DevOps&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The engineers struggling with AI tools are often those who've forgotten (or never learned) these fundamentals. The ones thriving are those who recognise that &lt;strong&gt;the principles that made us good engineers still apply—they just have a new surface area.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What to do next
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audit your current approach.&lt;/strong&gt; Which of these principles are you already applying? Which have you let slip?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pick one to improve.&lt;/strong&gt; Don't try to transform everything at once. Start with the habit that maps to your existing strengths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Share what works.&lt;/strong&gt; As a consultancy, we should be codifying these patterns across projects. What rules, templates, or commands have you created that others could use?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;em&gt;The best "agentic engineers" aren't learning new tricks, they're remembering old ones. Good engineering is good engineering, regardless of whether your pair programmer is human or artificial.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>softwareengineering</category>
      <category>development</category>
    </item>
    <item>
      <title>Escaping the Dumbzone, Part 1: Why Your AI Gets Stupider the More You Talk to It</title>
      <dc:creator>diggidydale</dc:creator>
      <pubDate>Fri, 16 Jan 2026 10:06:33 +0000</pubDate>
      <link>https://dev.to/diggidydale/escaping-the-dumbzone-part-1-why-your-ai-gets-stupider-the-more-you-talk-to-it-4d8k</link>
      <guid>https://dev.to/diggidydale/escaping-the-dumbzone-part-1-why-your-ai-gets-stupider-the-more-you-talk-to-it-4d8k</guid>
      <description>&lt;p&gt;&lt;em&gt;Part 1 of 4 in the "Escaping the Dumbzone" series&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Look, we've all been there. You're an hour into a coding session with Claude, and suddenly it starts doing weird stuff. Forgetting things you told it five minutes ago. Ignoring your instructions. Making suggestions that feel... off.&lt;/p&gt;

&lt;p&gt;You haven't done anything wrong. Your AI just wandered into &lt;strong&gt;the Dumbzone&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Even Is the Dumbzone?
&lt;/h2&gt;

&lt;p&gt;Here's the thing nobody tells you: &lt;strong&gt;giving your AI more context often makes it dumber&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Not a little dumber. Research shows accuracy can tank from 87% to 54% just from context overload. That's not a typo—more information literally made the model perform worse.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4d3qbqk6ylhnr3dvxdcp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4d3qbqk6ylhnr3dvxdcp.png" alt="dumbzone-curve" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Teams who've figured this out follow a simple rule: once you hit 40% context usage, expect weird behaviour. HumanLayer takes it further—they say stay under ~75k tokens for Claude to remain in the "smart zone."&lt;/p&gt;

&lt;p&gt;Beyond that? You're in the Dumbzone. And no amount of clever prompting will save you.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Does This Happen?
&lt;/h2&gt;

&lt;p&gt;Two main reasons, both backed by research.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Lost in the Middle
&lt;/h3&gt;

&lt;p&gt;Stanford researchers found something wild: LLMs have a U-shaped attention curve. They pay attention to the beginning of context. They pay attention to the end. But the middle? That's the "I'm not really listening" zone.&lt;/p&gt;

&lt;p&gt;Performance degrades by over 30% when critical information sits in the middle versus at the start or end. Thirty percent. Just from position.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae48t5vicyzkivy0kmyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fae48t5vicyzkivy0kmyg.png" alt="The U-Shaped Attention Curve" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's why this matters for coding: every file you read, every tool output, every conversation turn—it all piles up in the middle. Your actual instructions get pushed into the zone where they're most likely to be ignored.&lt;/p&gt;

&lt;p&gt;You're not imagining that Claude forgot what you said. It literally can't see it as well anymore.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The MCP Tool Tax
&lt;/h3&gt;

&lt;p&gt;This one's sneaky. Connect five MCP servers and you've burned &lt;strong&gt;50,000 tokens before typing anything&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each MCP connection loads dozens of tool definitions. Five servers × dozens of tools = a massive chunk of your context window consumed by stuff you might not even use this session.&lt;/p&gt;

&lt;p&gt;That's roughly 40% of a typical 128k-token context window. Gone. On tool definitions.&lt;/p&gt;

&lt;p&gt;You haven't started working yet. You're already approaching the Dumbzone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo89wi587xrfwcbi1sgha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo89wi587xrfwcbi1sgha.png" alt="Context Budget Breakdown" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;
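&lt;p&gt;You can sanity-check your own setup with back-of-envelope arithmetic. The numbers below are illustrative (five servers at ~10k tokens of tool definitions each, a 128k window), not measurements of any particular client:&lt;/p&gt;

```python
def context_budget(window_tokens, tool_def_tokens, system_prompt_tokens):
    """Back-of-envelope check of how much of the window is spent
    before the first user message."""
    overhead = tool_def_tokens + system_prompt_tokens
    used_pct = 100 * overhead / window_tokens
    remaining = window_tokens - overhead
    return used_pct, remaining

used_pct, remaining = context_budget(
    window_tokens=128_000,
    tool_def_tokens=5 * 10_000,   # five MCP servers, ~10k tokens each
    system_prompt_tokens=3_000,
)
print(f"{used_pct:.0f}% of the window gone before the first prompt")
```

&lt;p&gt;With those assumptions, you've spent about 41% of the window on overhead, leaving 75k tokens of working room. Swap in your own numbers and the picture is often worse.&lt;/p&gt;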




&lt;h2&gt;
  
  
  The Smart Zone
&lt;/h2&gt;

&lt;p&gt;HumanLayer coined this term, and it's useful: there's a &lt;strong&gt;~75k token "smart zone"&lt;/strong&gt; where Claude performs well. Beyond that, things get weird.&lt;/p&gt;

&lt;p&gt;But it's not just about total tokens. It's about what those tokens are.&lt;/p&gt;

&lt;p&gt;Every line of test output like &lt;code&gt;PASS src/utils/helper.test.ts&lt;/code&gt; is waste. It's consuming tokens for information that could be conveyed in a single character: ✓&lt;/p&gt;

&lt;p&gt;Every file you read "just in case" is context you might not need.&lt;/p&gt;

&lt;p&gt;Every verbose error message is pushing your actual instructions further into the forgotten middle.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Deterministic is better than non-deterministic. If you already know what matters, don't leave it to a model to churn through 1000s of junk tokens to decide."&lt;br&gt;
— HumanLayer&lt;/p&gt;
&lt;/blockquote&gt;
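&lt;p&gt;One deterministic way to apply that advice is to filter tool output before it reaches the model: keep failures verbatim, collapse passes to a count. A minimal sketch, assuming Jest-style &lt;code&gt;PASS&lt;/code&gt;/&lt;code&gt;FAIL&lt;/code&gt; lines like the example above:&lt;/p&gt;

```python
import re

def compress_test_output(raw):
    """Collapse passing-test noise to a count and keep failures verbatim,
    so the tokens that reach the model are the ones that matter."""
    passes = 0
    kept = []
    for line in raw.splitlines():
        if re.match(r"PASS\b", line):
            passes += 1
        elif line.strip():
            kept.append(line)
    summary = [f"✓ {passes} passing"] if passes else []
    return "\n".join(summary + kept)

raw = """PASS src/utils/helper.test.ts
PASS src/auth/session.test.ts
FAIL src/api/routes.test.ts
  Expected 200, received 500"""
print(compress_test_output(raw))
```

&lt;p&gt;Four lines of runner output become three, and the ratio improves dramatically on real suites with hundreds of passing tests. The failure detail, the only part the model needs, survives untouched.&lt;/p&gt;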




&lt;h2&gt;
  
  
  The Symptoms
&lt;/h2&gt;

&lt;p&gt;How do you know you're in the Dumbzone? Watch for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Instruction amnesia&lt;/strong&gt;: Claude ignores rules it followed perfectly 10 minutes ago&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context bleed&lt;/strong&gt;: It pulls in irrelevant details from earlier conversation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Weird outputs&lt;/strong&gt;: Responses that feel off, unfocused, or oddly generic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repetition&lt;/strong&gt;: Suggesting things you already tried or discussed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Confidence without competence&lt;/strong&gt;: Sounding sure while being wrong&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're seeing these, check your context meter. You're probably deeper than you think.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The Dumbzone is real, but it's not inevitable. Over the next three parts, we'll cover:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 2: Subagents&lt;/strong&gt; — The most powerful technique for staying out of the Dumbzone. Isolate your exploration, get insights instead of investigation logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 3: Knowledge &amp;amp; Configuration&lt;/strong&gt; — Crystallising learnings, writing effective CLAUDE.md files, and session hygiene that actually works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Part 4: Advanced Patterns&lt;/strong&gt; — Backpressure control, the Ralph Loop for long-running tasks, and the 12 Factor Agents framework.&lt;/p&gt;

&lt;p&gt;The goal isn't to avoid using context. It's to use it intentionally.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;More context ≠ better results&lt;/strong&gt; — Performance degrades sharply after 40% usage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The middle gets ignored&lt;/strong&gt; — LLMs have U-shaped attention; beginning and end matter most&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool definitions are expensive&lt;/strong&gt; — MCP servers can consume 40%+ before you start&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stay in the smart zone&lt;/strong&gt; — Aim for under 75k tokens of actual useful content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Watch for symptoms&lt;/strong&gt; — Instruction amnesia, weird outputs, and context bleed mean you're too deep&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://arxiv.org/abs/2307.03172" rel="noopener noreferrer"&gt;Lost in the Middle: How Language Models Use Long Contexts&lt;/a&gt; — The Stanford research paper&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md" rel="noopener noreferrer"&gt;Writing a good CLAUDE.md&lt;/a&gt; — HumanLayer's guide&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.getmaxim.ai/articles/context-window-management-strategies-for-long-context-ai-agents-and-chatbots/" rel="noopener noreferrer"&gt;Context Window Management Strategies&lt;/a&gt; — Maxim AI&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>development</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
