<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Simon Wang</title>
    <description>The latest articles on DEV Community by Simon Wang (@thesystemistsimon).</description>
    <link>https://dev.to/thesystemistsimon</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3727425%2F7750e974-a417-48bd-9f2e-9ef323d80dc0.png</url>
      <title>DEV Community: Simon Wang</title>
      <link>https://dev.to/thesystemistsimon</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thesystemistsimon"/>
    <language>en</language>
    <item>
      <title>I Made AI Study My Codebase Before Writing a Single Line</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Sat, 14 Feb 2026 22:47:39 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/i-made-ai-study-my-codebase-before-writing-a-single-line-3cic</link>
      <guid>https://dev.to/thesystemistsimon/i-made-ai-study-my-codebase-before-writing-a-single-line-3cic</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Image Photo by Vitaly Gariev on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;4 practices for building context that survives between sessions&lt;/h2&gt;




&lt;p&gt;Adding "because" to corrections helps AI apply principles within a session. But sessions end. Tomorrow, AI starts fresh.&lt;/p&gt;

&lt;p&gt;What if AI already knew your project's patterns at the start of every session?&lt;/p&gt;

&lt;p&gt;That's what this article is about: building context that persists. &lt;/p&gt;

&lt;h2&gt;Bootstrap: Have AI Read Your Code First&lt;/h2&gt;

&lt;p&gt;Before you write any instructions manually, let AI do the initial work.&lt;/p&gt;

&lt;p&gt;An example prompt I might use, for illustration:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Read through this codebase. What patterns do you see that I'd probably correct you on if you got them wrong? Focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom types we use instead of standard library types&lt;/li&gt;
&lt;li&gt;Architecture boundaries (what shouldn't call what)&lt;/li&gt;
&lt;li&gt;External service constraints (rate limits, costs, timeouts)&lt;/li&gt;
&lt;li&gt;Conventions that appear consistently across files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Format each pattern as: "[Do X] because [Y]"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI scans your code and surfaces patterns. Not everything it finds will be right. But it gives you a starting point, faster than writing from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A note on AI-generated "because" statements:&lt;/strong&gt; The bootstrap prompt asks AI to hypothesize reasons for patterns it observes in your actual code files, not guess from generic training data. But these are still inferences, and some will be wrong. Treat the output as a first draft. When a "because" doesn't match reality, correct it. The exercise of reviewing and fixing these explanations often surfaces conventions you hadn't explicitly articulated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where this works best:&lt;/strong&gt; Private codebases with conventions AI hasn't seen in training. For popular open-source projects, AI might already "know" the patterns from training data, making the bootstrap less revealing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set expectations:&lt;/strong&gt; This isn't "set and forget." The instruction file is a living document. New patterns emerge, old ones become obsolete. Budget 10 minutes monthly to review and prune. The payoff is fewer repeated corrections, not zero corrections.&lt;/p&gt;

&lt;p&gt;What comes back might look something like this (the examples are illustrative, not from any real codebase):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Use `process[Entity]Data` naming for transformation functions because the codebase follows this pattern consistently
- Keep domain logic in /src/domain because it's separated from infrastructure for testing
- Add retry with backoff on auth service calls because comments mention cold-start latency issues
- Use repository interfaces because the codebase follows dependency injection patterns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Review this. Keep what's accurate. Discard what's wrong or outdated. Edit what's almost right.&lt;/p&gt;

&lt;p&gt;This is your initial instruction file. Save it somewhere your AI tool can access:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor/Claude Code:&lt;/strong&gt; Add to &lt;code&gt;.cursorrules&lt;/code&gt;, &lt;code&gt;CLAUDE.md&lt;/code&gt;, or project instructions (loads automatically each session)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT:&lt;/strong&gt; Save to Custom GPT instructions (loads automatically) or paste at session start (manual)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude Projects:&lt;/strong&gt; Add to project knowledge (loads automatically)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other tools:&lt;/strong&gt; Keep in a file you can reference when starting new sessions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The trade-off:&lt;/strong&gt; A detailed instruction file eats context window every session. Start lean (key patterns only) and expand as you identify what actually reduces corrections. If it grows past ~1000 words, consider splitting by module.&lt;/p&gt;

&lt;h2&gt;The Instruction File&lt;/h2&gt;

&lt;p&gt;Every AI tool has somewhere to put persistent context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cursor:&lt;/strong&gt; Project rules, &lt;code&gt;.cursorrules&lt;/code&gt; file, or &lt;code&gt;AGENTS.md&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude:&lt;/strong&gt; Projects with custom instructions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot:&lt;/strong&gt; Custom instructions in settings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChatGPT:&lt;/strong&gt; Custom instructions or memory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The specific mechanism matters less than having ONE place where project context lives and loads automatically.&lt;/p&gt;

&lt;p&gt;What goes in this file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming Conventions&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Use `process[Entity]Data` for transformation functions because this is 
  the established pattern across all data processing modules
- Prefix internal API routes with `/internal/` because our gateway uses 
  this to block external access
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;External Service Constraints&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Add retry with exponential backoff on auth service calls because the 
  service has 2-3 second cold-start latency after idle periods
- Cache geocoding responses for 24 hours because the upstream API charges 
  $0.005 per call and has 200ms latency
- Set 5-second timeouts on inventory checks because the warehouse API 
  occasionally hangs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
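&lt;p&gt;As an illustration of how one of these lines lands in code, here is a minimal sketch of the 24-hour geocoding cache (hypothetical Python: &lt;code&gt;geocode_cached&lt;/code&gt; and the demo values are invented; the TTL and per-call cost come from the instruction line above):&lt;/p&gt;

```python
import time

TTL_SECONDS = 24 * 60 * 60   # cache for 24h: the upstream API charges $0.005/call

_cache = {}

def geocode_cached(address, geocode, now=time.time):
    """Return a cached geocoding result while it is still fresh.

    Illustrative sketch: `geocode` stands in for the paid upstream API call.
    Entries store (expires_at, value); an entry is served only while
    expires_at is still in the future.
    """
    entry = _cache.get(address)
    if entry is not None and entry[0] > now():
        return entry[1]                       # still fresh: skip the paid call
    value = geocode(address)
    _cache[address] = (now() + TTL_SECONDS, value)
    return value


# Demo: the second lookup within the TTL never hits the upstream API.
calls = {"n": 0}
def fake_geocode(addr):
    calls["n"] += 1          # counts paid upstream calls; ignores addr
    return (51.5, -0.12)

first = geocode_cached("10 Downing St", fake_geocode)
second = geocode_cached("10 Downing St", fake_geocode)
```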



&lt;p&gt;&lt;strong&gt;Architecture Boundaries&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Don't add repository calls in domain functions because this layer gets 
  reused in the offline-first mobile app where there's no database
- Keep the pricing calculator stateless because this service runs as a 
  Lambda and state doesn't persist between invocations
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Learned from Incidents&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Validate phone numbers with libphonenumber because we support international 
  formats and need carrier data for SMS routing
- Log the full request before calling the payment gateway because we've 
  lost debugging context when their API times out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice: every pattern has "because." The reasoning is what makes these transferable, not just rules to follow blindly.&lt;/p&gt;

&lt;h2&gt;What This Applies To&lt;/h2&gt;

&lt;p&gt;This approach works with AI tools that support persistent instructions: Cursor rules, Claude Projects, Copilot custom instructions, and similar. You need somewhere to store context that loads automatically at session start.&lt;/p&gt;

&lt;p&gt;For pure autocomplete without instruction file support, the bootstrap and capture practices still help you think clearly about patterns, even if you can't feed them back to the tool directly.&lt;/p&gt;

&lt;h2&gt;Capturing New Patterns&lt;/h2&gt;

&lt;p&gt;Your instruction file will grow over time. The source: corrections you make during work.&lt;/p&gt;

&lt;p&gt;If you've adopted the "because" habit from the companion article, you're already generating good candidates. The signal that something belongs in the instruction file: you've corrected the same thing twice.&lt;/p&gt;

&lt;p&gt;First correction: maybe a one-off. Second correction: it's a pattern. Save it.&lt;/p&gt;

&lt;p&gt;The workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You correct AI with "because" during a session&lt;/li&gt;
&lt;li&gt;The correction helps within that session&lt;/li&gt;
&lt;li&gt;If you make the same correction again (same session or different), copy it to the instruction file&lt;/li&gt;
&lt;li&gt;Now it loads automatically in future sessions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't try to anticipate everything. Let the file grow from actual corrections. What you actually correct is more valuable than what you think you might correct.&lt;/p&gt;

&lt;h2&gt;Per-Task Review: Ask Before Generating&lt;/h2&gt;

&lt;p&gt;Bootstrap creates your initial file. Capturing grows it over time. But there's a third practice that catches misunderstandings before they become code.&lt;/p&gt;

&lt;p&gt;Before significant generation, ask AI what it found.&lt;/p&gt;

&lt;p&gt;The prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Before you generate this, tell me: what patterns in the relevant code would you follow? What constraints would you respect?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI shows its working. You review before it generates, not after.&lt;/p&gt;

&lt;p&gt;This catches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Patterns AI noticed that you didn't intend to follow&lt;/li&gt;
&lt;li&gt;Constraints AI missed that you expected it to catch&lt;/li&gt;
&lt;li&gt;Conflicts between patterns that need resolution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;You ask AI to add a new payment method. Before it generates, you ask what patterns it would follow.&lt;/p&gt;

&lt;p&gt;AI responds: "I'd use the PaymentError wrapper, batch API calls, and follow the existing repository pattern in PaymentRepository."&lt;/p&gt;

&lt;p&gt;You notice: "Actually, this new provider doesn't have rate limits. Don't batch. And we're trying to move away from the repository pattern for new code, use the port/adapter pattern instead."&lt;/p&gt;

&lt;p&gt;You've prevented two wrong guesses before they became code to review and correct.&lt;/p&gt;

&lt;p&gt;This is deliberate discovery: actively surfacing AI's assumptions before acting on them.&lt;/p&gt;

&lt;h2&gt;Maintenance&lt;/h2&gt;

&lt;p&gt;The instruction file isn't precious. It evolves.&lt;/p&gt;

&lt;p&gt;I review mine roughly monthly. Not on a rigid schedule, just when I notice it's been a while or when corrections start repeating.&lt;/p&gt;

&lt;p&gt;What I look for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Outdated patterns:&lt;/strong&gt; We migrated off Stripe. Remove those constraints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conflicts:&lt;/strong&gt; Two patterns that contradict each other. Resolve or clarify scope.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bloat:&lt;/strong&gt; Is the file getting long enough that AI might ignore parts? Split or prioritize.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Missing context:&lt;/strong&gt; Patterns I've corrected repeatedly that aren't in the file yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal isn't a perfect document. It's a living reference that makes AI more useful over time.&lt;/p&gt;

&lt;h2&gt;Example: My Instruction File Structure&lt;/h2&gt;

&lt;p&gt;Here's a simplified version of how I organize mine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Project Context for AI&lt;/span&gt;

&lt;span class="gu"&gt;## About This Project&lt;/span&gt;
Brief description: what it does, main technologies, team conventions.

&lt;span class="gu"&gt;## Naming Conventions&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Use &lt;span class="sb"&gt;`process[Entity]Data`&lt;/span&gt; for transformation functions because...
&lt;span class="p"&gt;-&lt;/span&gt; Prefix internal routes with &lt;span class="sb"&gt;`/internal/`&lt;/span&gt; because...

&lt;span class="gu"&gt;## External Services&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Auth service: retry with backoff, cold-start latency
&lt;span class="p"&gt;-&lt;/span&gt; Geocoding API: cache 24h, $0.005/call
&lt;span class="p"&gt;-&lt;/span&gt; Warehouse API: 5s timeout, occasionally hangs

&lt;span class="gu"&gt;## Architecture Rules&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Domain layer: no infrastructure dependencies
&lt;span class="p"&gt;-&lt;/span&gt; Services: stateless (Lambda deployment)
&lt;span class="p"&gt;-&lt;/span&gt; New code: port/adapter pattern, not repository

&lt;span class="gu"&gt;## Domain Requirements&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Validate phones with libphonenumber (international + carrier)
&lt;span class="p"&gt;-&lt;/span&gt; Log before external calls (debugging context)

&lt;span class="gu"&gt;## Current Migrations (Temporary)&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Moving from repository pattern to port/adapter
&lt;span class="p"&gt;-&lt;/span&gt; Old code uses X, new code should use Y

&lt;span class="gu"&gt;## Still Figuring Out&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Best way to handle cross-service transactions
&lt;span class="p"&gt;-&lt;/span&gt; Whether to split this into multiple services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "Still Figuring Out" section is important. It tells AI where you don't have answers yet, so it doesn't confidently apply a pattern that you're still questioning.&lt;/p&gt;

&lt;h2&gt;The System&lt;/h2&gt;

&lt;p&gt;Four practices that build on each other:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Bootstrap:&lt;/strong&gt; AI reads codebase, creates initial instruction file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Capture:&lt;/strong&gt; Save "because" corrections that repeat&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review:&lt;/strong&gt; Ask AI what patterns it found before generating&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintain:&lt;/strong&gt; Monthly light-touch review&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You don't need all four to start. Bootstrap once, then capture as you work. Add per-task review for complex generations. Maintain when the file feels stale.&lt;/p&gt;

&lt;p&gt;This is my system. Yours will look different. The principles transfer: give AI the context it needs, and keep that context current.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>devtools</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Why Explaining Your AI Corrections Makes Them Stick</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Sat, 14 Feb 2026 22:44:12 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/why-explaining-your-ai-corrections-makes-them-stick-6mg</link>
      <guid>https://dev.to/thesystemistsimon/why-explaining-your-ai-corrections-makes-them-stick-6mg</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Image Photo by Scott Graham on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;A habit that turned repetitive frustration into one-time fixes&lt;/h2&gt;




&lt;p&gt;"Rename this to &lt;code&gt;processUserData&lt;/code&gt; &lt;em&gt;because we use &lt;code&gt;process[Entity]Data&lt;/code&gt; pattern for all transformation functions&lt;/em&gt;."&lt;/p&gt;

&lt;p&gt;That single word, "because," reduced how often I correct AI in a session.&lt;/p&gt;

&lt;p&gt;Not across sessions. AI doesn't remember those (I'll cover this in the next article). But within a single session, adding "because" to my corrections changed how AI applied them.&lt;/p&gt;

&lt;p&gt;Here's why that matters.&lt;/p&gt;

&lt;h2&gt;Why AI Guesses&lt;/h2&gt;

&lt;p&gt;AI tools are designed to act on available information rather than ask clarifying questions. That's what makes them fast and fluid. You don't get peppered with "Did you mean X or Y?" every time you ask for something.&lt;/p&gt;

&lt;p&gt;The tradeoff: AI guesses. Sometimes the guess is wrong, and you correct it.&lt;/p&gt;

&lt;p&gt;But here's what I missed at first: when you correct AI, it guesses again. It has to infer what you meant by the correction.&lt;/p&gt;

&lt;p&gt;"Rename this to &lt;code&gt;processUserData&lt;/code&gt;" could mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want this specific function renamed&lt;/li&gt;
&lt;li&gt;You follow a naming convention for all transformers&lt;/li&gt;
&lt;li&gt;You're matching an existing pattern in the codebase&lt;/li&gt;
&lt;li&gt;You just prefer it that way for no particular reason&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without knowing which, AI picks whichever seems most likely based on its training.&lt;/p&gt;

&lt;h2&gt;The Default: Match to General Patterns&lt;/h2&gt;

&lt;p&gt;An LLM is fundamentally a pattern-matching system, but it matches against patterns from its training data, not your specific codebase. Without more context, AI defaults to what it's seen most often across millions of codebases.&lt;/p&gt;

&lt;p&gt;If you say "rename to &lt;code&gt;processUserData&lt;/code&gt;," AI applies the change where you pointed. When you ask it to create the next transformation function, it might name it &lt;code&gt;handleOrderInfo&lt;/code&gt; or &lt;code&gt;convertProductData&lt;/code&gt;, matching common patterns from its training, not your local convention.&lt;/p&gt;

&lt;p&gt;The problem: when your correction reflects a &lt;em&gt;local&lt;/em&gt; pattern (your naming conventions, your architectural constraints, your domain rules), AI has no way to distinguish it from a one-off preference. You may end up correcting the same underlying issue in different places, wondering why AI "didn't learn."&lt;/p&gt;

&lt;p&gt;It did learn. It learned you wanted that one function renamed. It didn't learn that you follow a convention, because you didn't tell it.&lt;/p&gt;

&lt;h2&gt;The Fix: Add "Because"&lt;/h2&gt;

&lt;p&gt;"Because" changes AI's interpretation.&lt;/p&gt;

&lt;p&gt;"Rename this to &lt;code&gt;processUserData&lt;/code&gt;" is a one-off request. Apply here, with no signal about where else it applies.&lt;/p&gt;

&lt;p&gt;"Rename this to &lt;code&gt;processUserData&lt;/code&gt; &lt;em&gt;because we use &lt;code&gt;process[Entity]Data&lt;/code&gt; pattern for all transformation functions&lt;/em&gt;" is a convention with scope. Apply to all transformation functions in this module.&lt;/p&gt;

&lt;p&gt;The difference in the same session:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Correction&lt;/th&gt;
&lt;th&gt;AI's Interpretation&lt;/th&gt;
&lt;th&gt;What Happens Next&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;"Rename this to &lt;code&gt;processUserData&lt;/code&gt;"&lt;/td&gt;
&lt;td&gt;One-off rename&lt;/td&gt;
&lt;td&gt;Applies to that function. Next transformer gets named &lt;code&gt;handleOrderInfo&lt;/code&gt; or whatever feels natural.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"Rename to &lt;code&gt;processUserData&lt;/code&gt; because we use &lt;code&gt;process[Entity]Data&lt;/code&gt; for all transformers"&lt;/td&gt;
&lt;td&gt;Naming convention&lt;/td&gt;
&lt;td&gt;Applies to that function AND next transformer gets named &lt;code&gt;processOrderData&lt;/code&gt; consistently.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;"Because" tells AI: this isn't arbitrary. Here's when it applies.&lt;/p&gt;

&lt;h2&gt;Same-Session Examples&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Without "because":&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You: "Add retry logic here."&lt;/p&gt;

&lt;p&gt;AI applies: Retry on this specific call.&lt;/p&gt;

&lt;p&gt;Later in the session, when AI writes another call to the same service, it might not include retry—especially if you've moved on to other code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With "because":&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You: "Add retry with exponential backoff &lt;em&gt;because the auth service has 2-3 second cold-start latency after being idle&lt;/em&gt;."&lt;/p&gt;

&lt;p&gt;AI now has context: This service is unreliable after idle periods. When it writes the next auth service call, it's much more likely to add retry logic because it understands the problem you're solving.&lt;/p&gt;
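&lt;p&gt;As a minimal sketch, the code that correction produces might look like this (illustrative Python: &lt;code&gt;call_with_backoff&lt;/code&gt; and the stub are hypothetical, and the cold-start timeout is simulated):&lt;/p&gt;

```python
import random
import time

def call_with_backoff(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky call with exponential backoff plus jitter.

    Illustrative only: `call` stands in for the auth-service request that
    can time out during its 2-3 second cold start. Delays grow 0.5s, 1s,
    2s, ... with a little jitter to avoid synchronized retries.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise                      # budget exhausted: surface the error
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))


# Demo: a stub that times out twice (cold start), then succeeds.
attempts = {"n": 0}
def flaky_auth_call():
    attempts["n"] += 1
    if attempts["n"] == 3:
        return "token"
    raise TimeoutError("cold start")

token = call_with_backoff(flaky_auth_call, sleep=lambda _: None)
```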




&lt;p&gt;&lt;strong&gt;Without "because":&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You: "Validate this phone number."&lt;/p&gt;

&lt;p&gt;AI applies: Basic format validation on this input.&lt;/p&gt;

&lt;p&gt;Later, AI handles another phone input with minimal validation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With "because":&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You: "Validate with libphonenumber &lt;em&gt;because we support international formats and need carrier lookups for SMS routing&lt;/em&gt;."&lt;/p&gt;

&lt;p&gt;AI now has context: Phone handling in this system needs international support and carrier data. It's more likely to use libphonenumber consistently because it understands the domain requirements.&lt;/p&gt;




&lt;p&gt;The pattern: "because" gives AI local information it can't infer from its training. Your naming conventions, internal service constraints, and domain-specific requirements are unique to your system. Without "because," AI matches against general patterns from its training data plus the limited context of the current session, not your codebase's conventions.&lt;/p&gt;

&lt;h2&gt;What This Applies To&lt;/h2&gt;

&lt;p&gt;This habit works with chat-based AI tools: ChatGPT, Claude, Copilot Chat, Cursor, and similar. You're having a conversation, and your corrections shape the rest of that conversation.&lt;/p&gt;

&lt;p&gt;For pure autocomplete tools (basic Copilot suggestions), you can't "correct with because" directly. There, code comments and instruction files matter more; that's a different workflow.&lt;/p&gt;

&lt;p&gt;Results also vary by model. AI tools with stronger reasoning may generalize better even without explicit "because." The habit helps across tools, but don't expect identical results everywhere.&lt;/p&gt;

&lt;h2&gt;The Habit&lt;/h2&gt;

&lt;p&gt;When you correct AI, add "because."&lt;/p&gt;

&lt;p&gt;That's the whole habit. No setup required. No files to create. No tools to configure.&lt;/p&gt;

&lt;p&gt;Next time AI does something you need to fix:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don't just say what to change&lt;/li&gt;
&lt;li&gt;Add why&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you don't know why, that's useful information too. Maybe it IS just a preference. But if there's a reason (architecture, security, external constraints, past incidents), say it.&lt;/p&gt;

&lt;p&gt;The test: does AI apply the principle appropriately elsewhere in the same session? If yes, the "because" worked. If not, maybe the reasoning needs to be clearer, or maybe AI's context window has moved on.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;Your next AI coding session:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Work normally until you need to correct something&lt;/li&gt;
&lt;li&gt;Add "because" with your reasoning&lt;/li&gt;
&lt;li&gt;Notice if AI applies the principle to other code in the session&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;No commitment. No system to adopt. Just one word.&lt;/p&gt;

&lt;p&gt;If it helps, keep doing it. If it doesn't, you've lost nothing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This habit helps within a session. If you want patterns to persist across sessions, that requires something more: an instruction file that loads at session start. I'll cover that in a follow-up article.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>coding</category>
    </item>
    <item>
      <title>Ralph Loop Is Innovative. I Wouldn’t Use It for Anything That Matters</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Thu, 29 Jan 2026 06:29:46 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/ralph-loop-is-innovative-i-wouldnt-use-it-for-anything-that-matters-4d99</link>
      <guid>https://dev.to/thesystemistsimon/ralph-loop-is-innovative-i-wouldnt-use-it-for-anything-that-matters-4d99</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Image Photo by Ivan N on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;The outsourcing disaster pattern I’m seeing again, and three questions before you adopt it&lt;/h2&gt;

&lt;p&gt;I've seen this pattern before. Ralph Loop shipped a $50,000 project for $297 in API costs. By every metric executives track (shipping speed, tests passing, API costs), this is a success. The story is everywhere. The innovation is genuine.&lt;/p&gt;

&lt;p&gt;But there's a mechanism I'm worried about, one that mirrors outsourcing's failures. &lt;a href="https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap"&gt;Parts 1&lt;/a&gt; and &lt;a href="https://dev.to/thesystemistsimon/comprehension-debt-how-to-stop-shipping-ai-code-you-dont-understand-2lpn"&gt;2&lt;/a&gt; of this series addressed code we understand inadequately. This part examines code generated without human presence, and why I wouldn't bet on it for production systems.&lt;/p&gt;




&lt;h2&gt;The $297 Success Story&lt;/h2&gt;

&lt;p&gt;The results are genuinely impressive. At a Y Combinator hackathon, teams used Ralph Loop to ship six production repositories overnight. Geoffrey Huntley, the pattern's creator, used it to build an entire programming language called CURSED through extended autonomous iteration. A project that would have cost $50,000 in developer time was completed for $297 in API calls.&lt;/p&gt;

&lt;p&gt;Ralph Loop is elegantly simple: a bash while loop that feeds your prompt to Claude Code, waits for the response, then feeds it back again. When the AI thinks it's done, a stop hook blocks the exit and forces another iteration. This continues until tests pass, a completion signal is detected, or you hit a maximum iteration count. There are now multiple Claude Code plugins that make adoption even easier.&lt;/p&gt;

&lt;p&gt;The genius is in context management. Each iteration starts fresh, re-reading specifications and current file state. Progress persists in your files and git history, not in the AI's context window. When context fills up, the next iteration gets a clean slate. The AI picks up where it left off because the work is in the codebase, not in conversation memory.&lt;/p&gt;
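&lt;p&gt;The control flow described above can be reduced to a few lines. This is an illustrative Python sketch, not the real pattern (which is a bash while loop around the Claude Code CLI with a stop hook): &lt;code&gt;run_iteration&lt;/code&gt; and &lt;code&gt;tests_pass&lt;/code&gt; are hypothetical stand-ins for invoking the agent and running your test suite.&lt;/p&gt;

```python
# Sketch of the Ralph Loop control flow. run_iteration and tests_pass are
# hypothetical stand-ins for invoking Claude Code and running the test suite.

def ralph_loop(run_iteration, tests_pass, max_iterations=30):
    """Re-invoke the agent until tests pass or the iteration budget runs out.

    Each call to run_iteration starts with a fresh context: it re-reads the
    spec and current file state, so progress lives in files and git history,
    not in conversation memory. The stop hook in the real pattern plays the
    role of this loop's "keep going" check.
    """
    for iteration in range(1, max_iterations + 1):
        run_iteration()      # fresh context: no memory of earlier iterations
        if tests_pass():     # the only completion signal the loop observes
            return iteration
    raise RuntimeError("iteration budget exhausted before tests passed")


# Demo with stubs: the "tests" start passing after the third iteration.
state = {"runs": 0}
result = ralph_loop(lambda: state.update(runs=state["runs"] + 1),
                    lambda: state["runs"] >= 3)
```

&lt;p&gt;The point the sketch makes: nothing outside the files survives between calls to &lt;code&gt;run_iteration&lt;/code&gt;, and the loop observes only test results.&lt;/p&gt;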

&lt;p&gt;This is genuine innovation. It solves real problems with context limitations and AI's tendency to stop at "good enough." Teams using it report extraordinary productivity gains.&lt;/p&gt;

&lt;p&gt;But there's something the success metrics don't capture.&lt;/p&gt;




&lt;h2&gt;What Success Metrics Miss&lt;/h2&gt;

&lt;p&gt;Ralph Loop optimizes for outcomes you can observe: tests pass, code ships, API costs stay low.&lt;/p&gt;

&lt;p&gt;It doesn't measure what happens when someone needs to understand the code.&lt;/p&gt;

&lt;p&gt;When AI iterates thirty times overnight to solve a problem, no human witnessed the journey. The code works. But the decisions made during iterations 1 through 29? Gone. Why approach A was tried before approach B? No explanation exists. LLMs don't produce records of their reasoning.&lt;/p&gt;

&lt;p&gt;The trade isn't speed for quality. The code quality might be fine. It compiles. It runs.&lt;/p&gt;

&lt;p&gt;The trade is speed for understanding.&lt;/p&gt;

&lt;p&gt;Accumulated traditional technical debt you can map and repay. But what happens when your organization accumulates debt it doesn't know how to map? More on that shortly.&lt;/p&gt;




&lt;h2&gt;The Outsourcing Parallel&lt;/h2&gt;

&lt;p&gt;The outsourcing wave of 2000-2010 made similar promises: cut costs dramatically, get overnight development while you sleep, focus on core competencies while others handle the implementation.&lt;/p&gt;

&lt;p&gt;The success stories were everywhere. According to industry analyses and retrospectives, companies reported up to 60% cost reductions. Entire products shipped while executives slept.&lt;/p&gt;

&lt;p&gt;Then Phase 2 hit.&lt;/p&gt;

&lt;p&gt;Communication overhead exploded. Knowledge transfer failed. Bug fixes took significantly longer because the offshore team lacked context for how features fit together. Requirements got lost in translation. The hidden costs emerged: rework from misunderstandings, debugging without context, the slow realization that no one in-house understood critical systems anymore.&lt;/p&gt;

&lt;p&gt;The correction came through pain, not foresight. "Insourcing" became a trend. "Hybrid models" emerged as the practical middle ground: outsource well-defined, isolated tasks while keeping a core team who understands the system. Heavy investment in documentation and knowledge transfer became mandatory, not optional.&lt;/p&gt;

&lt;p&gt;Some companies never recovered. They had outsourced past the point of no return. By the time they recognized the problem, no internal team remained who understood their own systems. They faced costly rewrites of code that worked perfectly fine, because working code you can't maintain is a liability, not an asset.&lt;/p&gt;

&lt;p&gt;Ralph Loop is in Phase 1. The success stories dominate. If the pattern holds, Phase 2 is coming.&lt;/p&gt;




&lt;h2&gt;Why Outsourcing's Fix Won't Work Here&lt;/h2&gt;

&lt;p&gt;Here's where the parallel breaks down.&lt;/p&gt;

&lt;p&gt;Outsourcing preserved knowledge somewhere: in the vendor's team. The knowledge wasn't in your building, but it existed. People understood the code. You could ask them questions. When vendor staff changed, there was at least a handoff process, however imperfect.&lt;/p&gt;

&lt;p&gt;Ralph Loop preserves knowledge nowhere.&lt;/p&gt;

&lt;p&gt;The AI has no persistent memory of its reasoning process. When the loop completes, no explanation exists of why the code works the way it does. Not in any human, not in any system. It's not that knowledge transferred poorly. It's that knowledge was never created.&lt;/p&gt;

&lt;p&gt;Call it outsourcing to amnesia.&lt;/p&gt;

&lt;p&gt;The corrections that worked for outsourcing can't transfer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retained core teams?&lt;/strong&gt; There's no human in the Ralph Loop to retain. The whole point is autonomous operation while developers do other work or sleep.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better documentation during development?&lt;/strong&gt; We'll examine this objection in detail, but AI can't document its reasoning in the way humans can. More on this shortly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selective outsourcing for defined tasks?&lt;/strong&gt; Possibly, but this limits Ralph Loop to contexts where understanding doesn't matter. The value proposition was autonomous work on real problems, not just throwaway scripts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hybrid models keeping critical work in-house?&lt;/strong&gt; There's no equivalent "in-house" for AI iterations. The human is either in the loop (defeating the autonomy) or not (accepting the knowledge gap).&lt;/p&gt;

&lt;p&gt;The fix that worked for outsourcing required humans preserving understanding somewhere in the system. Ralph Loop's architecture explicitly removes humans from the loop. The correction mechanism doesn't exist.&lt;/p&gt;




&lt;h2&gt;
  
  
  "Can't We Just Make AI Document?"
&lt;/h2&gt;

&lt;p&gt;A reasonable objection: add a documentation step to each iteration. AI explains what it did and why. Store it in a log file. Now we have a reasoning trail.&lt;/p&gt;

&lt;p&gt;This misunderstands both how LLMs work and what documentation actually does.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI explanations aren't AI reasoning.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you ask Claude "why did you write this code?", you get post-hoc rationalization. The AI generates a plausible-sounding explanation after the fact. It's not a trace of actual decision process. LLMs don't maintain a queryable record of their generation process. Research shows these explanations correlate weakly with actual generation causes. You're documenting a plausible story, not the real cause.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume defeats purpose.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Thirty iterations' worth of explanations adds up to thousands of words of documentation per task. Who reads it? Not the developer who wanted autonomous overnight work. They're sleeping or doing other things. Not the code reviewer with thirty PRs in their queue. Not the future maintainer skimming for the one paragraph that matters. Documentation without readers is noise, not knowledge.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The most valuable knowledge can't be captured.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Could structured prompting force AI to generate decision trees at each iteration? Perhaps, but the output would still be post-hoc rationalization, not actual reasoning traces.&lt;/p&gt;

&lt;p&gt;The critical knowledge is what alternatives were considered, what trade-offs were evaluated, why approach X was chosen over Y and Z. LLMs don't track "considered alternatives" internally. When you prompt for them, you get made-up answers that sound plausible but weren't actually weighed during generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The efficiency trade-off kills the value proposition.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ralph Loop's value is autonomy: ship while you sleep, minimal human involvement. Meaningful documentation requires human verification. Is this explanation accurate? Does it capture the real reasoning? Once you're verifying AI's explanations, you're back in the loop. You've reinvented "coding with AI assistance," which already exists without the elaborate loop infrastructure.&lt;/p&gt;

&lt;p&gt;Yes, you can add "documentation updated" to your success criteria. But documentation generated after 30 iterations is summary, not reasoning trace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The fundamental problem:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Documentation is a transfer mechanism. It moves knowledge from one human to another, or from past-self to future-self. Ralph Loop's problem is knowledge non-creation. No human ever understood the code because no human was there when the decisions were made. There's nothing to transfer.&lt;/p&gt;

&lt;p&gt;You can add documentation or keep autonomous efficiency. If you add enough documentation to genuinely preserve understanding, you've eliminated the autonomy. If you keep the autonomy, the documentation is unverified noise providing false confidence.&lt;/p&gt;

&lt;p&gt;Pick one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Knowledge That Was Never Created
&lt;/h2&gt;

&lt;p&gt;The distinction that matters: Ralph Loop's problem is knowledge non-creation, not knowledge loss.&lt;/p&gt;

&lt;p&gt;With AI-assisted coding, a developer is at least present. They might not fully understand every line (that's the comprehension debt problem from Parts 1 and 2), but they have the opportunity to pause, question, and learn. The prevention strategies exist because there's a human in the loop who could apply them.&lt;/p&gt;

&lt;p&gt;When Ralph Loop generates code, no human is present during generation. The AI doesn't "understand" in the way humans do. It predicts tokens. When the loop completes, understanding doesn't exist anywhere. Not in any human (none were present). Not in any accessible form (the AI produces code, not explanations of its reasoning).&lt;/p&gt;

&lt;p&gt;Better documentation or knowledge management can't solve this. There's nothing to transfer. The knowledge gap can't be recovered through effort because there's nothing to recover.&lt;/p&gt;

&lt;p&gt;Consider debugging. Normally, debugging human-written code is also learning. You discover the original developer's reasoning: "Ah, they structured it this way because..." Ralph Loop code has no original reasoning to discover. You're not reverse-engineering intent. You're inventing intent, creating a mental model that never existed, hoping it matches what the code actually does.&lt;/p&gt;

&lt;p&gt;That's a fundamentally different cognitive task. And it's much harder.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Your Safeguards Won't Work Here
&lt;/h2&gt;

&lt;p&gt;This knowledge non-creation is why the strategies from Part 2 change character when applied to Ralph Loop.&lt;/p&gt;

&lt;p&gt;In Part 1 of this series, I explored the comprehension debt crisis: the widening gap between code we ship with AI assistance and code we actually understand. Part 2 covered prevention strategies: comprehension scoring, selective acceptance, forcing understanding through explanation.&lt;/p&gt;

&lt;p&gt;Those articles assumed knowledge existed somewhere initially. A developer used AI assistance, but they were present during generation. They could score their comprehension. They could force themselves to explain. The practices worked because a human was in the loop who could potentially understand. They just needed the discipline to actually do so.&lt;/p&gt;

&lt;p&gt;Ralph Loop breaks that assumption.&lt;/p&gt;

&lt;p&gt;You can apply Part 2's strategies to Ralph Loop output. Score your comprehension. Force yourself to explain. Reject code you don't understand. The strategies work.&lt;/p&gt;

&lt;p&gt;But they change character. Applied during AI-assisted development, they're prevention (integrated into your workflow, low cost per decision). Applied after Ralph Loop completes, they're remediation (a separate review phase for code that already exists).&lt;/p&gt;

&lt;p&gt;Prevention is already hard (Parts 1 and 2 covered why). Remediation is harder. When the code already exists and works, carving out time to deeply understand it feels like a luxury. The code ships. Understanding never happens.&lt;/p&gt;

&lt;p&gt;And even if you commit to thorough remediation, you face a time paradox: deeply understanding a large Ralph Loop output could take longer than writing the code incrementally with AI assistance. When you're present during generation, understanding is distributed across the work. After Ralph Loop completes, you're reverse-engineering a large volume of code all at once. The cognitive load is higher, the chunks are bigger, and the efficiency gain that justified using Ralph Loop disappears.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Capability Loss Pattern
&lt;/h2&gt;

&lt;p&gt;Teams adopt Ralph Loop for understandable reasons. The incentives are clear: ship faster, spend less, work autonomously. What's observable (code ships, builds succeed, costs are low) all favors adoption. What's unobservable (debugging burden, maintenance difficulty, future costs) all opposes it.&lt;/p&gt;

&lt;p&gt;This is a rational response to the available information. Add competitive pressure between teams (those not using Ralph Loop appear slower) and you get race-to-the-bottom dynamics even when risks are understood. The visible costs are immediate and measurable. The hidden costs are delayed and hard to quantify.&lt;/p&gt;

&lt;p&gt;But the pattern I've seen before isn't just accumulated debt. Debt implies eventual repayment. The deeper risk is organizational capability loss.&lt;/p&gt;

&lt;p&gt;The success stories rarely include six-month maintenance reports. The pattern is new enough that long-term data is scarce, and what exists hasn't been published. Adoption is outpacing evidence. That's precisely the danger.&lt;/p&gt;

&lt;p&gt;Play out the scenario: Ralph Loop becomes standard practice. Senior engineers who remember writing and understanding code leave or retire. New engineers join who've only worked with AI-generated codebases. The organization slowly loses the ability to understand its own systems. Eventually, no one remembers that understanding was once possible.&lt;/p&gt;

&lt;p&gt;This happened with aggressive outsourcing. Companies outsourced until no internal capability remained. When they needed to insource, they couldn't. Nobody understood the systems well enough to bring them back. They faced massive rewrites of functioning code, or lived with systems no one could safely modify.&lt;/p&gt;

&lt;p&gt;With Ralph Loop, if the pattern repeats, the timeline compresses. Outsourcing's capability loss took years. AI-generated codebases could become incomprehensible in months. The debt wouldn't just accumulate. You'd be losing the ability to repay it.&lt;/p&gt;




&lt;h2&gt;
  
  
  When the Trade-off Might Be Acceptable
&lt;/h2&gt;

&lt;p&gt;Despite my skepticism, Ralph Loop isn't always the wrong choice. Some contexts genuinely don't require long-term understanding:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prototypes you'll delete after validating the concept.&lt;/strong&gt; If you're testing feasibility, not building for production, speed matters more than comprehension. Throw it away when you're done. Actually throw it away; don't let it creep into production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Genuinely throwaway code.&lt;/strong&gt; Scripts with known expiration dates. One-time data migrations. Tools you'll delete next month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legacy code you're planning to sunset.&lt;/strong&gt; If you're building a replacement system, let Ralph Loop maintain the old one while you focus on the new. The code's days are numbered anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Well-isolated modules with comprehensive tests.&lt;/strong&gt; If the boundary is clear, the tests are exhaustive, and you can treat it as a black box indefinitely, understanding matters less. (But be honest about whether this actually describes your situation.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When you have time to study the output.&lt;/strong&gt; Ralph Loop overnight, then spend the next day reading and understanding the code before it matters. This works if you actually do it, but most teams under deadline pressure skip the study phase.&lt;/p&gt;

&lt;p&gt;Geoffrey Huntley, who created Ralph Loop, emphasizes that "operator skill matters." His methodology includes Plan.md files for specifications and Agents.md files to capture learnings across iterations. These help, but they capture intent and observed failures, not the AI's iteration-by-iteration reasoning. The prompt quality determines the outcome. Clear specifications, explicit success criteria, proper test coverage: these aren't optional decorations. They're mandatory safety mechanisms.&lt;/p&gt;

&lt;p&gt;Ralph Loop is a power tool. Power tools require skill, and the consequences of misuse aren't immediately visible.&lt;/p&gt;




&lt;h2&gt;
  
  
  When the Trade-off Matters Most
&lt;/h2&gt;

&lt;p&gt;The test is simple: if you'd eventually want a human to explain this code to a new team member, I'd be cautious about Ralph Loop.&lt;/p&gt;

&lt;p&gt;This includes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core business logic.&lt;/strong&gt; The code that makes your product your product. The algorithms, workflows, and domain rules that differentiate you. This needs long-term maintenance by humans who understand it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security-sensitive code.&lt;/strong&gt; Authentication, authorization, payment processing, data handling. You need an audit trail of decisions, not just a working implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team codebases.&lt;/strong&gt; Code that multiple people need to understand and modify. The knowledge gap multiplies with each person who didn't witness creation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Systems expected to live for years.&lt;/strong&gt; If this codebase will still exist in 2030, someone will need to understand it. Will anyone be able to?&lt;/p&gt;




&lt;h2&gt;
  
  
  Three Questions Before Deploying
&lt;/h2&gt;

&lt;p&gt;Outsourcing's correction came through pain. Companies didn't change practices because of thought leadership articles warning them. They changed because projects failed, systems broke, and debugging became impossible. The lessons were learned organization by organization, often after significant damage.&lt;/p&gt;

&lt;p&gt;If Ralph Loop follows the same pattern (and I think it might), the correction will come the same way. Warnings will be dismissed as Luddism. Teams will adopt because visible metrics favor it. The hidden costs will emerge gradually, then suddenly. Some organizations will learn from others' experience. Many will insist on learning from their own.&lt;/p&gt;

&lt;p&gt;Ralph Loop works. The code ships, tests pass, projects complete. But three months from now, can anyone maintain what shipped?&lt;/p&gt;

&lt;p&gt;Speed without understanding is borrowing from your future self. Ralph Loop makes the borrowing invisible, the interest rate hard to estimate, and the debt structure unclear.&lt;/p&gt;

&lt;p&gt;I could be wrong. Maybe the pattern won't repeat. Maybe the costs will be manageable. But I've seen enough similar mechanisms create delayed costs that I wouldn't bet production code on it.&lt;/p&gt;

&lt;p&gt;That's not an argument against ever using it. It's an argument for using it consciously, in contexts where the debt won't matter, and being honest about which contexts those actually are.&lt;/p&gt;

&lt;p&gt;If you wouldn't hire a contractor to build a load-bearing wall while you slept, hand you keys, and say "it's structurally sound but I can't explain why," apply the same skepticism here.&lt;/p&gt;

&lt;p&gt;Before using Ralph Loop (or before deploying its output), ask:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Does this codebase have a short, defined lifespan?&lt;/li&gt;
&lt;li&gt;Do automated tests catch architectural mistakes, not just syntax errors?&lt;/li&gt;
&lt;li&gt;Will someone review and understand the output before it matters?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;If any answer is "no" or "I don't know," proceed with caution.&lt;/strong&gt;&lt;/p&gt;
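&lt;p&gt;As a rough illustration, the three questions can be encoded as an explicit gate. Everything here (the function name, the returned strings, treating "I don't know" as &lt;code&gt;None&lt;/code&gt;) is hypothetical, not part of any Ralph Loop tooling:&lt;/p&gt;

```python
# Hypothetical pre-deployment gate for Ralph Loop output.
# The three checklist questions become explicit booleans;
# None stands in for "I don't know".

def ralph_loop_gate(short_lifespan, tests_catch_architecture, output_reviewed):
    """Return 'proceed' only when every answer is an explicit True."""
    answers = [short_lifespan, tests_catch_architecture, output_reviewed]
    if all(a is True for a in answers):
        return "proceed"
    return "proceed with caution"

print(ralph_loop_gate(True, True, True))   # all three answered yes
print(ralph_loop_gate(True, None, True))   # "I don't know" counts as caution
```

&lt;p&gt;The point of the sketch: an unanswered question is treated the same as a "no". Uncertainty is not a pass.&lt;/p&gt;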




&lt;p&gt;&lt;em&gt;If your team is evaluating autonomous AI coding patterns, bring these three questions to your next architecture meeting. The conversation is worth having before the costs become clear.&lt;/em&gt;&lt;/p&gt;




</description>
      <category>ai</category>
      <category>programming</category>
      <category>codequality</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>AI revealed that typing was never the actual bottleneck. Understanding was.
How much AI slop code do you think is actually because of this?
https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Sun, 25 Jan 2026 12:01:38 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/ai-revealed-that-typing-was-never-the-actual-bottleneck-understanding-was-how-much-ai-slop-code-148l</link>
      <guid>https://dev.to/thesystemistsimon/ai-revealed-that-typing-was-never-the-actual-bottleneck-understanding-was-how-much-ai-slop-code-148l</guid>
      <description>&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap" class="crayons-story__hidden-navigation-link"&gt;AI Writes Code 50% Faster. You Understand It 0% Better. It’s Called Comprehension Debt.&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/thesystemistsimon" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3727425%2F7750e974-a417-48bd-9f2e-9ef323d80dc0.png" alt="thesystemistsimon profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/thesystemistsimon" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Simon Wang
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Simon Wang
                
              
              &lt;div id="story-author-preview-content-3192383" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/thesystemistsimon" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3727425%2F7750e974-a417-48bd-9f2e-9ef323d80dc0.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Simon Wang&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jan 23&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap" id="article-link-3192383"&gt;
          AI Writes Code 50% Faster. You Understand It 0% Better. It’s Called Comprehension Debt.
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/coding"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;coding&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/comprehension"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;comprehension&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/technicaldebt"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;technicaldebt&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
            &lt;a href="https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            13 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;




</description>
    </item>
    <item>
      <title>Managing Comprehension Debt: How to Stop Shipping AI Code You Don't Understand</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Sun, 25 Jan 2026 08:19:48 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/comprehension-debt-how-to-stop-shipping-ai-code-you-dont-understand-2lpn</link>
      <guid>https://dev.to/thesystemistsimon/comprehension-debt-how-to-stop-shipping-ai-code-you-dont-understand-2lpn</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Image Photo by Vitaly Gariev on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A scoring system and team practices that make invisible debt visible before it compounds
&lt;/h2&gt;

&lt;p&gt;You ship AI code you don't understand. Your team does too. That's comprehension debt—and it's accumulating faster than you think.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap"&gt;Part 1&lt;/a&gt; explored why this happens. This article covers how to stop it: a scoring system and team practices that make invisible debt visible before it compounds.&lt;/p&gt;




&lt;h2&gt;
  
  
  This Only Works If...
&lt;/h2&gt;

&lt;p&gt;These practices require organizational support to maximize the gain: changing metrics, accepting short-term velocity drops, creating psychological safety. In my &lt;a href="https://itnext.io/thirty-years-five-technologies-one-failure-pattern-from-lean-to-ai-f3dc3a22a5d2" rel="noopener noreferrer"&gt;analysis of AI adoption patterns&lt;/a&gt;, only 5-10% of organizations successfully implement systematic change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three ways to use this guide:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-performing org?&lt;/strong&gt; Implement directly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Building change capacity?&lt;/strong&gt; Use as the vision to advocate upward.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Org resists all change?&lt;/strong&gt; Focus on building change capacity first. These practices won't overcome organizational resistance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Individual practices don't overcome organizational barriers. But within organizations that &lt;em&gt;can&lt;/em&gt; change, these are the practices that matter.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The challenge is using AI without sacrificing the understanding that makes code maintainable. &lt;/p&gt;

&lt;p&gt;Every technique in this article serves one principle: &lt;strong&gt;make incomprehension visible before code ships, not after.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What You Control
&lt;/h3&gt;

&lt;p&gt;Individual developers face real constraints: velocity metrics, sprint commitments, competing with peers. These practices acknowledge those constraints while building understanding where it matters most.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Score your comprehension.&lt;/strong&gt; This sounds bureaucratic, but it takes five seconds and changes behavior. Before accepting AI-generated code, rate your understanding on a simple scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;5:&lt;/strong&gt; You could teach this to a colleague right now.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;4:&lt;/strong&gt; You understand the design decisions and could modify the code confidently.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;3:&lt;/strong&gt; You get the main approach but would need time on edge cases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2:&lt;/strong&gt; You know what it does but not why it's structured this way.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;1:&lt;/strong&gt; You have no idea how this works.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you force yourself to assign a number, you can't pretend you understand something you don't.&lt;/p&gt;

&lt;p&gt;Don't use this as enforcement (that creates career risk). Use it as &lt;strong&gt;debt tracking&lt;/strong&gt;. Scores of 1-2 are comprehension debt you're consciously taking. Document it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Comprehension score: 2
# I don't fully understand the caching eviction strategy here.
# AI generated this based on "implement LRU cache" prompt.
# Future maintainer: review LRU vs LFU tradeoffs before modifying.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Over time, notice your patterns. If you consistently ship API endpoint code at score 2, you're building understanding debt in an area you touch frequently. That's data you can act on.&lt;/p&gt;
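&lt;p&gt;One way to surface those patterns is a small script that scans for the score comments and groups them by file. This is a minimal sketch under stated assumptions: the comment format matches the example above exactly, and the file paths and demo contents are invented for illustration:&lt;/p&gt;

```python
import re

# Hypothetical helper: find "# Comprehension score: N" comments
# and aggregate them per file, so low-scoring hotspots become visible.
SCORE_RE = re.compile(r"#\s*Comprehension score:\s*(\d)")

def collect_scores(sources):
    """sources: dict mapping file path to file text.

    Returns a dict mapping path to the list of scores found in it.
    """
    found = {}
    for path, text in sources.items():
        scores = [int(m) for m in SCORE_RE.findall(text)]
        if scores:
            found[path] = scores
    return found

# Invented demo data standing in for real source files.
demo = {
    "api/endpoints.py": "# Comprehension score: 2\ndef handler(): ...",
    "billing/invoice.py": "# Comprehension score: 4\n# Comprehension score: 1\n",
}
print(collect_scores(demo))   # {'api/endpoints.py': [2], 'billing/invoice.py': [4, 1]}
```

&lt;p&gt;Run weekly over a real tree, this turns "I think I keep shipping low-score API code" into a number you can act on.&lt;/p&gt;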

&lt;p&gt;&lt;strong&gt;2. Apply understanding selectively.&lt;/strong&gt; You can't deeply understand everything. Core infrastructure, security, and payment processing should require score 3+ before shipping. Boilerplate, test scaffolding, and configuration can ship at score 2 (you understand the pattern even if you didn't write every line). Prototypes and throwaway code can ship at score 1 (you're explicitly trading understanding for speed).&lt;/p&gt;
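&lt;p&gt;Those thresholds can be written down as a policy table rather than left implicit. The categories and cutoffs below mirror the paragraph above; the names themselves are illustrative, not a standard:&lt;/p&gt;

```python
# Hypothetical shipping policy: minimum comprehension score (1..5 scale)
# required before code in each category ships.
MIN_SCORE = {
    "core_infrastructure": 3,  # security, payments: understand deeply
    "boilerplate": 2,          # pattern understood, not every line
    "prototype": 1,            # speed consciously traded for understanding
}

def can_ship(category, score):
    """True when the score meets the category's minimum."""
    return score in range(MIN_SCORE[category], 6)

print(can_ship("core_infrastructure", 2))  # False: below the bar
print(can_ship("prototype", 1))            # True: debt taken consciously
```

&lt;p&gt;Writing the table down is the point: the trade-off stops being a per-PR mood and becomes a decision the team actually made.&lt;/p&gt;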

&lt;p&gt;The key is conscious choice. The problem isn't that comprehension debt exists; it's accumulating it unconsciously in systems that will live for years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Force comprehension through writing.&lt;/strong&gt; After accepting AI-generated code, write 2-3 sentences explaining why it's structured that way:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"This uses a token bucket rate limiter because we need burst tolerance (user can make 10 requests instantly but limited to 100/hour overall). Alternative would be sliding window (stricter but more complex to implement)."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you can't write this explanation, you don't understand it well enough. The writing forces clarity. No teammate coordination required. This works async, at your own pace, with zero career risk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The reality of time pressure.&lt;/strong&gt; These practices slow you down. That's the point. Comprehension debt accumulates because we prioritize speed over understanding. The practices that prevent it require consciously choosing understanding over velocity, at least some of the time.&lt;/p&gt;

&lt;p&gt;The pressure to ship is real. But the velocity you gain from AI is borrowed from future maintenance capacity. Teams shipping fast with heavy AI assistance often hit a wall after 6-12 months. Velocity metrics look great initially. Then every feature takes longer than estimated because teams spend half their time understanding code they shipped months earlier.&lt;/p&gt;

&lt;p&gt;Teams adopting these practices may see initial velocity drops. But predictability improves dramatically. The choice isn't between fast with AI or slow without AI. It's between fast now and slow later, or slightly slower now and consistently fast ongoing.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to Accept the Debt
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Sometimes comprehension debt is the right trade-off. But make it a &lt;em&gt;conscious&lt;/em&gt; trade-off, not a default.&lt;/strong&gt; Like &lt;a href="https://itnext.io/technical-debt-isnt-about-discipline-it-s-about-compound-rates-5813a7cfcc9a" rel="noopener noreferrer"&gt;traditional technical debt&lt;/a&gt;, it's a tool. The question is: Are you taking it &lt;strong&gt;consciously&lt;/strong&gt; or &lt;strong&gt;unconsciously&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accept comprehension debt when:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You're prototyping.&lt;/strong&gt; Speed matters more than understanding. You might throw this code away. Score 1-2 is fine, just don't let the prototype become production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The code is genuinely temporary.&lt;/strong&gt; If it has a known sunset date, comprehension debt is acceptable risk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You're in spike mode.&lt;/strong&gt; Learning whether an approach is viable matters more than understanding every implementation detail.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Don't accept comprehension debt when:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;This is core infrastructure.&lt;/strong&gt; Authentication, payment processing, data integrity: understand these deeply before shipping.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You're building for long-term maintenance.&lt;/strong&gt; If this code will be modified frequently, comprehension debt compounds into impossibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;You're the only person who knows the domain.&lt;/strong&gt; Your departure creates a succession crisis (remember Sam's story from Part 1).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;The difference between traditional technical debt and comprehension debt:&lt;/strong&gt; Traditional technical debt is conscious ("I'm taking shortcuts in code structure"). Comprehension debt is usually unconscious ("I shipped code I don't understand"). Make it conscious. Document it. Track it. Review it quarterly: "Where did we accumulate comprehension debt? Was it worth it?"&lt;/p&gt;

&lt;p&gt;Once you're comfortable tracking your own comprehension, the team-level gaps become visible. You'll notice when code reviews miss understanding, when estimates ignore comprehension time, when retrospectives skip the "do we understand what we shipped?" question. That's when these practices become relevant.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You Can't Do Alone
&lt;/h3&gt;

&lt;p&gt;These practices require some organizational support but work within existing constraints. You don't need executive approval to start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Change code review culture.&lt;/strong&gt; This is the highest-leverage team practice. Reviews should verify understanding, not just correctness. A reviewer should be able to say "I understand why this is structured this way," not just "LGTM." Authors should expect to explain design choices, not just show passing tests. And "I don't understand this" becomes a valid reason to block a PR, not just "this has bugs."&lt;/p&gt;

&lt;p&gt;Start with one reviewer modeling this consistently. Cultural shifts begin with consistent individual behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Build understanding into estimates.&lt;/strong&gt; This is the most practical lever managers have. Don't estimate AI-assisted tasks at AI speed. Estimate at "AI + comprehension" speed. A feature that takes 3 hours with AI should be estimated as 5 hours. The extra 2 hours is for understanding, documentation, and explanation. Frame it to product as "investing in maintainability," not "going slower."&lt;/p&gt;

&lt;p&gt;Track this over 2-3 months. Show that features estimated this way have lower post-launch maintenance costs. That's your business case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Track maintenance burden by comprehension score.&lt;/strong&gt; When a bug takes 2 days to fix because nobody understood the code, note: "This was originally shipped at comprehension score 1." After 3 months, you'll have data showing that score-1 code has 3x the maintenance cost. That's evidence for changing practices, not just appeals to principle.&lt;/p&gt;
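&lt;p&gt;A lightweight way to collect that evidence is a plain log of bug fixes tagged with the comprehension score the code shipped at. This is a hypothetical sketch, not a real tool; the field names and numbers are illustrative only:&lt;/p&gt;

```python
from collections import defaultdict

# Each entry: (comprehension score at ship time, hours spent on the fix).
# Illustrative data only.
fix_log = [
    (1, 16.0),  # bug in code nobody understood: two days to fix
    (1, 12.0),
    (3, 4.0),   # well-understood module: same-day fix
    (3, 5.0),
]

def mean_fix_hours(log):
    """Average fix time, grouped by the comprehension score at ship time."""
    by_score = defaultdict(list)
    for score, hours in log:
        by_score[score].append(hours)
    return {score: sum(h) / len(h) for score, h in by_score.items()}

print(mean_fix_hours(fix_log))  # prints {1: 14.0, 3: 4.5}
```

&lt;p&gt;Even a spreadsheet works; the point is that after a few months the ratio between the rows is your business case.&lt;/p&gt;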

&lt;p&gt;&lt;strong&gt;4. Create psychological safety for knowledge gaps.&lt;/strong&gt; Teams won't surface comprehension debt if it's career-risky. Never punish "I don't fully understand this." Reward surfacing gaps early, before they cause outages. Model vulnerability: "I don't understand this either, let's learn together." This is cultural, not structural. You control team culture even if you don't control company metrics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Protect your team from velocity comparison.&lt;/strong&gt; When asked "Why does your team ship slower than Team B?", have data ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Our maintenance costs are 40% lower"&lt;/li&gt;
&lt;li&gt;"Our feature modification time is 2x faster"&lt;/li&gt;
&lt;li&gt;"Our bugs-per-feature rate is half theirs"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You're not slower. You're investing differently. But you need data to make this argument.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Monthly comprehension retrospectives.&lt;/strong&gt; Managers create the space; teams run it without managers present. Once a month, the team asks privately: "What code did we ship that nobody fully understands? Which gaps should we fix vs. consciously accept? Are we accumulating debt faster than we're paying it down?" This must be psychologically safe. No blame. No performance review material.&lt;/p&gt;

&lt;h3&gt;
  
  
  Starting Small
&lt;/h3&gt;

&lt;p&gt;Start with individual awareness. Use the comprehension scale on your own work for a month or two before proposing team changes. When you have data showing patterns (where you consistently score low, which areas have higher maintenance burden), you have credibility to suggest experiments.&lt;/p&gt;

&lt;p&gt;Pick one high-risk area, try "score 3+ required" for that area only, and track results. Let evidence drive expansion, not enthusiasm.&lt;/p&gt;

&lt;p&gt;If your organization won't move beyond individual practice, you've still improved. Conscious tracking beats unconscious accumulation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You're Actually Choosing
&lt;/h2&gt;

&lt;p&gt;The social contract of software development used to be simple: if you wrote it, you understood it. The act of writing guaranteed comprehension.&lt;/p&gt;

&lt;p&gt;AI broke that guarantee. Now you can ship code you don't understand. This isn't AI's fault; it's a tool. The question is whether we adapt our practices to maintain understanding or whether we let comprehension debt accumulate until codebases become unmaintainable.&lt;/p&gt;

&lt;p&gt;For developers who learned to code with AI assistance, this contract never existed. If you're in this position, you face a unique challenge: building understanding retroactively while continuing to produce. The practices in this article (explanation confidence scoring, delayed explanation, deliberate manual coding for core logic) aren't just about preventing debt. They're about building the foundational understanding that earlier generations developed by necessity.&lt;/p&gt;

&lt;p&gt;Teams struggle with this. The temptation to accept everything AI suggests is strong. The velocity gains are real. Management loves the metrics. But six months later, the team can't move fast because they don't understand their own codebase.&lt;/p&gt;

&lt;p&gt;Comprehension debt is more dangerous than traditional technical debt because it's invisible. Your tests pass. Your users are happy. Your velocity metrics look great. But your team is accumulating a maintenance bomb. When it explodes, you'll discover you've been shipping code without understanding.&lt;/p&gt;

&lt;p&gt;The GitClear data shows the symptoms: more churn, less refactoring, more time fixing recent code. The Uplevel data shows the bug rate climbing. These aren't future predictions. This is happening now, in production systems, across the industry.&lt;/p&gt;

&lt;p&gt;The uncomfortable truth is that AI can make us more productive, but only if we resist its most seductive feature: the ability to ship code faster than we can understand it. The 10x engineer in the AI era isn't the one who accepts every suggestion. It's the one who accepts only what they comprehend.&lt;/p&gt;

&lt;p&gt;This means saying no to velocity. It means taking time to understand. It means treating "I don't fully understand this" as a blocking issue, not a nice-to-have. It means changing code review culture from "does it work?" to "do we understand it?"&lt;/p&gt;

&lt;p&gt;In five years, we'll look back on this moment as critical. Either we learned to maintain understanding while using AI, or we built unmaintainable systems at unprecedented scale. The difference will be whether we valued understanding over velocity.&lt;/p&gt;

&lt;p&gt;Your codebase is accumulating comprehension debt right now. Every AI-generated function you don't fully understand. Every pattern you copied without grasping why. Every algorithm that works but you can't explain. It compounds silently until someone needs to modify the code.&lt;/p&gt;

&lt;p&gt;Then you'll discover the cost of shipping faster than you can comprehend.&lt;/p&gt;




&lt;h2&gt;
  
  
  Research Citations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Oregon State University Study (2025):&lt;/strong&gt;&lt;br&gt;
Qiao, Y., Hundhausen, C., Haque, S., &amp;amp; Shihab, M. I. H. (2025). Comprehension-performance gap in GenAI-assisted brownfield programming: A replication and extension. arXiv preprint arXiv:2511.02922.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitClear Analysis:&lt;/strong&gt;&lt;br&gt;
GitClear. (2024). Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uplevel Research:&lt;/strong&gt;&lt;br&gt;
Uplevel. (2024). Analysis of GitHub Copilot Impact on Developer Productivity and Code Quality.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>comprehension</category>
      <category>technicaldebt</category>
    </item>
    <item>
      <title>AI Writes Code 50% Faster. You Understand It 0% Better. It’s Called Comprehension Debt.</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Fri, 23 Jan 2026 04:45:25 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap</link>
      <guid>https://dev.to/thesystemistsimon/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-2eap</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Image Photo by Vitaly Gariev on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The invisible cost that metrics don’t capture
&lt;/h2&gt;

&lt;p&gt;You shipped 2,000 lines of authentication code you don't understand. &lt;/p&gt;

&lt;p&gt;Not because you're junior. Not because the code is bad. Because GitHub Copilot wrote it in 30 seconds and you accepted it without building mental models.&lt;/p&gt;

&lt;p&gt;Tests passed. Code review passed. Production's fine. Eight months, zero bugs.&lt;/p&gt;

&lt;p&gt;But now you need to add OAuth support, and you're staring at code that works perfectly but you can't modify safely. You're reverse-engineering your own work.&lt;/p&gt;

&lt;p&gt;This is comprehension debt. And the gap between your code's velocity and your comprehension is growing exponentially.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Research Shows
&lt;/h2&gt;

&lt;p&gt;Researchers at Oregon State University quantified this precisely. In a controlled study, 18 computer science graduate students completed brownfield programming tasks (adding features to codebases they didn't write). Half used GitHub Copilot, half didn't. (The study used students, but the pattern matches what practitioners report in professional settings.)&lt;/p&gt;

&lt;p&gt;The results revealed the core of comprehension debt. Students using Copilot completed tasks nearly 50% faster and passed significantly more tests. Major productivity gains. But when researchers measured actual code comprehension (could they explain how the code worked, modify it effectively, debug issues), the scores were identical. Nearly 50% faster output. Zero comprehension gain.&lt;/p&gt;

&lt;p&gt;The researchers observed what they called "a fundamental shift in how developers engage with programming." The workflow changed from "read codebase → understand system → implement feature" to "describe need → accept AI suggestion → move on." That missing struggle (those hours debugging, those moments of confusion) is where understanding builds.&lt;/p&gt;

&lt;p&gt;In exit interviews, students using Copilot reported feeling productive but uncertain. They shipped working code but worried they didn't understand how or why it worked. This is comprehension debt forming in real-time: output without comprehension.&lt;/p&gt;

&lt;h2&gt;
  
  
  How This Plays Out in Real Teams
&lt;/h2&gt;

&lt;p&gt;This pattern plays out consistently. Consider an authentication system built by a senior developer with 10 years' experience. Comprehensive test coverage. Clean code that follows all team standards. Passed code review by two other senior developers. In production for eight months. Zero bugs reported.&lt;/p&gt;

&lt;p&gt;Everyone did their job correctly. The problem isn't incompetence or poor practices.&lt;/p&gt;

&lt;p&gt;The problem: Nobody, including the senior developer who built it, can explain why it's designed this way. Why token buckets over sliding windows? Why this specific refresh token strategy? Why these database queries?&lt;/p&gt;

&lt;p&gt;The universal response: "I don't know. GitHub Copilot suggested it, tests passed, it works."&lt;/p&gt;

&lt;p&gt;This isn't a story about a bad team. It's about good teams making rational individual decisions that produce collective disaster.&lt;/p&gt;

&lt;p&gt;Now the team needs to add OAuth support. They're staring at 2,000 lines of code nobody ever understood. Not because they lacked discipline. Because the incentives made understanding optional and velocity mandatory.&lt;/p&gt;

&lt;h2&gt;
  
  
  About These Stories
&lt;/h2&gt;

&lt;p&gt;The scenarios in this article are composite illustrations, combining common patterns observed across AI adoption experiences. While the names and specific details are constructed, the dynamics they illustrate are real and representative. If you recognize your team in these patterns, you're not alone. These patterns emerge reliably because of how AI changes the relationship between writing and understanding code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Comprehension Debt Is
&lt;/h2&gt;

&lt;p&gt;For fifty years, writing code and understanding code were the same activity. You couldn't write code without understanding it. The act of typing forced comprehension. You thought through the logic, considered edge cases, debugged the mental model.&lt;/p&gt;

&lt;p&gt;AI broke that coupling.&lt;/p&gt;

&lt;p&gt;You can now accept a 200-line function from Copilot in thirty seconds. You scan it, looks reasonable, tests pass, you ship it. But you didn't write it line by line. You didn't think through every branch. You didn't consider why this approach over alternatives.&lt;/p&gt;

&lt;p&gt;The code works perfectly. It's just you might not fully understand it.&lt;/p&gt;

&lt;p&gt;That gap between "code that works" and "code I understand" is comprehension debt. And it &lt;a href="https://itnext.io/technical-debt-isnt-about-discipline-it-s-about-compound-rates-5813a7cfcc9a" rel="noopener noreferrer"&gt;compounds&lt;/a&gt; faster than traditional technical debt.&lt;/p&gt;

&lt;p&gt;Traditional technical debt is code you understand but know is wrong. A quick hack, a shortcut, a placeholder you meant to fix. It's conscious. You took the debt deliberately (usually for speed). When it breaks, you know where to look because you understand the implementation.&lt;/p&gt;

&lt;p&gt;Comprehension debt is code that works but you don't understand why. It's unconscious accumulation. You didn't mean to take this debt; it just happened while you were moving fast with AI assistance. When it breaks, you can't easily fix it because you don't understand the implementation. You have to reverse-engineer your own codebase.&lt;/p&gt;

&lt;p&gt;This pattern plays out consistently. AI-generated code ships faster. Velocity metrics look great. Everyone's happy. Six months later, the original author leaves or moves to another project. The team inherits working code they can't modify safely. They're paralyzed by fear of breaking something they don't understand.&lt;/p&gt;

&lt;p&gt;The debt isn't visible in code reviews. The code looks fine (often better than what the team would write manually). It passes tests. It follows conventions. But nobody on the team has the deep understanding that comes from building something from scratch.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Comprehension Debt Differs From Legacy Code
&lt;/h3&gt;

&lt;p&gt;If this sounds familiar, it should. Developers have always had to read code they didn't write. Every codebase has modules nobody fully understands. This is the legacy code problem.&lt;/p&gt;

&lt;p&gt;But comprehension debt is different in four critical ways.&lt;/p&gt;

&lt;p&gt;First, the starting point. Legacy code becomes hard to understand over time. Someone understood it when they wrote it. Comprehension debt starts with code nobody fully understood, not even the person who "wrote" it.&lt;/p&gt;

&lt;p&gt;Second, the velocity. Legacy code accumulates gradually as authors leave and memories fade. Comprehension debt accumulates instantly, with any AI suggestion accepted without deep comprehension.&lt;/p&gt;

&lt;p&gt;Third, the preventability. Legacy code is hard to prevent: people leave, memories fade, codebases age. Comprehension debt is preventable at creation. Before you ship AI-generated code, you can choose to understand it.&lt;/p&gt;

&lt;p&gt;Fourth, the recoverability. Legacy code leaves an evolutionary trail. Git history shows how code grew from simple to complex, one decision at a time. Commit messages explain why changes were made. PR discussions capture alternatives considered. When you inherit legacy code, you have archaeology tools (&lt;code&gt;git log&lt;/code&gt;, &lt;code&gt;git blame&lt;/code&gt;, PR reviews) to reconstruct understanding.&lt;/p&gt;

&lt;p&gt;AI-generated code appears fully formed. One commit. Two thousand lines. Git history shows what appeared, not why it's designed this way. The commit message says "Add authentication" but not why token buckets over sliding windows, why this refresh strategy, why these specific database queries. &lt;code&gt;git blame&lt;/code&gt; points to the developer who accepted AI suggestions, but they can't explain the reasoning because they never made those decisions. The AI did, and AI doesn't write commit messages explaining its probabilistic choices.&lt;/p&gt;

&lt;p&gt;Legacy code loses understanding over time, but leaves a trail to recover it. AI-generated code never builds understanding, and leaves no trail to reconstruct it. The evolutionary scaffolding that normally helps you understand "why this approach over alternatives" doesn't exist. Understanding wasn't just lost; it was never captured anywhere.&lt;/p&gt;

&lt;p&gt;The symptoms look similar: "I need to modify code I don't understand." But the solution timing is different. Legacy code requires ongoing maintenance to prevent understanding loss. Comprehension debt requires conscious practice at creation to build understanding in the first place.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Creates Comprehension Debt
&lt;/h2&gt;

&lt;p&gt;Comprehension debt accumulates through four mechanisms in codebases using AI coding assistants extensively.&lt;/p&gt;

&lt;p&gt;The first mechanism is autocomplete acceptance. Copilot suggests a complete function. You scan it, logic looks right, variable names make sense, handles obvious edge cases. You accept it. Thirty seconds reviewing what would have taken thirty minutes to write. But you verified correctness without absorbing design decisions. Six months later, you need to modify it and realize you never understood why it was structured that way.&lt;/p&gt;

&lt;p&gt;The second mechanism is black box solutions. You describe a problem to Claude or ChatGPT. It generates a working solution. You test it, it works, you ship it. But you don't understand the algorithm or why this approach over others. Ask AI for a caching strategy and you'll get an LRU cache with eviction policies. The code works. But do you understand when LRU is right versus LFU or random eviction? Or did you just ship "a caching solution that works"?&lt;/p&gt;
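&lt;p&gt;To make the test concrete: being able to read (or write) something like the following is the difference between understanding the eviction policy and merely shipping it. A minimal sketch of an LRU cache, simplified for illustration rather than production use:&lt;/p&gt;

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the key untouched the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # maintains insertion/access order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

&lt;p&gt;LRU evicts whatever was touched longest ago; LFU would instead track use counts and evict the least frequently used key. Which policy fits depends on your access pattern, and that is exactly the design decision the AI made on your behalf.&lt;/p&gt;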

&lt;p&gt;The third mechanism is pattern copying without context. AI suggests patterns from its training data. The pattern works in other contexts. It might even be best practice somewhere. But is it right for your codebase? Does it fit your architecture? Does it align with your team's conventions? Teams can accumulate five different error handling patterns because AI suggests slightly different approaches in different files. Each pattern works fine. But nobody chose a consistent approach. The codebase becomes internally inconsistent and nobody knows why.&lt;/p&gt;

&lt;p&gt;The fourth mechanism is understanding lag. This one is subtle but deadly. Code generation outpaces comprehension. Pre-AI, you might ship five features per month. Your understanding grew at five features per month because you built each one, so you understood each one. With AI, you might ship twelve features per month. But your comprehension still grows at five features per month. That's the rate at which you can deeply understand complex systems. The gap is seven features per month that you shipped but don't fully understand.&lt;/p&gt;

&lt;p&gt;After a year, you have 84 features in your codebase that you shipped but never fully absorbed. That's comprehension debt accumulating at scale.&lt;/p&gt;
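&lt;p&gt;The arithmetic is simple enough to check directly (the rates are the illustrative numbers from above, not measured data):&lt;/p&gt;

```python
ship_rate = 12          # features shipped per month with AI assistance
comprehension_rate = 5  # features you can deeply understand per month

monthly_gap = ship_rate - comprehension_rate
yearly_gap = monthly_gap * 12

print(monthly_gap)  # 7 features per month shipped but not absorbed
print(yearly_gap)   # 84 features of comprehension debt after a year
```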

&lt;h2&gt;
  
  
  Why Good Teams Still Accumulate Comprehension Debt
&lt;/h2&gt;

&lt;p&gt;The practices that prevent comprehension debt are obvious: understand code before shipping, enforce comprehension in code review, document design decisions. Every experienced developer knows this.&lt;/p&gt;

&lt;p&gt;So why does comprehension debt accumulate even in good teams?&lt;/p&gt;

&lt;p&gt;Because the forces that create comprehension debt aren't about individual failures. They're about systemic incentives that make accumulation rational, even inevitable, despite everyone knowing better.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Velocity Measurement Trap
&lt;/h3&gt;

&lt;p&gt;Your sprint dashboard shows story points shipped. Your team's performance is measured by features delivered. Comprehension isn't measured because it's invisible.&lt;/p&gt;

&lt;p&gt;When a developer uses AI and ships 50% faster, they look productive. When they slow down to understand deeply, they look inefficient. Even if you personally value understanding, your manager sees velocity metrics. And velocity is how teams are compared, how performance reviews are written, how promotions are decided.&lt;/p&gt;

&lt;p&gt;The result: Even if you want to enforce understanding, organizational pressure pushes toward accepting AI code quickly. "Why did this take so long?" is a question developers face. "Do you fully understand this?" is not.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Delayed Cost Problem
&lt;/h3&gt;

&lt;p&gt;AI's benefits are immediate: 50% faster shipping, features delivered this sprint, velocity metrics that look great today.&lt;/p&gt;

&lt;p&gt;Comprehension debt's costs are delayed: maintenance burden in six months, debugging difficulty next quarter, modification paralysis next year. Those costs appear in different sprints, attributed to different people, blamed on "legacy code" rather than creation practices.&lt;/p&gt;

&lt;p&gt;The developer who accepts AI code without understanding gets credit for velocity. The developer who inherits that code six months later gets blamed for slow delivery. The system rewards creating debt and punishes paying it.&lt;/p&gt;

&lt;p&gt;Sprint retrospectives celebrate velocity. Nobody celebrates "we shipped slower but built deep understanding that will pay off in six months." That's not how teams are measured.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tragedy of the Commons
&lt;/h3&gt;

&lt;p&gt;Comprehension debt is a tragedy of the commons problem. The commons is codebase maintainability. Each developer makes individually rational choices that collectively destroy maintainability.&lt;/p&gt;

&lt;p&gt;Individual developer's calculation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Time to understand deeply: 5 hours&lt;/li&gt;
&lt;li&gt;Benefit to me: Minimal (I might not modify this code again)&lt;/li&gt;
&lt;li&gt;Cost to me: Manager unhappy about slow delivery, missed sprint commitment&lt;/li&gt;
&lt;li&gt;Rational choice: Accept without full understanding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Multiply this by 20 developers making this same rational choice across 20 modules. Collective result: Nobody can maintain the codebase. But no individual developer made an irrational choice given their incentives.&lt;/p&gt;

&lt;p&gt;This is why "just understand code before shipping" doesn't work. It assumes individual discipline can overcome systemic incentives. It can't. Not sustainably.&lt;/p&gt;

&lt;h3&gt;
  
  
  The False Confidence Feedback Loop
&lt;/h3&gt;

&lt;p&gt;Tests pass. Code review approves. The code works in production. Every signal says "success."&lt;/p&gt;

&lt;p&gt;There's no feedback signal that comprehension debt is accumulating. Unlike technical debt (where code quality degradation is visible) or security debt (where vulnerabilities can be scanned), comprehension debt is completely invisible until someone needs to modify the code.&lt;/p&gt;

&lt;p&gt;By the time you discover comprehension debt (when modification is needed), it's too late. The debt is already accumulated. The cost is high.&lt;/p&gt;

&lt;p&gt;This creates a false confidence loop: Ship fast with AI → Everything works → Ship faster → More comprehension debt → Everything still works → Until suddenly modification becomes impossibly expensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  When It Breaks
&lt;/h2&gt;

&lt;h3&gt;
  
  
  When Transitions Reveal the Gap
&lt;/h3&gt;

&lt;p&gt;Comprehension debt's most visible consequence appears when key people leave.&lt;/p&gt;

&lt;p&gt;When Sam, the tech lead, announced a family relocation with two weeks' notice, the team scrambled to prepare. Sam had shepherded them through six months of AI adoption: reviewing hundreds of Copilot suggestions, building out the architecture, making the codebase what it was. The code worked beautifully. Tests passed. Customers were happy.&lt;/p&gt;

&lt;p&gt;Then Raj arrived as Sam's replacement and asked standard onboarding questions: "Why did we choose this authentication pattern? What alternatives did you consider? How does the payment flow handle edge cases?"&lt;/p&gt;

&lt;p&gt;The team couldn't answer. Not because Sam had hoarded knowledge (Sam had been collaborative and communicative throughout). But because those architectural decisions had never been made by a human in the traditional sense. Sam had evaluated and accepted AI suggestions, but the reasoning for why those patterns worked lived in neither documentation nor human memory. Sam had understood enough to judge the suggestions as "good enough," but not enough to explain the full reasoning behind them.&lt;/p&gt;

&lt;p&gt;Forty percent of the codebase was AI-generated. Raj spent three months reconstructing architectural context that no one had explicitly created. What looked like a succession planning failure was actually a comprehension debt crisis. The knowledge gap existed not because people failed to document, but because there was less explicit human reasoning to document in the first place.&lt;/p&gt;

&lt;p&gt;The team eventually recovered. Raj forced systematic documentation of every design decision going forward (not just what was built, but why alternatives were rejected). The process made code reviews slower but rebuilt the shared understanding the team needed. The silver lining: the team became more resilient to future transitions. But it cost three months of leadership capacity on archaeology that should have been architecture.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If your most experienced AI-using team member left tomorrow, could the rest of the team maintain the codebase? What percentage was human-decided versus AI-suggested-and-accepted?&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Comprehension Debt Compounds During Growth
&lt;/h3&gt;

&lt;p&gt;Comprehension debt doesn't just affect individuals; it compounds across teams, especially during growth.&lt;/p&gt;

&lt;p&gt;One organization doubled their engineering team from six to twelve while adopting AI. The original six engineers understood their AI-generated codebase moderately well (maybe 8 out of 10 on a comprehension scale). They could explain most decisions, debug most issues, and maintain most features.&lt;/p&gt;

&lt;p&gt;They hired Sarah and David, who learned from the original six. But Sarah and David's understanding landed around 6 out of 10. They'd learned "use AI, ship fast" without the foundational context of why certain patterns mattered. Still functional, but shallower.&lt;/p&gt;

&lt;p&gt;When Sarah and David mentored the next wave of hires (Priya and Miguel), the knowledge degraded further. Priya and Miguel learned from people who were themselves still learning. Their understanding: roughly 4 out of 10. They knew how to use the tools, but not why things worked the way they did.&lt;/p&gt;

&lt;p&gt;By the third generation of hires (Lisa and Carlos, who learned from Priya and Miguel), understanding had dropped to 2 out of 10. Lisa and Carlos shipped code they described as "mystery boxes." When asked to explain their implementations, they'd shrug: "It works. The AI generated it. Tests pass."&lt;/p&gt;

&lt;p&gt;Six months later, both Lisa and Carlos left. Their exit interviews cited "not being good enough." The reality: they were talented engineers placed in an impossible situation. The system had accumulated comprehension debt faster than it could transfer knowledge. This isn't a story about individual failure; it's about what happens when systems scale during disruption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key takeaway:&lt;/strong&gt; Don't scale your team while accumulating comprehension debt. Each knowledge transfer generation loses understanding.&lt;/p&gt;

&lt;p&gt;This isn't theoretical. GitClear published data from analyzing code written with GitHub Copilot across multiple organizations. They found increased "code churn" (code modified shortly after being written), more copy-pasted code, and less refactoring. This pattern is consistent with comprehension debt, though the data doesn't isolate the cause.&lt;/p&gt;

&lt;p&gt;They also found developers spend less time refactoring and more time fixing recent code. The pattern suggests teams are shipping code faster but understanding it worse. The technical debt isn't in code quality; the generated code is often fine. The debt is in comprehension.&lt;/p&gt;

&lt;p&gt;Uplevel's research found developers using AI coding assistants introduced more bugs into production. The data doesn't isolate why, but the pattern is consistent with accepting code without fully understanding it.&lt;/p&gt;

&lt;p&gt;These aren't productivity gains. These are productivity illusions. You ship faster but maintain slower. The velocity appears in sprint metrics. The cost appears in six-month maintenance windows.&lt;/p&gt;

&lt;p&gt;This is the compounding problem. Comprehension debt doesn't just accumulate in the code you ship today. It degrades the knowledge across your team over time, making every future developer less effective. The velocity gains from AI come with an understanding cost that multiplies across team generations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The question isn't whether comprehension debt will accumulate; with AI tools, it's inevitable.&lt;/strong&gt; The question is whether we'll accumulate it consciously or unconsciously, whether we'll manage it strategically or let it manage us.&lt;/p&gt;

&lt;h2&gt;
  
  
  One Thing You Can Do Today
&lt;/h2&gt;

&lt;p&gt;Before we get to systematic solutions in Part 2, here's one practice you can start immediately: Before accepting AI-generated code, ask yourself one question: &lt;em&gt;"Could I explain to a colleague why this code is designed this way?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If the answer is no, you have two choices: spend time understanding it, or don't ship it.&lt;/p&gt;

&lt;p&gt;This simple gate catches comprehension debt at the source. It won't solve the organizational pressures that created this problem, but it gives you agency while we work on systemic solutions.&lt;/p&gt;




&lt;p&gt;In my next article, I'll share what high-performing organizations do differently: practices that address the root causes while acknowledging real constraints (career risk, velocity pressure, organizational resistance). These aren't universal solutions: 95% of organizations struggle with any systematic change, and these practices won't overcome that. But for organizations with change capacity, or those building it explicitly, this is the path that research suggests works.&lt;/p&gt;

&lt;p&gt;But first: do you recognize this pattern in your own codebase? How much of your team's code would pass the "can you explain why it's structured this way?" test? That's where the work begins.&lt;/p&gt;







&lt;h2&gt;
  
  
  Why This Is Hard
&lt;/h2&gt;

&lt;p&gt;The comprehension debt crisis exemplifies a broader pattern I explored in "&lt;a href="https://itnext.io/thirty-years-five-technologies-one-failure-pattern-from-lean-to-ai-f3dc3a22a5d2" rel="noopener noreferrer"&gt;Thirty Years, Five Technologies, One Failure Pattern: From Lean to AI&lt;/a&gt;." That article documents how 95% of AI transformations fail, not because of technical limitations, but because organizational systems aren't designed to integrate systematic change.&lt;/p&gt;

&lt;p&gt;The same barriers that prevented Lean Manufacturing adoption (2% success rate), Agile transformations (70% failure rate), and Electronic Health Records integration (96% adoption, zero improvement in clinician burnout) are now blocking AI adoption. Organizations measure velocity but not understanding. They reward speed over comprehension. They optimize for existing metrics even when those metrics conflict with new practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This means there's no easy fix.&lt;/strong&gt; If your organization has struggled with any systematic change to how work gets done (Lean, Agile, DevOps, test-driven development), that pattern will repeat here. Comprehension debt is a symptom. Organizational change capacity is the root cause.&lt;/p&gt;

&lt;p&gt;My next article won't pretend to solve the organizational problem; that requires executive commitment, incentive restructuring, and years of culture change that only 5-10% of organizations achieve. What I will share is what practitioners and teams can do &lt;em&gt;within&lt;/em&gt; current constraints: harm reduction practices that slow the accumulation, build evidence for change, and protect understanding where you have control.&lt;/p&gt;

&lt;p&gt;It's not transformation. It's survival. But sometimes survival buys you time to build the case for transformation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Research Citations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Oregon State University Study (2025):&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Qiao, Y., Hundhausen, C., Haque, S., &amp;amp; Shihab, M. I. H. (2025). Comprehension-performance gap in GenAI-assisted brownfield programming: A replication and extension. ArXiv preprint arXiv:2511.02922.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitClear Analysis:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
GitClear. (2024). Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Uplevel Research:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Uplevel. (2024). Analysis of GitHub Copilot Impact on Developer Productivity and Code Quality.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Originally published on ITNEXT: &lt;a href="https://itnext.io/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-78d99c0cbc0c" rel="noopener noreferrer"&gt;https://itnext.io/50-faster-code-0-better-understanding-the-comprehension-debt-crisis-78d99c0cbc0c&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>ai</category>
      <category>coding</category>
      <category>comprehension</category>
      <category>technicaldebt</category>
    </item>
    <item>
      <title>Complexity Can't Be Eliminated. It Can Only Be Moved.</title>
      <dc:creator>Simon Wang</dc:creator>
      <pubDate>Fri, 23 Jan 2026 04:09:33 +0000</pubDate>
      <link>https://dev.to/thesystemistsimon/complexity-cant-be-eliminated-it-can-only-be-moved-2am7</link>
      <guid>https://dev.to/thesystemistsimon/complexity-cant-be-eliminated-it-can-only-be-moved-2am7</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover Image Photo by Sunder Muthukumaran on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Six patterns that determine where complexity lives in your codebase — and how to choose consciously
&lt;/h2&gt;

&lt;p&gt;A senior engineer made our API faster by caching responses. Query time dropped 80%. We celebrated.&lt;/p&gt;

&lt;p&gt;Two months later, the cache was stale. Data was wrong. Users complained. We spent weeks debugging cache invalidation.&lt;/p&gt;

&lt;p&gt;The speed didn't come from nowhere. The complexity didn't disappear. We just moved it.&lt;/p&gt;

&lt;p&gt;This pattern behaves like a conservation law from physics. Not perfectly, but close enough to be useful.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Complexity Relocates (Not Disappears)
&lt;/h2&gt;

&lt;p&gt;In physics, certain quantities can't be created or destroyed. Only transformed or moved. Energy conservation says energy can't be created or destroyed, only converted (chemical to kinetic, kinetic to heat). Momentum conservation says total momentum stays constant in a closed system. Mass conservation says mass doesn't appear or disappear, just rearranges.&lt;/p&gt;

&lt;p&gt;These aren't guidelines. They're laws. You can't violate them. You can only work within them.&lt;/p&gt;

&lt;p&gt;Software has something similar: &lt;em&gt;essential complexity&lt;/em&gt; (the inherent difficulty your problem requires) can only move, not disappear. Larry Tesler famously called it "Conservation of Complexity": complexity can't be eliminated, only moved. UX designers know Tesler's Law intimately. But while this principle is well-recognized in design circles, software architects rarely discuss it explicitly or apply it systematically.&lt;/p&gt;

&lt;p&gt;I've noticed we treat "simplification" as if we're eliminating complexity rather than relocating it. We don't measure both sides of the trade. We don't name what's actually being relocated.&lt;/p&gt;

&lt;p&gt;This isn't quite like physics conservation laws, where total energy stays exactly constant. Software complexity can increase or decrease. But there's a pattern, and a floor.&lt;/p&gt;

&lt;p&gt;Every problem has &lt;strong&gt;essential complexity&lt;/strong&gt;, what Fred Brooks called the inherent difficulty of what you're trying to solve. Authentication must verify identity. Distributed systems must coordinate. These requirements create complexity that can only relocate, or be eliminated by dropping features entirely. You can't design it away.&lt;/p&gt;

&lt;p&gt;Then there's &lt;strong&gt;accidental complexity&lt;/strong&gt;, from how we implement solutions. Poor abstractions, unnecessary indirection, tech debt. This can be eliminated through better design.&lt;/p&gt;

&lt;p&gt;When net complexity increases (code drops 40%, config grows 60%, net +20%), you're seeing accidental complexity added during relocation. When complexity genuinely disappears (deleting 500 lines of dead code), you're removing accidental complexity that never contributed to solving the problem.&lt;/p&gt;

&lt;p&gt;The pattern: essential complexity moves. Accidental complexity varies. And there's a floor: you can't simplify below essential complexity without losing functionality.&lt;/p&gt;

&lt;p&gt;To be precise: when we say "complexity relocates," we mean essential complexity (the irreducible difficulty of your problem domain). You can't simplify a tax calculation system below the complexity of the tax code itself. You can only choose where that essential complexity lives in your architecture.&lt;/p&gt;

&lt;p&gt;This explains why some systems resist simplification. You're not fighting bad design. You're hitting essential complexity. The question shifts: &lt;strong&gt;Where should this essential complexity live to minimize total cost?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you "simplify" a system, you're not eliminating complexity. You're relocating it. When you make a decision configurable instead of hardcoded, you haven't reduced the number of decisions. You've moved where the decision happens. When you cache data, you haven't eliminated the work of keeping data fresh. You've transformed query complexity into cache invalidation complexity.&lt;/p&gt;

&lt;p&gt;Understanding relocation patterns changes how you think about software design. You stop asking "how do I eliminate this complexity?" and start asking "where do I want this complexity to live?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Six patterns emerge consistently.&lt;/strong&gt; We'll call them relocation patterns that behave like conservation laws. Not physics-perfect, but strong enough to guide architectural decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Six Patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pattern 1: Complexity Relocation
&lt;/h3&gt;

&lt;p&gt;The caching story is a perfect example. Before caching, we had high query complexity: every request hit the database, queries were slow, load was high. Cache management complexity was zero because we didn't have a cache. After caching, query complexity dropped dramatically. Requests were fast, database load was low. But cache management complexity exploded. We now had staleness issues, invalidation logic, consistency problems, memory pressure.&lt;/p&gt;

&lt;p&gt;Total complexity didn't decrease. We moved it from "slow queries" to "cache management." The system felt simpler in one dimension and more complex in another. The essential complexity of data consistency didn't disappear. It moved from query time to cache invalidation. But if your cache implementation is inefficient, you've added accidental complexity on top.&lt;/p&gt;

&lt;p&gt;I've learned you can't eliminate complexity. You can only move it. The question isn't "how do I make this simpler?" The question is "where should this complexity live?"&lt;/p&gt;

&lt;p&gt;Consider adding an abstraction layer. Before abstraction, you have high duplication complexity: the same database query logic appears in twenty places. But you have low abstraction complexity because there's no layer to understand. After creating an ORM, duplication complexity drops to near zero. Database logic lives in one place. But abstraction complexity rises. Now you need to understand the ORM, its query builder, its caching behavior, its transaction handling.&lt;/p&gt;

&lt;p&gt;You didn't reduce total complexity. You traded duplication complexity for abstraction complexity. The essential complexity of database operations remains. You just centralized where it lives. Whether abstraction adds accidental complexity depends on design quality. &lt;/p&gt;
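&lt;p&gt;The same trade in miniature (a sketch, not a real ORM; the table and column names are invented): duplication complexity on one side, abstraction complexity on the other.&lt;/p&gt;

```python
# Duplicated: the same query logic copied into every call site.
def get_active_users(conn):
    return conn.execute("SELECT name FROM users WHERE active = 1").fetchall()

def count_active_users(conn):
    return len(conn.execute("SELECT name FROM users WHERE active = 1").fetchall())

# Centralized: one place to look, but one more layer to understand.
class UserRepo:
    def __init__(self, conn):
        self.conn = conn

    def active(self):
        return self.conn.execute("SELECT name FROM users WHERE active = 1").fetchall()

    def active_count(self):
        return len(self.active())
```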

&lt;p&gt;Whether that's a good trade? Depends on your context. For a system with many developers, centralizing complexity in an abstraction that a few people deeply understand might be better than distributing complexity across the codebase where everyone encounters it. For a tiny system with two developers, the abstraction might not be worth it: the duplication is manageable, the abstraction is overhead.&lt;/p&gt;

&lt;p&gt;This is why "simplification" is such a loaded term. When someone says "let's simplify this," what they usually mean is "let's move complexity from where it bothers me to somewhere else." (Which, to be fair, is sometimes exactly what you want.) But recognize you're relocating complexity, not eliminating it.&lt;/p&gt;

&lt;p&gt;Where can complexity go? You can push it to infrastructure: move complexity from application code to Kubernetes, but now you need to understand Kubernetes. You can push it to configuration: move complexity from code to config files, but now configuration management becomes complex. You can push it to runtime: use dynamic dispatch instead of explicit wiring, but behavior becomes harder to trace. You can push it to operations: microservices simplify individual services but operational complexity explodes.&lt;/p&gt;

&lt;p&gt;The complexity goes somewhere. It doesn't vanish. Choose consciously where you want it to hurt least.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 2: Knowledge Relocation
&lt;/h3&gt;

&lt;p&gt;Knowledge can't be reduced, only relocated. You can't reduce what needs to be known about a system. You can only change where that knowledge lives.&lt;/p&gt;

&lt;p&gt;Take abstraction layers again: before adding an ORM, knowledge about database queries is distributed across every function that touches the database. After adding an ORM, that knowledge concentrates in the ORM layer. Total knowledge hasn't decreased. You still need to understand how queries work, how connections are managed, how errors are handled. You've just relocated the knowledge.&lt;/p&gt;

&lt;p&gt;This creates a trade-off. Distributed knowledge means each piece is simple: local context is enough to understand what's happening. But finding patterns is hard because knowledge is scattered. Global understanding requires synthesizing information from many places.&lt;/p&gt;

&lt;p&gt;Concentrated knowledge means finding answers is easy: look in the abstraction layer. But each piece is more complex: the ORM is harder to understand than any individual query was. Which distribution is better depends on your team, your system, your change patterns.&lt;/p&gt;

&lt;p&gt;When a new developer asks where logic lives, I can say "check the ORM" instead of "check twenty controllers." Same knowledge needed, better location. But now that developer needs to understand the ORM's complexity.&lt;/p&gt;

&lt;p&gt;I've seen teams struggle with this trade-off. A microservices architecture distributes knowledge across service boundaries. Each service is simpler to understand in isolation, but understanding cross-service workflows requires mental synthesis of multiple codebases. A monolith centralizes that knowledge. You can trace a request end-to-end in one codebase, but the concentration makes the monolith harder to navigate.&lt;/p&gt;

&lt;p&gt;The knowledge exists either way. The question is: where does it hurt least? If you have autonomous teams, distributing knowledge across service boundaries might work. If you have frequent cross-cutting changes, centralizing knowledge in a monolith might be better. You're not reducing knowledge. You're choosing where developers encounter it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 3: Decision Relocation
&lt;/h3&gt;

&lt;p&gt;Decisions can't be eliminated. Every decision must be made somewhere. Moving where decisions happen doesn't reduce total decisions.&lt;/p&gt;

&lt;p&gt;Consider configuration. You have a decision: "Which database connection string to use?" You can make it in code: if environment equals production, use this connection; otherwise use that one. Or you can make it in config: read from environment variable or config file. Same decision. Different location. Someone still decides what the database URL is. The decision moved from code to configuration. It didn't disappear.&lt;/p&gt;
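&lt;p&gt;A tiny sketch of the same decision in both locations (the connection strings are made up):&lt;/p&gt;

```python
import os

# Decision made in code: changing it means editing and redeploying.
def db_url_in_code(environment):
    if environment == "production":
        return "postgres://prod-db:5432/app"
    return "postgres://localhost:5432/app_dev"

# Decision relocated to configuration: changing it means setting an
# environment variable. The decision still exists; someone still has
# to set DATABASE_URL correctly for every environment.
def db_url_from_config():
    return os.environ.get("DATABASE_URL", "postgres://localhost:5432/app_dev")
```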

&lt;p&gt;The choice of where to make decisions has consequences. Compile-time decisions mean fast runtime but slow development: changing behavior requires changing code. Runtime decisions mean slow runtime but fast iteration: change config and restart. Configuration-time decisions mean flexible behavior but configuration becomes complex: now you have configuration management, templating, validation. Convention-based decisions keep the code simple, but you must learn the conventions: "magic" behavior that's invisible until you know the pattern.&lt;/p&gt;

&lt;p&gt;I've debugged systems where configuration grew so complex it became code by another name. YAML files with conditionals, includes, variable substitution. Essentially a programming language without the tooling. The decisions didn't decrease; they just moved to a less maintainable place.&lt;/p&gt;

&lt;p&gt;The reverse is also true. Hard-coding decisions in code means every environment difference requires a code change. I've seen teams with many if-statements checking environment variables because they never moved decisions to configuration. Same total decisions, worse location.&lt;/p&gt;

&lt;p&gt;Feature flags are the modern version of this trade-off. You move decisions from deploy time (merge to production) to runtime (toggle in a dashboard). This gives you safety and speed. You can deploy dark and enable gradually. But you pay in testing complexity: with N flags, you have 2^N possible system states. Three flags mean eight configurations to test. Ten flags mean 1,024. The decision didn't disappear. It multiplied.&lt;/p&gt;
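&lt;p&gt;The 2^N growth is easy to make concrete (the flag names here are invented):&lt;/p&gt;

```python
from itertools import product

def flag_states(flags):
    """All boolean combinations of the given feature flags: with N
    flags there are 2**N distinct system configurations."""
    return [dict(zip(flags, values))
            for values in product([False, True], repeat=len(flags))]

# Three flags already mean eight configurations a tester could face.
states = flag_states(["new_checkout", "dark_mode", "beta_search"])
```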

&lt;p&gt;Pick where decisions happen based on who needs to change them and how often. If operators need to change behavior without deploying code, configuration makes sense. If developers need to understand decision logic during debugging, code makes sense. If the decision rarely changes, hard-coding might be fine. You're not reducing decisions. You're choosing who makes them and when.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 4: Failure Mode Transformation
&lt;/h3&gt;

&lt;p&gt;Failure modes can't be eliminated. They can only be transformed. You can't eliminate how systems fail. You can only trade failure modes you understand for failure modes you don't.&lt;/p&gt;

&lt;p&gt;Moving from synchronous to asynchronous is classic. Synchronous systems fail with timeouts, deadlocks, resource exhaustion when threads block. Asynchronous systems fail with message loss when queues drop messages, ordering issues when messages arrive out of sequence, partial failures when some operations complete and others don't. You traded known failures for different failures. Total failure surface area might even increase.&lt;/p&gt;

&lt;p&gt;I've debugged async message loss that took days to track down. With sync systems, timeouts show up immediately in logs. I'm not saying one is better. I'm saying they fail differently, and you're choosing which failure mode you'd rather debug.&lt;/p&gt;

&lt;p&gt;The same pattern appears everywhere. Move from monolith to microservices? You trade in-process call failures (immediate stack traces) for network call failures (distributed tracing, timeouts, partial failures). Move from SQL to NoSQL? You trade constraint violations (database enforces referential integrity) for data inconsistency (application must enforce integrity).&lt;/p&gt;

&lt;p&gt;I've watched teams adopt new technologies expecting them to be "more reliable," then spend months learning their failure modes. The new system wasn't less reliable. It just failed differently. And the team's existing monitoring, debugging practices, and mental models were all tuned to the old failure modes.&lt;/p&gt;

&lt;p&gt;This doesn't mean you shouldn't go async, or adopt microservices, or use NoSQL. It means recognize the trade-off. You're not eliminating failure modes: you're choosing which failure modes you'd rather handle. Maybe async failures are easier to handle in your context. Maybe you have better tools for debugging message loss than deadlocks. Maybe your team has experience with distributed systems failure modes. That's a valid trade. Just don't pretend the old failure modes disappeared: they transformed into new ones. And plan to invest in learning how the new system fails.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 5: Testing Burden Relocation
&lt;/h3&gt;

&lt;p&gt;Testing burden can't be reduced, only relocated. You can't reduce what needs to be tested. You can only move where testing happens.&lt;/p&gt;

&lt;p&gt;Type systems are the clearest example. Without static types, you need more runtime tests because type verification happens at runtime: tests must verify both types and logic. With static types, you need fewer runtime tests because type verification happens at compile time: tests verify logic only, types are checked by the compiler.&lt;/p&gt;

&lt;p&gt;Testing effort didn't disappear. It moved from runtime tests to compile-time checks. The shift has trade-offs. Compile-time verification gives faster feedback: you know about type errors before running code. But it adds compilation overhead and can't test runtime-only behaviors like "does this API actually return the structure we expect?" Runtime testing gives slower feedback but tests actual system behavior. Same amount of verification work. Different timing.&lt;/p&gt;

&lt;p&gt;The same pattern appears with integration vs. unit tests. Heavy integration testing means you verify actual system behavior but tests are slow and brittle. Heavy unit testing with mocks means tests are fast and isolated but you need integration tests anyway to verify the mocks match reality. The testing burden didn't change. You're choosing between "test real interactions slowly" and "test mock interactions quickly plus verify mocks match."&lt;/p&gt;

&lt;p&gt;I've seen teams swing between extremes. All integration tests: comprehensive but painfully slow, so developers avoid running them. All unit tests with mocks: fast but brittle when mocks drift from reality, leading to "tests pass but production fails." The burden exists either way.&lt;/p&gt;

&lt;p&gt;The question is: where do you want verification to happen? Early in development (static types, unit tests, compile-time checks) or late in deployment (runtime tests, integration tests, production monitoring)? Each approach has different feedback loops and different failure modes. You're not reducing testing. You're choosing when you discover problems and how much machinery you need to discover them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pattern 6: Assumption Visibility Trade-off
&lt;/h3&gt;

&lt;p&gt;Assumptions can't be eliminated, only made explicit or implicit. You can't reduce assumptions. You can only change their visibility.&lt;/p&gt;

&lt;p&gt;An implicit assumption looks like this: a function expects user.email to exist and be a string. The code just calls user.email.lower() and hopes. An explicit assumption documents it: add type hints, add null checks, add validation. Same assumption: user must have an email that's a string. Now it's visible instead of hidden.&lt;/p&gt;

&lt;p&gt;Implicit assumptions are cheaper to write but expensive to debug. When they're violated, you get cryptic errors: AttributeError: 'NoneType' object has no attribute 'lower'. You have to trace back to figure out the assumption. Explicit assumptions are expensive to write but cheap to debug. When they're violated, you get clear errors: ValueError: User must have email. Total cost is conserved. You're choosing when to pay it: upfront with explicit checks, or later when debugging implicit assumptions.&lt;/p&gt;
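&lt;p&gt;In sketch form (using a dict-based user record for illustration): the same assumption, two visibilities.&lt;/p&gt;

```python
def normalize_email_implicit(user):
    # Implicit: assumes user["email"] is a non-None string.
    # Violated, it raises the cryptic
    # AttributeError: 'NoneType' object has no attribute 'lower'
    return user["email"].lower()

def normalize_email_explicit(user):
    # Explicit: the same assumption, paid for upfront,
    # traded for a clear error at the boundary.
    email = user.get("email")
    if not isinstance(email, str) or not email:
        raise ValueError("User must have a non-empty string email")
    return email.lower()
```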

&lt;p&gt;The same trade-off appears with API contracts. Implicit contracts mean less documentation, less validation code, faster development. But when clients violate expectations, you get runtime failures that are hard to diagnose. Explicit contracts mean more upfront work (OpenAPI specs, request validation, comprehensive error messages) but violations are caught immediately with clear feedback.&lt;/p&gt;

&lt;p&gt;I've debugged production issues that took hours to diagnose because assumptions were buried deep in code. "Why does this fail for some users but not others?" Eventually you discover an implicit assumption: the code assumes users have an email, but imported users from legacy systems don't. The assumption existed either way. It just wasn't visible until it broke.&lt;/p&gt;

&lt;p&gt;The question is: where do you want to pay the cost? Write explicit checks upfront (slower development, clearer debugging) or deal with implicit assumptions when they break (faster development, cryptic failures)? Neither reduces the total assumptions in your system. You're choosing whether to document them in code or discover them during debugging.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why These Patterns Matter
&lt;/h2&gt;

&lt;p&gt;Once I understood these relocation patterns, how I approached design changed completely. When someone proposes "simplifying" the system, the first question should be: "Where does the complexity go?" It doesn't disappear. It moves. The proposal might still be good: maybe the new location is better. But recognize it's a trade, not an elimination.&lt;/p&gt;

&lt;p&gt;This doesn't mean simplification is impossible. You can absolutely reduce total complexity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delete dead code:&lt;/strong&gt; If code contributes nothing to requirements (truly dead), removing it eliminates complexity. No relocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use better abstractions:&lt;/strong&gt; Replace 50 lines of manual logic with a one-line library call. The library maintains the complexity, but amortized across thousands of users, your system's complexity drops.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remove accidental complexity:&lt;/strong&gt; Decouple unnecessarily entangled components. Clean up tech debt. Simplify overly complex solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The key:&lt;/strong&gt; These eliminate accidental complexity. Essential complexity (what the problem inherently requires) is what relocates, not eliminates.&lt;/p&gt;

&lt;p&gt;Common examples: "Let's use microservices to simplify development." Where does complexity go? From code organization to service coordination. You trade monolith complexity for distributed system complexity. "Let's add caching to speed things up." Where does complexity go? From query performance to cache management. You trade slow queries for invalidation logic. "Let's make the API more flexible." Where does complexity go? From API code to API consumers. You trade server complexity for client complexity.&lt;/p&gt;

&lt;p&gt;These might all be good decisions. But they're trades, not improvements in absolute terms. Microservices might be the right trade if you have the team size and tooling to handle distributed systems. Caching might be right if query performance is your bottleneck and you can handle invalidation. Flexible APIs might be right if you have sophisticated clients and want to iterate server-side less often.&lt;/p&gt;

&lt;p&gt;The key is naming what's being relocated and choosing where you want it to live. Before changing anything, identify the relocating quantity: Is this complexity? Where will it move? Is this knowledge? Where will it concentrate? Is this a decision? Where will it happen instead?&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Work With These Patterns
&lt;/h2&gt;

&lt;p&gt;Where should complexity live? Where will it hurt least?&lt;/p&gt;

&lt;p&gt;Example: API design. You can have a complex API with simple client code, or a simple API with complex client code. Neither eliminates complexity: they distribute it differently. Complex API means server handles edge cases, versioning, validation. Clients just call simple methods. Simple API means server provides primitive operations. Clients compose them to handle edge cases. &lt;/p&gt;

&lt;p&gt;I've worked with APIs that do everything (clients love it, server team drowns) and APIs that provide primitives (clients write boilerplate but have control). Same complexity, different distribution.&lt;/p&gt;

&lt;p&gt;The complexity is conserved. Where should it live? If you have many clients, push complexity to the API: pay the cost once, save it N times. If you have few clients and a rapidly changing server, simple API with complex client code might work fine.&lt;/p&gt;
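&lt;p&gt;One way to see the distribution (a toy paginated-users API; all names are invented): the pagination loop exists either way, and the only question is which side of the boundary owns it.&lt;/p&gt;

```python
def fetch_page(page, data=(["ann", "bob"], ["cho"])):
    """Primitive 'simple API' operation: one page, or [] past the end."""
    return list(data[page]) if page in range(len(data)) else []

# Complex API, simple clients: the server-side helper owns the loop once,
# and every client just calls all_users().
def all_users(fetch=fetch_page):
    users, page = [], 0
    while True:
        batch = fetch(page)
        if not batch:
            return users
        users.extend(batch)
        page += 1

# Simple API, complex clients: each client re-implements the loop above
# against fetch_page, gaining control and paying in boilerplate.
```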

&lt;p&gt;Choose your trades consciously. You can't eliminate conserved quantities. But you can choose better locations. Moving complexity from the hot path to the cold path is usually good: cache invalidation runs less often than queries. Moving complexity from novices to experts is often good: let experienced developers handle the abstraction so junior developers use a simpler interface. Moving complexity from many places to one place is often good: centralize knowledge even if that one place becomes more complex.&lt;/p&gt;

&lt;p&gt;But measure both sides. When you move complexity, measure both the source and destination. Code complexity decreased 40%, configuration complexity increased 60%, net result is +20% total complexity. If you only measure one side, you'll think you eliminated complexity. You didn't: you relocated it, and it grew. Measure what you gained and what you paid.&lt;/p&gt;

&lt;p&gt;Accept that some things don't simplify. If you keep trying to simplify something and complexity keeps showing up elsewhere, maybe the system has inherent complexity. Some problems are just complex. No architectural cleverness eliminates their complexity. You can only distribute it more or less well. Recognizing irreducible complexity lets you stop fighting it and start managing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Lasts
&lt;/h2&gt;

&lt;p&gt;But step back from the code for a moment. If everything eventually gets rewritten or deleted, what's the point of these choices?&lt;/p&gt;

&lt;p&gt;The answer: some things outlast the code. Patterns last. Design patterns outlive implementations. Separation of concerns, dependency injection, event-driven architecture: these patterns transfer across rewrites. The specific code gets replaced but the patterns persist. When you're choosing where complexity lives, you're really choosing patterns. Those patterns will outlast the code.&lt;/p&gt;

&lt;p&gt;Understanding lasts. Understanding the domain outlives the code. How the business works, what users need, why systems interact: this knowledge compounds over time. The code gets rewritten but understanding remains. When you're deciding where knowledge should live, invest in shared understanding. Documentation rots but team knowledge grows.&lt;/p&gt;

&lt;p&gt;Tests as specification last. Tests document expected behavior. They outlive implementations. When you rewrite, tests preserve requirements while code changes. The investment in test quality pays off when refactoring or replacing code. Tests preserve intent: what should this system do?&lt;/p&gt;

&lt;p&gt;Team culture lasts. How your team writes, reviews, and maintains code outlasts any particular codebase. Quality standards, review practices, testing discipline: these transfer to the next system. When you're working with these relocation patterns, you're building patterns of thinking that persist beyond the current code. Invest in culture. It compounds.&lt;/p&gt;

&lt;p&gt;The liberation comes from seeing these patterns. Once you understand that complexity relocates rather than disappears, you stop looking for solutions that eliminate it. You look for solutions that put complexity where it belongs. You measure both sides of the trade. You name what's being relocated and choose where it lives. And you invest in what actually lasts: patterns, understanding, and culture. While accepting that code is temporary.&lt;/p&gt;

&lt;p&gt;These relocation patterns aren't limitations. They're reality. You can't violate them. But you can work with them. And working with them is better than pretending they don't exist.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Originally published on ITNEXT: &lt;a href="https://itnext.io/complexity-cant-be-eliminated-it-can-only-be-moved-d122f7952715" rel="noopener noreferrer"&gt;https://itnext.io/complexity-cant-be-eliminated-it-can-only-be-moved-d122f7952715&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>architecture</category>
      <category>complexity</category>
      <category>systemsthinking</category>
      <category>software</category>
    </item>
  </channel>
</rss>
