DEV Community

vibecodiq

Posted on • Originally published at vibecodiq.com

Comprehension Debt — When Your AI-Generated Codebase Outgrows Your Understanding

 "I'm running a SaaS on code I don't fully understand."

A founder told us this during our first call. Not ashamed. Just honest. The product had 80 paying users, revenue was growing, and the AI had built most of the codebase across 70+ sessions.

But when a customer reported a billing error, nobody could trace where pricing logic actually lived.

This post explains what comprehension debt is, how to measure it, and how to fix it structurally.

What Is Comprehension Debt

Comprehension debt is what happens when code is generated faster than it can be understood.

AI produces working software. But "working" and "understood" are not the same thing. Each AI session solves the immediate task without asking whether the solution is structurally coherent with what came before.

By month 4, the codebase has implicit conventions that nobody explicitly chose:

  • Naming patterns that shifted mid-project
  • Business logic in unexpected locations
  • Functions that do three things
  • Files that nobody can summarize in one sentence

```
GENERATION SPEED vs COMPREHENSION SPEED

  Session 1   ████░░░░░░░░  ████░░░░░░░░
  Session 15  ████████░░░░  █████░░░░░░░
  Session 30  ████████████  ██████░░░░░░
  Session 50  ████████████  ██████░░░░░░
              ████████████

              gap = comprehension debt
```

How to Measure Comprehension Debt

Test 1: The Explanation Test

Pick any file over 300 lines in your project. Can you explain what it does — without reading the code line by line?

If not, you have comprehension debt.

Test 2: The Three Questions Test

List your largest files first:

```shell
find src -name "*.ts" -o -name "*.tsx" | xargs wc -l | sort -rn | head -20
```

For your top 10 files, ask:

  1. What domain does this file belong to? (auth, payments, users, etc.)
  2. What happens if I delete it? (what breaks, what depends on it)
  3. Who is responsible for it? (which team member, which module)

If you can't answer all three for most files, the codebase has outgrown your ability to reason about it.
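Question 2 can be partially scripted: list everything that imports a file before you touch it. A minimal sketch, run in an empty scratch directory with hypothetical fixture files; in a real project, point the grep at one of your actual module paths instead:

```shell
# Run in an empty scratch directory; these fixture files are hypothetical.
mkdir -p src/modules/payments src/modules/users
printf 'export const price = 1;\n' > src/modules/payments/pricing.ts
printf "import { price } from '../payments/pricing';\n" > src/modules/users/billing-page.ts

# Question 2, "what depends on it?": list every file that imports pricing.ts
grep -rln "payments/pricing" src/ --include="*.ts" --include="*.tsx"
# -> src/modules/users/billing-page.ts
```

If that list surprises you for your own top 10 files, that is the comprehension gap made visible.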

Test 3: The TODO Audit

```shell
grep -rn "TODO\|FIXME\|HACK\|XXX" src/ | wc -l
```

A high number means the AI left breadcrumbs it never cleaned up. Each one is a comprehension gap — a place where the AI deferred a decision that nobody came back to.
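The single total hides which kind of deferral dominates. A small extension of the same grep, sketched against a hypothetical fixture file; run it against your real `src/` instead:

```shell
# Run in an empty scratch directory; this fixture file is hypothetical.
mkdir -p src
printf '// TODO: handle refunds\n// FIXME: tax rounding\n// TODO: retry on timeout\n' > src/billing.ts

# Break the audit down by marker to see what kind of debt dominates
for marker in TODO FIXME HACK XXX; do
  echo "$marker: $(grep -rn "$marker" src/ | wc -l | tr -d ' ')"
done
# -> TODO: 2, FIXME: 1, HACK: 0, XXX: 0
```

A codebase heavy on HACK is in worse shape than one heavy on TODO: hacks are decisions already made badly, not merely deferred.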

Test 4: The Convention Drift Check

```shell
# Check for inconsistent export patterns
grep -rn "export default function\|export function\|export const" src/ --include="*.ts" --include="*.tsx" | head -30
```

Look at the first 30 exports. Are they consistent? `export default function` vs named `export function` vs `const` arrow exports? If you see all three patterns, the AI shifted conventions mid-project.
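To quantify the drift rather than eyeball it, count files per style. A sketch using hypothetical fixture files; three nonzero counts means three competing conventions:

```shell
# Run in an empty scratch directory; these fixture files are hypothetical.
mkdir -p src
printf 'export default function HomePage() {}\n' > src/home-page.ts
printf 'export function formatPrice() {}\n' > src/format-price.ts
printf 'export const parseUser = () => {};\n' > src/parse-user.ts

# Count how many files use each export style
for pattern in "export default function" "export function" "export const"; do
  echo "$pattern: $(grep -rl "$pattern" src/ --include='*.ts' | wc -l | tr -d ' ')"
done
# -> one file per style here: all three conventions are in play
```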

Why Documentation Doesn't Fix This

"We'll add documentation later" is the default response. It's also the wrong fix applied at the wrong layer.

Documentation describes what the code does. It does not change what the code does.

If the architecture is incoherent — business logic in three places, naming conventions that shift mid-project, files that mix four domains — documentation just describes the mess. It's the equivalent of labeling every room in a building with no floor plan.

Worse: the AI that wrote the code will generate plausible-sounding documentation. But "plausible" and "accurate" diverge when the architecture has drifted.

The Structural Fix

Comprehension debt is not a knowledge problem. It's a structure problem. The fix sequence:

Step 1: Domain Mapping

List every business domain in your application (auth, users, payments, notifications, etc.). Then map which files contain logic for which domain.

# Find all files that reference "price" or "payment"
grep -rln "price\|payment\|stripe\|checkout" src/ --include="*.ts" --include="*.tsx"
Enter fullscreen mode Exit fullscreen mode

If pricing logic lives in more than 2-3 files, it's scattered.
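The same grep generalizes into a rough domain map. A sketch with hypothetical fixture files; in a real project you would keep one keyword list per business domain:

```shell
# Run in an empty scratch directory; these fixture files are hypothetical.
mkdir -p src
printf 'const stripe = initStripe();\n' > src/checkout-form.ts
printf 'export const price = 9;\n' > src/constants.ts
printf 'function applyDiscount(price) { return price; }\n' > src/utils.ts

# Each match is a file that touches the pricing domain
files=$(grep -rl "price\|payment\|stripe\|checkout" src/ --include="*.ts" --include="*.tsx")
echo "$files"
echo "pricing-related files: $(echo "$files" | wc -l | tr -d ' ')"
# -> 3 files mention pricing: by the 2-3 file rule above, it is scattered
```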

Step 2: Module Boundaries

Create explicit module boundaries:

```
src/
  modules/
    auth/          -- everything authentication
    payments/      -- everything pricing and billing
    users/         -- everything user management
    notifications/ -- everything notifications
  shared/
    types/         -- shared type definitions
    utils/         -- pure utility functions
```

The rule: each module owns one domain. No file crosses domain boundaries.
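The rule can be spot-checked before any tooling is set up: in this layout, a relative import that climbs out of a module's directory is a boundary violation. A minimal sketch with hypothetical fixture files:

```shell
# Run in an empty scratch directory; these fixture files are hypothetical.
mkdir -p src/modules/users src/modules/payments
printf "import { charge } from '../payments/charge';\n" > src/modules/users/profile.ts
printf 'export function charge() {}\n' > src/modules/payments/charge.ts

# Any "../" import inside a module reaches across a domain boundary
grep -rn "from ['\"]\.\./" src/modules/users --include="*.ts"
# -> flags profile.ts importing from the payments module
```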

Step 3: Self-Documenting Naming

Enforce consistent naming conventions:

  • Files named by what they do, not how they're implemented
  • Functions named by their business purpose
  • No abbreviations that require domain knowledge to decode

Step 4: Convention Enforcement

Add linting rules that enforce:

  • Consistent export patterns
  • File size limits (flag files over 300 lines)
  • Import boundaries (modules can only import from allowed modules)
```jsonc
// Example: boundary rule via eslint-plugin-import (.eslintrc)
// Files under "target" may not import from "from"
{
  "rules": {
    "import/no-restricted-paths": ["error", {
      "zones": [{
        "target": "./src/modules/auth",
        "from": "./src/modules/payments"
      }]
    }]
  }
}
```

The Result

After structural reorganization, the codebase becomes self-evident. You can navigate by structure instead of by searching. A new team member can understand the architecture from the folder structure alone. The explanation test becomes trivial — because each file does one thing in one domain.

Comprehension debt doesn't require more documentation. It requires more structure.


This is part of the AI Chaos series — a structural analysis of failure patterns in AI-generated codebases. Based on ASA (Atomic Slice Architecture) — an open architecture standard for AI-generated software.

Resources

  • ASA Standard — the open specification
  • GitHub — source, examples, documentation
  • Vibecodiq — structural diagnostics for AI-built apps
