Theodor Heiselberg
AI, Cognitive Debt, and Ownership

This post is inspired by the reflections shared in this talk:
The Dark Side of AI Code Generation

Lately at work we are, like the rest of the world, embracing AI and incorporating it into our daily lives and workflows.

But uncritical usage of AI has a dark side!


The dark side

  1. When Using AI, You Risk Building Knowledge Gaps

  2. AI-Generated Code Can Train You to Accept Complexity You Don’t Understand

AI increases output. That is not the same as increasing understanding.

If we repeatedly merge code we did not fully reason about, we accumulate something more dangerous than technical debt:

We accumulate cognitive debt.


What Is Cognitive Debt?

Cognitive Debt is the interest you pay in time and stress when you have to re-learn your own system through code you never truly understood.

It compounds.

You pay the interest when:

  • Debugging production incidents
  • Refactoring fragile areas
  • Onboarding new developers
  • Explaining design decisions you cannot fully justify

AI is not the problem.

Unowned complexity is.


Shared Theory: The Real Asset

Every healthy engineering team shares a theory of the system:

  • How it works
  • Why decisions were made
  • Where the boundaries are
  • What must never break

This shared understanding is what allows teams to move fast safely.

If AI accelerates code production without strengthening shared theory, you are trading short-term speed for long-term fragility.


AI-PR Review Framework

AI should be used to increase shared understanding — not reduce it.

If you do not understand the code, you do not own it.

1. Review Like a Junior Developer

Read the PR as if you are new to the codebase.

Ask yourself:

  • Can I explain every single line to a colleague?
  • Do I understand why this approach was chosen?
  • Could I rewrite this without using AI?

If the answer to any of these is no, you are not done reviewing.

Read deeper.

Refactor it.

Simplify it.

The standard is this:

“I feel confident debugging this in production at 3 AM.”

If that sentence feels false, the code is not ready.


2. The PR Owner Must Document Decisions

The person who prompted for or otherwise introduced AI-generated code must update a decision record (Confluence, ADR, etc.) with:

  • What the code solves
  • Why this design was chosen
  • Known edge cases
  • Trade-offs considered

This forces reasoning before merging.
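As a sketch, a lightweight decision record covering these four points might look like the following. The headings and example content are illustrative, not a prescribed template:

```markdown
# Decision Record: <short title>

## What the code solves
One-paragraph problem statement, e.g. "Retries on transient HTTP
failures in the payment client."

## Why this design was chosen
The reasoning, including which parts were AI-suggested and why the
suggestion was accepted (or modified).

## Known edge cases
- ...

## Trade-offs considered
- Alternative A: rejected because ...
```

Writing this before merging is the point: if you cannot fill in the "why" section, you have not yet done the reasoning.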


3. Rotate Ownership of Critical Areas

Make sure everyone on the team can describe the system’s most critical components:

  1. Create a list of critical areas
  2. Rotate code walkthrough responsibility
  3. Require reviewers to update the decision record during review

Examples of critical areas:

  • Authentication
  • Payments
  • Application pipeline (Program.cs, Startup.cs, middleware flow)
  • Infrastructure boundaries

If only one person understands a critical system, you are already in debt.
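One minimal way to implement the rotation in step 2 is a deterministic round-robin schedule, so a different engineer presents each critical area every sprint. This is an illustrative sketch; the area and team names are made-up examples:

```python
# Hypothetical examples; replace with your team's actual data.
CRITICAL_AREAS = [
    "authentication",
    "payments",
    "application pipeline",
    "infrastructure boundaries",
]
TEAM = ["alice", "bob", "carol"]

def walkthrough_schedule(sprint: int) -> dict[str, str]:
    """Assign a presenter to each critical area for a given sprint.

    Shifting the assignment by the sprint number rotates ownership,
    so over time everyone presents (and therefore must understand)
    every critical area.
    """
    return {
        area: TEAM[(sprint + i) % len(TEAM)]
        for i, area in enumerate(CRITICAL_AREAS)
    }

# Who presents what in sprint 0 vs. sprint 1:
print(walkthrough_schedule(0))
print(walkthrough_schedule(1))
```

The scheduling itself is trivial; the value is in making the rotation explicit and automatic instead of leaving walkthroughs to whoever happens to volunteer.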


Guardrails for AI Usage

Practical constraints reduce risk:

  • Keep AI-assisted PRs small
  • Require tests before merge
  • Prefer AI for scaffolding and refactoring, not architectural decisions
  • Enforce clear architectural boundaries
  • Reject complexity you cannot explain

AI should increase clarity.

If it increases opacity, something is wrong.
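The "keep AI-assisted PRs small" guardrail is easy to automate in CI. A minimal sketch that counts changed lines in a unified diff; the 300-line threshold is an arbitrary example, not a recommendation from the talk:

```python
MAX_CHANGED_LINES = 300  # arbitrary example threshold

def count_changed_lines(diff_text: str) -> int:
    """Count added/removed lines in a unified diff, ignoring file headers."""
    changed = 0
    for line in diff_text.splitlines():
        if line.startswith(("+++", "---")):
            continue  # "--- a/file" / "+++ b/file" headers, not changes
        if line.startswith(("+", "-")):
            changed += 1
    return changed

def check_pr_size(diff_text: str) -> bool:
    """Return True if the diff is within the size guardrail."""
    return count_changed_lines(diff_text) <= MAX_CHANGED_LINES

# Typical CI usage: pipe the output of `git diff origin/main...HEAD`
# into this check and fail the build when check_pr_size(...) is False.
```

A hard size limit is crude, but it forces the conversation: if an AI-assisted change cannot be split into reviewable pieces, that is itself a signal of unowned complexity.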


The Debugging Confidence Standard

Speed without comprehension is fragile.

Before merging AI-generated code, ask:

Would I confidently debug this without asking AI to explain it back to me?

If the answer is no, the cost has only been postponed.

And postponed cost compounds.


Conclusion

AI is a powerful amplifier.

It can amplify:

  • Understanding
  • Discipline
  • Shared ownership

Or it can amplify:

  • Complexity
  • Fragility
  • Dependency

The difference is not in the tool.

It is in whether we refuse to merge what we do not understand.
