Yuto Takashi
Refactoring for AI: When Your Code Reviewer is a Machine

Why You Should Care

If you're coding with AI assistants (ChatGPT, Claude, Copilot), you've probably noticed something weird: the rules for "good code" are changing.

Traditional refactoring advice assumed humans would read your code. But what if AI reads it more often than humans do? What if the AI gets confused by your "perfectly readable" code?

This is happening right now, and we need to talk about it.

The New Problem: Understanding Debt

We all know about technical debt – code we'll have to fix later. But AI-native development creates a different problem: understanding debt.

Here's the difference:

| Technical Debt                        | Understanding Debt                  |
|---------------------------------------|-------------------------------------|
| "This will be hard to change later"   | "Nobody knows why this works now"   |
| Future maintenance cost               | Immediate comprehension cost        |
| Can be paid back gradually            | Blocks you right now                |

What causes it?

When AI generates code:

  • You don't understand it – it works, so you ship it
  • No consistency – different patterns every time
  • Over-complicated – AI adds edge cases you didn't ask for

The cost of writing code went down. The cost of understanding code went up.

But Wait... Do We Even Need to Understand It?

Here's where it gets interesting.

These days, I don't actually read code much anymore. When I need to understand something:

  1. Ask AI: "What does this function do?"
  2. Ask AI: "Why was it designed this way?"
  3. Ask AI: "What happens if I change X?"

If AI can explain code better than humans can read it, is "human readability" still the goal?

Plot twist: Sometimes yes, sometimes no. Let me show you where AI fails.

When AI Gets Stuck: The Debug Loop of Doom

You've probably experienced this:

You: "This function has a bug, can you fix it?"

AI: *adds console.log()*
AI: *adds another console.log()*
AI: *adds error handling that doesn't help*
AI: *adds more logs in random places*
AI: *suggests rewriting the whole thing*

You: 😤

AI is bad at debugging because:

  • No memory – forgets what it already tried
  • No hypothesis – just throws solutions at the wall
  • No quit point – keeps trying forever

The lesson: AI can generate and explain code well, but it can't investigate problems well.

The New Refactoring Goal: Make AI Not Get Lost

Traditional refactoring optimized for human brains:

  • Short variable names → clear names
  • Long functions → small functions
  • Complex logic → simple logic

New refactoring optimizes for AI accuracy:

  • Small scope – AI loses track in big files
  • Clear dependencies – AI can't handle implicit coupling
  • Less state – AI can't track global mutations
  • More tests – AI needs validation checkpoints

Interesting fact: These overlap a lot! "Good code for humans" and "good code for AI" aren't that different... yet.
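A minimal sketch of what that overlap looks like in practice (the cart code is illustrative, not from any real project):

```javascript
// Before: implicit coupling through module-level state.
// To know what `total` is, the AI has to scan the whole file.
let total = 0;
function addItemImplicit(price) {
  total += price; // hidden mutation
}

// After: small scope, explicit dependencies, no shared state.
// An AI assistant (or a human) can reason about this in isolation.
function addItem(cart, price) {
  return { ...cart, total: cart.total + price };
}

const cart = addItem({ total: 0 }, 25);
console.log(cart.total); // 25
```

The second version is easier for humans too – which is exactly the overlap.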

Where Humans and AI Disagree

Function Size

Humans prefer:

// I want to see the whole story in one place
function processUser(user) {
  // validate
  if (!user.email) throw new Error("invalid user");
  // transform
  const record = { ...user, email: user.email.toLowerCase() };
  // save (db and mailer are illustrative helpers)
  db.save(record);
  // notify
  mailer.send(record.email, "Welcome!");
}

AI prefers:

// I can jump between functions instantly
function processUser(user) {
  const validated = validate(user);
  const transformed = transform(validated);
  const saved = save(transformed);
  notify(saved);
}

For humans, jumping between files breaks mental flow.

For AI, it costs nothing.

The Practical Answer

Right now? Optimize for AI.

Why?

  • Humans can ask AI to explain the flow
  • AI can't ask humans to restructure for better parsing
  • AI's limitations are more constraining

Practical Tips: Stop the Debug Loop

1. Narrow the scope

❌ "Fix the bug in this file"
✅ "Check if validateEmail() correctly handles subdomains"
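For illustration, here's a hypothetical validateEmail() – the kind of narrow target worth pointing the AI at. Both regexes are sketches, not production-grade email validation:

```javascript
// Naive version: a plausible subdomain bug. The regex allows only one
// dot-separated segment after the @, so user@mail.example.com fails.
function validateEmailNaive(email) {
  return /^[^@\s]+@[^@\s.]+\.[^@\s.]+$/.test(email);
}

// Fixed version: accepts one or more dot-separated domain labels.
function validateEmail(email) {
  return /^[^@\s]+@[^@\s.]+(\.[^@\s.]+)+$/.test(email);
}

console.log(validateEmailNaive("user@mail.example.com")); // false
console.log(validateEmail("user@mail.example.com"));      // true
```

A prompt like "check how validateEmail() handles subdomains" points the AI straight at that regex instead of the whole file.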

2. You make the hypothesis, AI tests it

❌ "Why is this broken?"
✅ "I think the issue is timezone handling. Check lines 45-60"

3. Three strikes rule

If AI tries the same approach 3 times, stop and rethink:

  • Reset the conversation
  • Try a different AI
  • Debug it yourself

4. Separate branches for AI experiments

# Don't let AI pollute your main branch
git checkout -b ai-debug-session

# Let it try stuff
# If it works, cherry-pick the good parts
# If not, delete the branch

5. Always generate tests with features

❌ "Build a login system"
✅ "Build a login system with unit tests"

When to Refactor

Red flags that you need to refactor:

  • AI gets confused by the same code 3+ times
  • You can't explain what a function does
  • Adding features requires touching 5+ files
  • Tests are flaky or missing

Green lights to refactor:

  • Between sprints
  • Before adding major features
  • When you have dedicated time (not Friday afternoon)

Quick wins:

  1. Split big functions (>50 lines)
  2. Remove global state
  3. Add tests to untested code
  4. Extract magic numbers to constants

Do one per day. Don't try to refactor everything at once.
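Quick win #4 in miniature (the timeout value and function names are made up for the example):

```javascript
// Before: a magic number the AI has to guess the meaning of.
function isSessionExpiredBefore(ageMs) {
  return ageMs > 1800000;
}

// After: the constant carries the intent.
const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30 minutes

function isSessionExpired(ageMs) {
  return ageMs > SESSION_TIMEOUT_MS;
}

console.log(isSessionExpired(31 * 60 * 1000)); // true
```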

The Unanswered Questions

Honestly? I don't have all the answers.

  • Will AI's preferences change with new models?
  • Should we really deprioritize human readability?
  • What if AI learns to handle complexity better?

What I do know:

  • The question "Who is this code for?" is now real
  • AI's debugging limitations are the current bottleneck
  • Optimizing for "AI won't get lost" is a useful heuristic

Try This Today

Pick one function that's been giving AI trouble:

  1. Break it into smaller pieces (one responsibility each)
  2. Add tests
  3. Ask AI to debug something in that area
  4. See if it performs better

Then tell me in the comments – did it work?


Discussion

What's your experience?

  • Do you refactor differently when using AI?
  • Have you found other patterns that help/hurt AI understanding?
  • Am I overthinking this? 😅

Drop your thoughts below. I'm still figuring this out, and I'd love to hear what's working (or not) for you.


I write more about these kinds of thought processes and engineering decisions on my blog.

If that sounds interesting: https://tielec.blog/
