DEV Community

brian austin


I asked Claude to review my code for 30 days straight. Here's what actually happened to my skills.


Everyone's debating whether AI makes developers better or worse. I ran a personal experiment to find out.

30 days. Every piece of code I wrote went through Claude for review before I committed it.

Here's the honest result.


The setup

I'm a mid-level backend developer. Node.js, some Python, occasional Go. I've been coding professionally for 6 years.

The rules I set:

  1. Write the code myself first — no AI generation
  2. Submit to Claude for review
  3. Read every suggestion, implement what made sense
  4. Journal what I learned vs. what I just copy-pasted

I tracked two things: did I understand the change? and would I have caught this myself in 3 months?


Week 1: The "oh wow" phase

Day 3: Claude caught a subtle race condition in a Node.js async function I'd written. I'd been writing similar patterns for years.

// My original code
async function processQueue(items) {
  const results = [];
  items.forEach(async (item) => {  // BUG: forEach doesn't await
    const result = await processItem(item);
    results.push(result);
  });
  return results;  // Returns empty array
}

// Claude's suggestion
async function processQueue(items) {
  const results = await Promise.all(
    items.map(item => processItem(item))
  );
  return results;
}

I knew about async/await. I didn't have this specific pattern internalized. After Claude flagged it, I started seeing this mistake everywhere in legacy codebases.
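To see the bug concretely, here's a runnable sketch. `processItem` is a hypothetical stand-in that resolves on a timer, which makes the failure deterministic: the `forEach` callbacks finish after the function has already returned its (empty) array.

```javascript
// Hypothetical stand-in for the real work: resolves on a short timer,
// so the stray forEach callbacks run *after* the function returns.
const processItem = (item) =>
  new Promise((resolve) => setTimeout(() => resolve(item * 2), 10));

async function buggyProcessQueue(items) {
  const results = [];
  items.forEach(async (item) => {
    results.push(await processItem(item)); // runs after the return below
  });
  return results; // still empty at this point
}

async function fixedProcessQueue(items) {
  return Promise.all(items.map((item) => processItem(item)));
}

async function main() {
  const broken = await buggyProcessQueue([1, 2, 3]);
  console.log(broken.length); // 0 — none of the pushes have happened yet
  const fixed = await fixedProcessQueue([1, 2, 3]);
  console.log(fixed); // [ 2, 4, 6 ]
}

main();
```

The root cause: `forEach` ignores the promises returned by async callbacks, so nothing ever waits for them. `map` plus `Promise.all` collects those promises and awaits them all.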

Did I learn it? Yes. I've never made that mistake again.


Week 2: The "pattern recognition" shift

Something changed. I started anticipating Claude's feedback before submitting.

I'd write code, then think: Claude's going to say this error handling is too broad. Let me fix it first.

This is the mechanism the "AI should elevate your thinking" crowd is talking about. The review process was changing my internal model — not replacing it.

# Before (my instinct)
try:
    result = external_api_call()
except Exception as e:
    log.error(e)
    return None

# After (internalized from Claude reviews)
try:
    result = external_api_call()
except requests.Timeout as e:
    log.warning(f"API timeout, retrying: {e}")
    return retry_with_backoff(external_api_call)
except requests.ConnectionError as e:
    log.error(f"Connection failed: {e}")
    raise ServiceUnavailableError() from e
except Exception as e:
    log.error(f"Unexpected error: {e}")
    raise

I'd seen this pattern in docs. Claude repeating it 12 times made it mine.
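The `retry_with_backoff` helper above is something I'm assuming you have lying around; it's a few lines to write. Here's a hypothetical Node sketch of the same idea — retry a flaky async call with exponential backoff, rethrowing the last error once the attempt budget runs out:

```javascript
// Hypothetical retry helper: exponential backoff between attempts,
// rethrows the last error when all attempts are exhausted.
async function retryWithBackoff(fn, { retries = 3, baseDelayMs = 100 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delayMs = baseDelayMs * 2 ** attempt; // 100ms, 200ms, 400ms...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// Usage: a call that fails twice, then succeeds on the third attempt.
let calls = 0;
const flaky = async () => {
  calls += 1;
  if (calls < 3) throw new Error('timeout');
  return 'ok';
};

retryWithBackoff(flaky, { baseDelayMs: 5 }).then((result) => {
  console.log(result, calls); // ok 3
});
```

In production you'd usually add jitter to the delay and only retry errors that are actually transient (timeouts, 429s, 503s), not every exception.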


Week 3: The dangerous week

I got lazy.

Day 17: I submitted a function without really thinking. Claude suggested a refactor. I implemented it without reading why.

Day 18: Same.

Day 19: I had a bug. The kind Claude had already trained me to avoid. But because I'd been copy-pasting suggestions, I hadn't actually learned — I'd just been executing.

This is the much-cited "19% slower" finding playing out in real life. When you use AI as an execution tool instead of a learning tool, you lose the cognitive reps. The skill atrophies.

The difference: Using AI to check your thinking builds skills. Using AI to replace your thinking erodes them.


Week 4: The calibration

I changed the rule: I had to explain each suggestion in my journal before implementing it.

Five words minimum. "Avoids broad exception swallowing." That's enough. Just enough friction to force understanding.

Retention doubled. Bug rate dropped.


The actual numbers after 30 days

  • Code review time: Down ~40% (I was pre-filtering obvious issues)
  • Bugs caught in PR: Down ~60% (I was writing better code to start)
  • Skills I learned: 11 distinct patterns I can now write without AI
  • Skills I outsourced: 3 things I now can't do without AI (mostly regex edge cases — no regrets)
  • Week 3 regression: Real. Lazy AI use is worse than no AI use.

The uncomfortable conclusion

The "AI makes developers worse" crowd and the "AI makes developers better" crowd are both right — about different usage patterns.

The variable they're not measuring: do you understand what the AI told you?

If yes: you compound faster than any developer in history.
If no: you're a prompt-runner who's slowly losing the skills that got you the job.


What this costs

I ran this experiment using SimplyLouie — Claude API access for $2/month.

Not because I'm pitching you. Because when I ran the numbers, the $20/month ChatGPT subscription felt like a lot to spend on a tool I might use lazily. At $2/month, the stakes are low enough that I actually experimented instead of optimizing for ROI on each prompt.

Sometimes cheaper means more honest.


Discussion

Have you tracked your own skill trajectory with AI code review? Did you notice the Week 3 effect — where lazy usage actually set you back?

Curious whether this maps to other people's experience or if I'm an outlier.

Tagged: #discuss #ai #webdev #programming
