DEV Community

Tyson Cung
AI Coding Tools Are Making Developers Worse — And the Research Proves It

A Reddit thread on r/ExperiencedDevs blew up recently. A developer confessed they'd become "a tourist in their own codebase" after months of leaning on AI coding tools. They described a "hollow betrayal of the craft." Hundreds of experienced engineers piled on, sharing eerily similar stories.

I spent a week digging into this — the research, the Reddit confessions, the actual data. Here's what I found.

The Confession That Started It All

The original poster described a creeping dread: they'd stopped understanding the code they were shipping. AI wrote it, AI explained it, and when something broke, they froze. One commenter with 118 upvotes nailed it: "Speed was never the bottleneck. The understanding was."

That line stuck with me. We've been optimising for velocity — PRs merged, tickets closed, lines shipped — while the actual bottleneck in software engineering has always been comprehension.

The Research Backs It Up

This isn't just vibes. Two major studies dropped recently that confirm what these developers felt intuitively.

The METR Study (July 2025) ran a randomised controlled trial with experienced open-source developers working in their own repositories. The result? Developers using AI tools took 19% longer to complete tasks. Not faster. Slower. The researchers found AI reduced active coding and debugging time, but developers burned that saved time on prompt engineering, reviewing AI output, and fixing hallucinated code. The overhead ate the gains.

Anthropic's own research (January 2026) — yes, the company behind Claude — published findings showing AI use "impairs conceptual understanding, code reading, and debugging abilities without delivering significant efficiency gains on average." When the company building the tool publishes research saying the tool hurts learning, you should probably pay attention.

Then there's Andrej Karpathy, former head of AI at Tesla, who said publicly: "I am starting to atrophy my ability to write code manually." If it's happening to Karpathy, it's happening to you.

The Operational Debt Nobody Talks About

Here's what scares me more than individual skill loss: operational debt.

When one developer builds a feature entirely through AI prompts, that code becomes a black box the moment they leave. The next person inherits a codebase that nobody understands — not the person who wrote it (they prompted it), not the AI (it has no persistent memory), and definitely not the new hire trying to debug it at 2 AM.

One Redditor shared a story about a colleague who built an entire Three.js animation system through ChatGPT. Beautiful output. Then the colleague left. Nobody could maintain it. The whole thing had to be rewritten from scratch.

That's not a productivity tool. That's a time bomb with a nice UI.

Why Our Brains Fall For It

A commenter named Stellariser dropped the most technically precise take in the whole thread: "Everything an LLM generates is a hallucination. It's interpolating, not extrapolating."

This matters because developers start treating AI output as authoritative when it's actually statistical pattern matching. The AI generates plausible-looking code, you accept it because it runs, and your brain files it under "understood" when you actually just skimmed it. Over weeks and months, your mental model of the codebase degrades. You're not learning — you're approving.

The Junior Dev Trap

A junior developer named byshow posted something that genuinely bummed me out: "It's lose-lose for me. Don't use AI and I'm behind on metrics. Use AI and I never build real confidence."

A manager responded: "I'm ready to ban AI for the junior I manage. It ensures they'll never grow beyond junior level."

This is the part nobody's solving well. Juniors need the struggle. The frustration of debugging, the satisfaction of finally understanding why something works — that's how you build engineering intuition. Skip that phase and you get someone who can prompt but can't think.

So What Actually Works?

The original poster proposed what they called the "Junior Intern Rule": treat AI like a junior intern. Let it handle the grunt work — boilerplate, test scaffolding, regex patterns. But never let it touch core logic. And manually refactor every line it produces so you actually understand what shipped.
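To make the rule concrete, here's a hypothetical sketch of what "intern-level" AI work plus a human review looks like in practice. The regex, function name, and tests below are my own illustration, not from the thread: imagine an AI tool suggested the date-matching pattern, and the tests are the part you write yourself before accepting it.

```python
import re

# Hypothetical example: suppose an AI tool suggested this regex for
# matching ISO 8601 calendar dates (YYYY-MM-DD). Under the Junior
# Intern Rule, we treat it like an intern's PR: fine to delegate,
# but it doesn't ship until we've tested and understood it ourselves.
ISO_DATE = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

def is_iso_date(s: str) -> bool:
    """Return True if s has the shape of an ISO 8601 calendar date."""
    return ISO_DATE.fullmatch(s) is not None

# The tests are written by the human, not prompted -- this is where
# the understanding lives.
assert is_iso_date("2026-01-31")
assert not is_iso_date("2026-13-01")   # no month 13
assert not is_iso_date("2026-02-0")    # truncated day

# Reviewing line by line also surfaces what the intern missed:
# the pattern checks shape, not calendar validity, so a nonexistent
# date like Feb 30 still matches. Knowing that limitation is exactly
# what "read before you accept" buys you.
assert is_iso_date("2026-02-30")
```

The point isn't the regex; it's that the human-authored tests and the discovered limitation are the artefacts of understanding. If you can't write those yourself, you've approved the code, not learned it.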

I think that's a solid starting framework. Here's what I'd add:

  • Read before you accept. Every AI suggestion. Line by line. If you can't explain what it does, reject it.
  • Debug manually first. Give yourself 15 minutes before reaching for AI help. That struggle is where learning happens.
  • Write the architecture yourself. AI can fill in implementation details, but the system design — the "why" behind the code — needs to come from your brain.
  • Rotate AI-free days. Like training wheels, sometimes you take them off to see if you can still ride.

AI coding tools aren't going away. The question is whether you're using them as a power tool or a crutch. Right now, the evidence says most of us are leaning toward crutch.

The developers who'll thrive in 2026 and beyond aren't the ones who prompt the fastest. They're the ones who still understand what they shipped.
