Hello, I'm Tuan.
I've been doing technical interviews for backend roles recently. The pattern across the last few rounds genuinely scared me — and I don't think we, as an industry, are talking about it honestly.
Most of the candidates I sat with could not debug.
Not "struggled to debug." Not "took longer than I expected." Could not, in any meaningful sense, form a hypothesis about a broken system and test it. The moment something didn't work, the same reflex kicked in: paste the error into Cursor, paste the file, wait, paste again.
I'm not writing this to gatekeep. I'm writing this because something is breaking in our profession, quietly, and the seniors who notice it are mostly staying quiet because saying it out loud sounds like an old man yelling at clouds.
Fine. I'll be the old man.
The interview that made me start writing this
Strong resume. A few years of experience. Confident on system design, fluent in Redis caching, queue patterns, the usual.
I gave him a small task: a Node.js service throwing 500s on a specific input. Logs included. He could use any tool, including AI.
He opened Cursor and pasted the stack trace. Then the source file. Then, when the suggestion didn't fix it, the new error. Then the next error.
Forty minutes in, he had four open tabs of AI suggestions and no working theory of what was actually wrong.
The bug, when I finally walked him through it, was a hallucinated method. Three weeks earlier, his AI assistant had suggested .findUserByEmailOrThrow() — a method that does not exist in our ORM. The code shipped because the test mocked the entire data layer and returned something truthy. In production, the call resolved to undefined, the next line dereferenced it, and the service crashed on a specific edge case nobody had hit yet.
He had accepted that line without reading it. He could not have caught it, because he had no model of which methods his ORM actually exposed.
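To make the failure mode concrete, here's a minimal sketch. Every name in it is hypothetical — this is not our codebase — and I'm simulating the ORM's "unknown finder resolves to undefined" behavior with a Proxy, since the exact mechanics vary by library. The shape of the failure is the point:

```javascript
// Hypothetical sketch. Simulate an ORM whose dynamic finder resolution
// silently yields undefined for method names it doesn't recognize.
const prodOrm = new Proxy(
  {
    async findUserByEmail(email) {
      return { id: 1, email }; // the method that actually exists
    },
  },
  {
    get(target, name) {
      if (name in target) return target[name];
      return async () => undefined; // unknown finder "resolves" to undefined
    },
  }
);

// The AI-suggested handler. findUserByEmailOrThrow was hallucinated.
async function handler(orm, email) {
  const user = await orm.findUserByEmailOrThrow(email); // resolves to undefined
  return user.id; // dereferences undefined in production -> 500
}

// The unit test mocks the entire data layer — and in doing so, it invents
// the hallucinated method into existence. The test goes green.
const mockOrm = {
  async findUserByEmailOrThrow() {
    return { id: 42 }; // truthy, so everything downstream "works"
  },
};

(async () => {
  console.log(await handler(mockOrm, "a@b.c")); // 42 — passes in CI

  try {
    await handler(prodOrm, "a@b.c");
  } catch (err) {
    console.log(err instanceof TypeError); // true — crashes in production
  }
})();
```

Notice that the mock doesn't merely fail to catch the bug; it actively manufactures the missing method, so no amount of coverage on this test would ever surface it. Only reading the line against real knowledge of the ORM's API would.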
I'm not telling this story to mock him. He was smart, articulate, and probably understood distributed systems better than I did at his age.
He just had no internal model of how the code in front of him actually executed.
This wasn't an outlier
I assumed it was. It wasn't.
Across the recent interviews, the same patterns kept showing up:
- Most candidates reached for an AI tool within the first minute of seeing an error, before forming a single hypothesis of their own.
- A majority could not walk me through what their own committed code did, line by line, when asked.
- Several had shipped code containing functions or fields that don't exist in the libraries they claimed to use — hallucinations that survived because tests mocked around them.
- A non-trivial number told me, unprompted, that they "don't really debug anymore."
These were not bootcamp graduates. These were mid-level engineers from companies you've heard of.
And it isn't just my interview room. METR's 2025 randomized study on experienced open-source developers found something nobody wants to repeat at standup: developers using AI tools felt 20% faster, and were measured 19% slower. GitClear's 2024 analysis of millions of commits found that the rate of "churned code" — lines added and removed within two weeks — has roughly doubled since Copilot went mainstream. We are shipping more code that we then immediately rip back out.
Something has shifted. And the people best positioned to call it out — senior engineers — are mostly not, because the same tool is making them faster, and it's hard to criticize what's working for you.
What vibe coding actually does to your brain
Here's the part nobody on Twitter wants to admit.
When you debug a problem yourself — really debug it, with print statements and bad guesses and dead ends — you're not just fixing a bug. You're building a mental model of the system. Every wrong hypothesis you eliminate teaches you something about how this code behaves under pressure.
That model is the entire job.
This isn't folk wisdom. Cognitive science has had a name for it for forty years: desirable difficulty. Robert Bjork's research showed that learning sticks in proportion to the friction you experience while doing the task — not in spite of it. Make the practice too easy and the skill never consolidates. The struggle isn't a tax on learning. It is the learning.
Writing code is the easy part. AI is great at the easy part. But the model — the intuition for where the bug probably is, the smell that says this looks fine but it isn't — only forms when you struggle.
When you outsource the struggle, you outsource the model.
AI doesn't make you a worse developer. It makes you a developer who never becomes a better one.
It's like learning chess by only playing with the engine's evaluation bar visible. You'll know which moves are good. You will never know why. The day the bar disappears, you are not a chess player. You are someone who used to be near a chess engine.
The strongest counters, addressed honestly
There are two arguments against everything I just wrote that I take seriously. I want to deal with both, because if I don't, the comments will — and probably less charitably.
Counter 1: "Juniors will learn faster, not slower. AI lets them ship more, hit more edge cases, see more of the system."
I want to believe it. I genuinely do.
I'm just not seeing it in the people walking into the interview room, and I'm not seeing it in the data either. Volume of code shipped is not the same as depth of understanding, and "shipping more" only teaches you something if you have the model to interpret what you shipped. Without the model, more output is just more noise — which is exactly what GitClear's churn numbers look like.
Maybe I'm wrong. Maybe in three years the data flips and juniors reach senior-level intuition faster than ever. I'd love to write the apology post.
Counter 2: "Calculators didn't kill math. Compilers didn't kill assembly devs. IDEs didn't kill C programmers. Every abstraction triggers this panic and the industry adapts. You're being a boomer."
This is the argument I respect most, and it's also the one that's wrong in a specific, important way.
Every previous abstraction in software was deterministic. A calculator that returns the wrong answer for 2 + 2 gets recalled. A compiler that miscompiles is a P0 bug that gets patched the same week. An IDE that auto-completes a method that doesn't exist is broken software. The contract was: the abstraction is correct; you can trust the output and focus on the layer above.
AI is the first abstraction in our field whose output can be confidently, plausibly wrong, by design, and we ship it anyway. There is no recall. There is no patch. The "bug" is the entire premise.
That changes what skill is required of the user. With a compiler, you didn't need to verify its output line by line — that would defeat the point. With AI, verifying the output line by line is the only thing keeping you from shipping nonsense. And verification requires the exact skill — reading code, building a model, smelling wrongness — that vibe coding skips.
Calculators didn't require a numeracy detector. AI requires a bullshit detector. You can only build that detector by being wrong, alone, many times, before you ever touch the tool.
This is not the same transition. It looks the same from the outside. Underneath, the contract has flipped.
The skill that's quietly disappearing
The skill is not "writing code." The skill is forming a hypothesis about a system you don't fully understand and testing it cheaply.
This is what senior engineers do all day. It's why we get paid. It's also the thing AI cannot teach you, because you can only learn it by being wrong in public, repeatedly, with consequences.
Every time a junior pastes an error into Claude instead of reading it, a small skill atrophies. Compound that across two years of "productivity" and you get someone who can ship a CRUD app from scratch but freezes when production breaks at 2 AM.
Stack Overflow, by the way, was not the same thing. It forced you to read an answer that was usually for a slightly different problem, then adapt it. That adaptation step — "this isn't quite my situation, but if I change X..." — was the learning.
AI removes the adaptation step. It gives you something that looks like it should work, you accept it, you ship it, you learn nothing.
Stack Overflow was a textbook. AI is the answer key.
What I tell juniors now
I've stopped saying "don't use AI." That's both unrealistic and condescending.
What I tell them instead:
- Form your own guess before reaching for the tool. Even a wrong guess. Especially a wrong guess. The wrong guess is where the model gets built.
- Before you accept any AI suggestion, explain to yourself why it works. Out loud, if you have to. If you can't, don't paste it.
- Once a week, fix something the hard way. Pick a bug. Solve it without AI. It will feel slow. That's the point.
This isn't anti-AI. It's anti-atrophy.
The goal is not to never use the tool. The goal is to make sure that when the tool is wrong — and it will be wrong, on the day production is on fire and the stakes are real — you are not standing in front of the wreckage with no idea what to do.
The uncomfortable prediction
In five years, debugging legacy systems will be the highest-paid niche in software, because almost nobody under thirty will be able to do it.
I am not joking and I am not exaggerating. The systems we are building today will still exist. The bugs in them will still exist. Someone will have to walk into a 200,000-line codebase, follow execution by hand, and figure out why a specific request is timing out on Tuesdays.
That person will charge a fortune. There will not be many of them.
If you are early in your career, this is genuinely good news — but only if you choose discomfort now.
The friction of debugging without AI is not a bug. It is the skill being installed.
Skip it now and the bill comes due later. It always does.
If you're a junior reading this and you disagree — I genuinely want to hear why. Tell me what I'm getting wrong, in the comments. I'd rather be argued out of this than be right about it.
I write about backend, production incidents, and the unglamorous parts of being a software engineer. Follow if that's your kind of thing.