How AI quietly broke two things on our .NET team at the same time. In opposite directions. And nobody noticed.
TL;DR
Juniors ship fast and cannot debug anything. Seniors are slower to debug than they used to be. The AI writes immaculate code with the error handling of a sleep-deprived intern. And everyone is too busy to care until 2am.
2013 vs 2026
In 2013, joining a .NET team meant one thing.
You got stuck. You stayed stuck. You Googled the same NullReferenceException four times. You finally understood it. You never forgot.
In 2026, joining a .NET team means opening a chat window.
Which is faster. Genuinely. No sarcasm. The AI is great.
But somewhere between then and now, something got lost. And we did not notice until we really needed it.
The Juniors: They Never Had to Struggle
AI removed the productive suffering.
Now a junior hits a wall, asks the AI, gets a fix in ten seconds, ships. Great velocity. Zero scar tissue.
Ask them to trace a value through .NET middleware they did not write. Ask them to explain why an async deadlock only happens under load. Ask them to open the debugger and step through something cold.
You can see the exact moment the confidence leaves their face.
"I have trained you well. Now debug this StackOverflowException in production without me."
-- The AI, logging off at exactly the wrong moment
The muscle never got built. Not their fault. Ours. We handed them a Ferrari before they knew how to drive.
The Seniors: They Traded It In
Before AI: Senior hits a weird EF Core bug. Reads the stack trace. Holds the whole call chain in their head. Finds it in twelve minutes.
After eight months of AI: Senior pastes the stack trace. AI says check the DbContext lifetime. Senior checks. Fixed in four minutes.
Still fast. But notice what did not happen.
The senior did not debug. The senior supervised debugging. That is a different skill. And it atrophies quietly.
The reflex goes. You do not notice it leaving. You just notice one day that reading a raw stack trace feels harder than it used to.
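If "check the DbContext lifetime" sounds abstract, the usual culprit looks something like this. A sketch, not anyone's real code -- OrderCache and AppDbContext are hypothetical names. DbContext is registered scoped by default; a singleton that takes one in its constructor quietly shares a single, non-thread-safe instance across every request.

```csharp
// Program.cs -- the classic lifetime mismatch.
builder.Services.AddDbContext<AppDbContext>(o => o.UseSqlServer(connString)); // scoped by default

// Singleton capturing a scoped dependency: one DbContext for the whole app.
// With scope validation on (Development default) this throws at startup;
// without it, it runs fine at your desk and falls over under concurrent load.
builder.Services.AddSingleton<OrderCache>(); // ctor: OrderCache(AppDbContext db)

// One fix: give the singleton a factory and create a short-lived context per use.
builder.Services.AddDbContextFactory<AppDbContext>(o => o.UseSqlServer(connString));
// ctor becomes: OrderCache(IDbContextFactory<AppDbContext> factory)
// usage:        using var db = factory.CreateDbContext();
```

The four-minute fix is real. The twelve-minute version is where you learn why the four-minute version works.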
Act Three: 2am. Production is down. AI is rate limited.
We will leave that one to your imagination.
The AI: Crime Scene, No Witnesses
catch (Exception ex)
{
    _logger.LogError("An error occurred."); // groundbreaking stuff
    throw;
}
AN. ERROR. OCCURRED.
Not which order. Not which customer. Not whether the charge went through before it exploded -- which is literally the only thing anyone needs to know at 2am.
Here is what a human writes on a Friday when they are scared:
catch (PaymentGatewayException ex) when (ex.IsTransient)
{
    // Do NOT retry blindly -- charge may have gone through. Check idempotency key first.
    _logger.LogError(ex, "Gateway failure for Order {OrderId}. Charged: {WasCharged}",
        request.OrderId, ex.ChargeAttempted);
    throw;
}
One gets debugged in twenty minutes. The other gets debugged while someone asks if the customer was charged twice.
Guess which one the AI writes.
Nobody Read The PR Either
Open diff. Looks like code we would write. Tests pass. LGTM.
// perfectly formatted. reviewed in 30 seconds. approved.
// contains a classic EF Core N+1 that will destroy the DB under any real load.
// but sure. LGTM.
return orders.Select(o => new OrderDto {
    Items = _context.OrderItems
        .Where(i => i.OrderId == o.Id)
        .ToList() // new query. per order. every time. forever.
});
Junior shipped it. Senior approved it. AI has no idea what N+1 means emotionally. Database is about to have a very bad day.
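For the record, the fix is not exotic. A sketch, assuming OrderDto and OrderItemDto shaped roughly like the entities (both hypothetical here): keep the projection inside one IQueryable and let EF Core build a single query instead of one per order.

```csharp
// One round trip: project orders and their items in a single query,
// which EF Core translates to a JOIN (or a split query, if you opt in).
return await _context.Orders
    .Select(o => new OrderDto
    {
        Id = o.Id,
        Items = o.Items
            .Select(i => new OrderItemDto { Id = i.Id, Sku = i.Sku })
            .ToList()
    })
    .ToListAsync(cancellationToken);
```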
And Then There Is The Pressure
The sprint does not stop. The tickets do not stop. The stand-up is in twenty minutes.
So the junior asks the AI because asking a senior feels like interrupting someone who is drowning. The senior approves the PR quickly because they have five more tickets. Nobody writes the why comment because the ticket closes today and tomorrow is already full.
Nobody is doing anything wrong. Everyone is just doing the only thing the system has time for.
It is not a skill problem. It is a system problem wearing a skill problem's coat.
"Move fast and break things"
-- Someone who never had to debug the things that got broken
The Only Checklist A Human Reviewer Needs In 2026
The linters catch the N+1. The analyzers catch the missing CancellationToken. Let the tools do the tools' job.
Here is what only a human can check.
- [ ] Does this solve the RIGHT problem, not just the one in the ticket?
- [ ] Does this contradict a decision made in another service six months ago?
- [ ] Is this shortcut acceptable given what is coming next quarter?
- [ ] Will the on-call person understand what went wrong from the logs alone?
- [ ] Is this junior developing a habit we should address, not just fix?
Five questions. All of them require someone who was in the meeting, knows the history, or has been burned before.
The actual value of a human reviewer in 2026 is not spotting the N+1.
It is knowing we tried this exact pattern in 2023 and it took down prod.
Pitfalls To Avoid
Using AI to fix bugs that AI wrote. Junior hits a bug in AI code, asks the AI, moves on. The mental model never forms. Same bug, different form, three months later.
Trusting green tests as proof of correctness. AI tests the behaviour it implemented. If it implemented the wrong behaviour, the tests pass. Enthusiastically.
Letting the PR description review the code for you. AI writes great PR descriptions. So great that reviewers read the description and skim the diff. The description says what it does. The code is where it goes wrong.
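The green-tests pitfall, in concrete form. Everything below is hypothetical -- Checkout.Total is not anyone's real API -- but the shape is real: the AI implemented discount-before-tax, then wrote a test that enshrines discount-before-tax.

```csharp
// xUnit test the AI generated alongside its own implementation.
// The ticket said: apply the discount AFTER tax.
// The AI applied it BEFORE tax -- and then tested exactly that.
[Fact]
public void Total_AppliesDiscount()
{
    // (100 - 10) * 1.20 = 108 -- matches the wrong implementation, so it's green.
    // The spec wanted 100 * 1.20 - 10 = 110.
    var total = Checkout.Total(price: 100m, discount: 10m, taxRate: 0.20m);
    Assert.Equal(108m, total);
}
```

A green suite proves the code does what the code does. Only a human who read the ticket can say whether that was the right thing.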
Closing
The juniors are fast. The seniors are comfortable. The AI is confident.
Somewhere in the middle is the production incident nobody has the instincts to fix at speed anymore.
The lost art is not gone. It just needs a process around it -- not heroism and overtime.
Start with the two checklists. Ship them this sprint.
Drop a comment if you have hit this differently. Genuinely asking.


