
When AI Makes You Forget How to Code

Jono Herrington on April 10, 2026

The junior engineer sat across from me in a conference room that smelled like stale coffee and stress. He had just shipped a feature that was worki...
Manoj Mishra

AI is removing the productive friction that used to build engineering judgment.
We’re getting faster at producing code, but slower at building understanding.

What worries me more is the downstream effect — review quality drops, architectural consistency weakens, and senior engineers absorb the cognitive load. Over time, teams may look productive on dashboards while actual engineering depth quietly erodes.

AI truly behaves like an exponent: strong engineering culture gets stronger, weak foundations deteriorate faster. The real challenge isn’t whether we use AI — it’s whether we deliberately preserve the learning friction that builds reasoning, debugging skills, and ownership.

Jono Herrington

You’re pointing at the part leaders will miss first.

The dashboard still looks healthy while the system underneath starts drifting. Review quality doesn’t fall off a cliff… it thins out slowly. Fewer questions. Less pushback. More acceptance of “good enough.”

And you’re right on the exponent.
AI doesn’t create the culture. It amplifies whatever was already there.

The uncomfortable part politically… most teams didn’t invest in strong foundations before they scaled generation. So the acceleration is exposing that gap faster than leadership can react.

Victor Okefie

The canary metaphor is the right one. The junior engineer who can't explain his own code isn't failing, he's signaling. The dashboard won't catch it because the code works. The tests pass. The only evidence is a feeling he's not sure he's allowed to name. Most organizations will miss this until the feeling becomes a fire. The ones that notice early will be the ones who asked the question before the metrics turned red. Not because they're smarter. Because they stopped assuming that working code and understood code are the same thing.

Jono Herrington

That’s exactly the signal. By the time the metrics catch it, the habit is already baked in. Leaders who notice early are usually the ones still close enough to the work to hear the hesitation before they see the failure.

Roger Wang

🧠 ClawMind Take

AI doesn’t just write code — it erases decision context.

Core Problem
• Code works ✅
• But why it was built this way is lost ❌

This leads to:
• inconsistent logic
• broken assumptions
• untraceable decisions

What Actually Matters

Preserve decision points at the moment they happen.

Not after. Not in hindsight.

ClawMind Approach
• Capture:
  • decision
  • reasoning
  • assumptions
• Store as:
  • replayable artifacts
  • linked to code / task
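
For illustration, a minimal sketch of what one captured record could look like (a hypothetical structure, not ClawMind's actual schema):

```python
# Illustrative only -- not ClawMind's real API, just a minimal sketch of
# "decision + reasoning + assumptions, linked back to code and task".
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision: str           # what was chosen
    reasoning: str          # why, captured at the moment it happened
    assumptions: list[str]  # what must stay true for this to hold
    commit: str             # link back to the code it shaped
    task_id: str            # link back to the task / ticket
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    decision="Retry uploads with exponential backoff, max 5 attempts",
    reasoning="Upstream API rate-limits in bursts; linear retries amplified it",
    assumptions=["Rate limits reset within ~60s", "Uploads are idempotent"],
    commit="a1b2c3d",
    task_id="PROJ-142",
)
```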

Why Tools + Governance Matter

Without structure:

AI → fast output → logic drift → system inconsistency

With ClawMind:

AI → decision captured → audited → consistent system evolution

One-line

AI generates code.
ClawMind preserves the decisions behind it.

Jono Herrington

The gap you’re describing is real.

Code ships. Context doesn’t.

Where I’d push a bit… tools don’t solve this by default. They just formalize whatever discipline already exists. If the team isn’t already stopping to think through decisions, capturing them becomes another checkbox instead of a signal.

The harder part is behavioral.
Getting engineers to pause long enough to have a decision worth capturing in the first place.

Without that, you just get better storage of shallow thinking.

Roger Wang

Then maybe the real issue isn’t storage at all.
It’s whether anything worth storing ever happened.

Are we just extracting answers from AI,
or actually learning from them?

Jonathan Murray

Been catching myself doing this too. Accepting AI output without actually understanding why it works.

But I think the skill is shifting, not disappearing. The devs who win aren't the ones who memorize syntax. It's the ones who know when something smells wrong. That's harder than writing it from scratch, honestly.

Every time we've made a tool cheaper, the number of people using it went up, not down. Spreadsheets didn't kill accountants. Cloud didn't kill ops. I think AI coding is the same pattern. The danger isn't the tool; it's whether we stop bothering to understand what it's doing.

Jono Herrington

I agree with that. The danger isn’t AI existing in the workflow. It’s losing the ability to tell when the output is wrong, fragile, or just poorly reasoned. That judgment is what keeps the tool useful instead of corrosive.

CapeStart

I’ve seen this too. Code works, but when you ask “why,” the answer isn’t there. That’s a different kind of risk.

Jono Herrington

Exactly. A missing “why” is a real risk because understanding is what lets a team debug, extend, and trust what got shipped.

Archit Mittal

The exponent framing is the most useful mental model I've seen for this. I run an automation consultancy and the pattern shows up even outside traditional engineering teams — business users building workflows with AI assistance hit the same wall. They create something that works, can't explain why it works, and when it breaks they have zero debugging capability.

What I've found helps: force an "explain what this does" step before shipping. Not in a code review — earlier. When the engineer finishes a task, they write a 2-3 sentence plain-language explanation of the logic. If they can't, that's the signal.
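
One way to make that step harder to skip, as a rough sketch: gate the merge on the explanation existing at all. (The `## Explanation` heading, the `PR_BODY` variable, and the word threshold are illustrative choices, not a standard.)

```python
# Rough sketch of a CI gate for the "explain what this does" step.
# Assumes the runner exposes the PR description in a PR_BODY env var and
# authors write a "## Explanation" section -- both are hypothetical names.
import os
import re
import sys

def has_explanation(pr_body: str, min_words: int = 25) -> bool:
    """True if the PR body has an '## Explanation' section with at least
    min_words of plain-language reasoning (roughly 2-3 sentences)."""
    match = re.search(r"##\s*Explanation\s*\n(.+?)(?=\n##|\Z)", pr_body, re.S)
    return bool(match) and len(match.group(1).split()) >= min_words

if __name__ == "__main__":
    if not has_explanation(os.environ.get("PR_BODY", "")):
        sys.exit("No plain-language explanation found. "
                 "If you can't explain the change, that's the signal.")
```

The gate can't judge whether the explanation is honest, but it forces the pause where the signal shows up.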

The other pattern I've noticed: AI-assisted work creates a false ceiling on difficulty. The junior dev's problem wasn't that they couldn't do hard things — it's that AI eliminated the mid-difficulty range where skills actually develop. They went from trivial tasks to AI-generated solutions for hard tasks, with no time spent in the uncomfortable middle where you're stuck for 30 minutes and then have the breakthrough that builds real understanding.

That uncomfortable middle is where expertise actually lives. Preserving it deliberately is a leadership problem, not a tooling problem.

Jono Herrington

That “explain it before you ship it” step is strong.

It forces the moment most teams are skipping right now… translating output into understanding.

What you’re seeing with the missing middle is exactly it.
AI collapses the gradient where people used to struggle just enough to learn.

And that’s the leadership tension.
If you optimize purely for output, that middle disappears completely. If you protect it, you take a short-term hit that most orgs aren’t willing to take.

That’s the political tradeoff no one wants to say out loud.

Steriani Karamanlis

This resonates with something we see in the market data every week. The dependency problem is not just cognitive; it is economic. As AI makes coding easier, the inference bill for staying sharp compounds quietly in the background, and most developers have no visibility into what that actually costs at scale. The teams that maintain genuine coding ability alongside AI assistance are not just protecting their craft; they are protecting their ability to audit what the AI is doing and catch the moments when it confidently generates something plausible but wrong. That judgment layer does not get cheaper when the tools get better, and it probably gets more valuable. The forgetting risk is real, but so is the cost of having no human in the loop who actually understands the code being shipped.

Jono Herrington

Strong point. The more teams rely on generated output, the more valuable real human audit ability becomes. That layer does not get cheaper just because generation does.

Harsh

This is the question I've been avoiding.

I can ship faster, but can I still code without AI? That's it. That's the fear.

Last week, I needed to write a simple loop. I froze. Not because it was hard. Because I realized I hadn't written one from scratch in months. AI had been doing it for me.

The skill fades silently. You don't notice until you need it and it's gone.

Thanks for naming this. Feels less lonely knowing others feel it too. 🙌

Jono Herrington

That’s the part people rarely admit out loud. Skill loss does not usually feel dramatic while it’s happening. It shows up later in a quiet moment when something basic suddenly feels unfamiliar.

Daniel Nwaneri

The friction mechanism is the part worth naming more precisely. Bad Stack Overflow answers forced skepticism accidentally — you got burned, you learned to verify. AI removes that forcing function. It's patient, confident, never annoyed when you ask the same question twice. The junior who can't explain his own function isn't lazy. He just never got burned. There was no moment where the confident answer failed publicly and made him rebuild the understanding from scratch.

The exponent framing is right, but it has a second-order effect your piece points at without fully naming: the engineers who notice the drift and pull back are self-selecting into a different capability tier. The ones who don't notice — or notice and don't stop — are compounding in the other direction. The gap between those two groups widens faster than any manager's dashboard will show.

The interview question that surfaces this: not "build a todo app" but "here's 200 lines of AI-generated code, tests pass, it's in production, something is wrong — find it." That's the new entry test. Can they interrogate confident output? Can they find the failure the model introduced while optimizing for elegance? That's the skill the friction used to build accidentally. Now someone has to build it deliberately.
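
For concreteness, a toy version of the kind of confidently wrong code such an exercise could use (entirely hypothetical, fifteen lines instead of 200, but the same shape: elegant, tested, and wrong):

```python
def batch_records(records, batch_size=100):
    """Group records into fixed-size batches for the upload API."""
    # The "elegant" grouper idiom: zip stops at the shortest iterator,
    # so any trailing partial batch is silently dropped on the floor.
    return [list(chunk) for chunk in zip(*[iter(records)] * batch_size)]

def test_batch_records():
    # Passes: 200 is divisible by 100, so there is no tail to lose.
    assert batch_records(list(range(200))) == [list(range(100)),
                                               list(range(100, 200))]

# In production: 250 records arrive, 50 are never uploaded, nothing errors.
```

The test suite is green, the code reads beautifully, and the failure only exists for inputs the tests never tried. Finding the dropped tail is the exercise.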

Jono Herrington

This is sharp.

Getting burned used to be part of the system. Now it’s optional.

So the engineers who manufacture their own friction are going to separate fast from the ones who don’t. And like you said… that split won’t show up cleanly in any metric leadership is watching.

That interview shift is the right direction too.

Not whether they can produce code.
Whether they can challenge something that already “works.”

That’s the skill that used to develop by accident. Now it has to be designed into how we hire and how we work.

Alexander

AI creates not only technical debt but cognitive debt as well.

Jono Herrington

Yes. And cognitive debt is harder to see because it does not show up in the repo right away. It shows up in weaker reasoning, thinner reviews, and slower recovery when something breaks.