We sprinted into the AI age of autocomplete IDEs. Now we’re waking up wondering why we forgot how to write a for-loop.
Introduction: this wasn’t a phase
When I wrote Part 1, it felt like a bad habit.
Something you could fix with a little discipline. Fewer prompts. More “intentional learning.” Maybe a weekend project without Copilot to cleanse the soul.
That’s not where we are anymore.
This isn’t a phase. This is the default.
AI-first coding isn’t optional now; it’s the baseline. Juniors aren’t “learning fundamentals and then using AI.” They’re learning with AI from day one. Sometimes only with AI. And seniors? Yeah, we use it too. We just tell ourselves it’s fine because we’ve “earned it.”
Somewhere along the way, “I understand this” quietly got replaced with “it works, don’t touch it.”
And look: this still isn’t an anti-AI piece. I use these tools daily. I like them. They’ve made me faster and more productive, and they occasionally make me feel like a wizard. But after sitting with this for a while, after reading the reactions, watching teams ship, and watching other teams quietly struggle, it’s clear something deeper is happening.
We didn’t just speed up coding.
We changed what gets rewarded.
Vibe coding won. Autocomplete won. Shipping won.
And now we’re living with the side effects.
Part 1 was the “oh no” moment.
Part 2 is the sober realization:
This is the environment we’re building careers in now, and if we don’t adapt deliberately, we’re going to wake up very fast… and very shallow.
Let’s talk about where this actually leads.
This isn’t a phase anymore; this is the default
When Part 1 blew up, a lot of people framed it like a temporary thing.
A bad habit.
A rough adjustment period.
Something we’d “figure out” once the novelty wore off.
That framing was comforting.
It was also wrong.
This isn’t a phase anymore. This is the default.
AI-first coding isn’t optional now; it’s the baseline. Juniors aren’t learning the fundamentals and then using AI. They’re learning with AI from day one. Sometimes only with AI. The IDE opens, autocomplete is already whispering, and that’s just… normal.
And seniors? Let’s not pretend we’re immune.
We use it too. We just justify it better.
“I already know this.”
“I’m just saving time.”
“I’ll clean it up later.”
(We won’t.)
Somewhere along the way, “I understand this” quietly got replaced with “it works, don’t touch it.”
That shift didn’t happen because developers got lazy. It happened because the incentives changed. Fast feedback loops. Faster shipping. Cleaner demos. Green checkmarks in CI. Nobody asks how you got there, only whether it runs.
And to be fair: it’s intoxicating.
You type half a thought and the rest of the code appears like it read your mind. The tests pass. The app boots. Slack lights up. Dopamine acquired. On to the next ticket.
The problem is that this workflow trains you to optimize for momentum, not mastery.
You don’t sit with the problem long enough to feel confused. You don’t struggle long enough to build intuition. You don’t make the kinds of mistakes that burn lessons into your brain. The friction gets smoothed out, and friction, inconvenient as it is, is where learning actually lives.
So when autocomplete disappears for a moment (bad internet, a locked-down machine, a weird legacy codebase), people freeze. Not because they’re bad developers. But because they were never asked to build the mental scaffolding in the first place.
This is why the “just another abstraction” argument feels incomplete.
Yes, we’ve abstracted things before. Compilers. Frameworks. Libraries. Even Stack Overflow. But those tools mostly abstracted labor. You still had to reason. You still had to decide. You still had to understand enough to glue things together.
AI abstracts decisions.
It hands you conclusions without walking you through the reasoning path. You get the what instantly, but the why never forms. And without the why, nothing sticks.
That’s the quiet shift we’re living through.
Not fewer developers.
Not worse developers.
Just developers trained in an environment where understanding is optional until it suddenly isn’t.
And when it stops being optional?
That’s when things get interesting.
“It’s just another abstraction” and why this one feels different
Every time this topic comes up, the same response shows up like clockwork:
“This is nothing new.”
“People said the same thing about compilers.”
“Assembly devs complained about C.”
“Frameworks ruined fundamentals too.”
And yeah, they’re not wrong.
We have been here before.
Every generation of tools abstracts something away. That’s kind of the point. Nobody wants to write assembly to center a div or manually manage memory just to render a button. Abstractions saved us from a lot of pain. They let us build bigger things with smaller teams. They made software accessible to more people.
That’s all good.
But here’s where this one feels different, and why the comparison only goes so far.
Most past abstractions reduced mechanical effort.
Compilers turned human-readable code into machine instructions. Frameworks hid boilerplate. Libraries wrapped complexity. Stack Overflow gave you examples. Even calculators removed arithmetic drudgery.
But none of those removed thinking.
You still had to decide what to build.
You still had to reason about why it worked.
You still had to debug your mistakes.
AI doesn’t just abstract syntax.
It abstracts decisions.
It hands you a solution before your brain has even finished forming the question. You don’t walk the reasoning path; you teleport to the destination. And teleportation is great for shipping, but terrible for learning how the terrain actually works.
That’s the quiet shift.
With older tools, you still owned the problem. The tool helped you execute. With AI, it’s dangerously easy to let the tool own the thinking while you rubber-stamp the output.
And look, sometimes that’s fine.
You’re tired. The task is boring. The code is repetitive. Ship it.
But when that becomes the default? When every hard edge gets smoothed before your brain ever hits it? You stop building intuition. You stop forming mental models. You stop knowing why things behave the way they do.
That’s why this doesn’t feel like compilers or frameworks.
Those tools raised the abstraction ceiling.
AI lowers the cognitive floor.
You can function with less understanding than ever before. And that sounds like progress… until something breaks in a way autocomplete never prepared you for.
Because when abstractions leak (and they always do), you’re left standing there, staring at a perfectly reasonable-looking block of code, thinking:
“I know this works… I just don’t know why.”
That’s not a moral failure.
It’s a training problem.
And the environment we’re in right now trains speed better than it trains understanding.
Which is fine.
Right up until it isn’t.
The skill split is already happening
You can feel it if you’ve worked on a team recently.
Not in résumés.
Not in titles.
In reactions.
Two developers hit the same problem.
One immediately opens an AI panel, starts prompting, pastes something in, and keeps moving. Confident. Fast. Productive. The code looks clean. The tests pass. Everyone’s happy.
The other one pauses. Reads the code. Traces execution. Maybe opens the docs. Maybe mutters something unprintable under their breath. Slower. Less flashy. Slightly annoyed.
At first glance, the first dev looks better. Obviously.
They’re shipping. They’re efficient. They’re not “overthinking it.”
But give it time.
Because these two devs are on very different trajectories.
The first path creates AI-native developers. Incredible at prompting. Great at stitching together solutions. Comfortable living one autocomplete ahead of the problem. They move fast until the problem stops looking like anything the model has seen before.
That’s when the confidence wobbles.
The second path builds something less visible: a mental model. An internal map of how systems behave. Where state mutates. Where things usually go wrong. They’re slower at first because they’re actually thinking, and thinking is expensive.
But that thinking compounds.
The gap doesn’t show up on day one. Or week one. Or even month one. In fact, early on, it looks like the AI-native dev is winning. They probably are.
Then something breaks in a weird way.
Not a syntax error. Not a missing import. A timing issue. A state bug. A production-only edge case that only shows up when traffic spikes or a cache expires or a user does something no one prompted for.
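To make that concrete, here’s a minimal sketch of the kind of code that sails through review and every sequential test. The names here (`getUser`, `fetchUser`, the cache) are hypothetical, not from any real codebase:

```ts
// A hedged sketch of a bug autocomplete never prepares you for.
type User = { id: string; name: string };

declare function fetchUser(id: string): Promise<User>;

const cache = new Map<string, User>();

async function getUser(id: string): Promise<User> {
  const hit = cache.get(id);
  if (hit) return hit; // fine in every single-request test

  // Check-then-act race: under concurrent traffic, many callers miss
  // the cache at once and stampede the backend before any of them
  // writes the entry back. Nothing here is a syntax error.
  const user = await fetchUser(id);
  cache.set(id, user);
  return user;
}
```

Every test that runs requests one at a time goes green. The bug only exists when traffic arrives in parallel.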
That’s when the split becomes obvious.
One dev keeps prompting. Slight variations. Different wording. More context. Less context. “Try again but simpler.” The answers look plausible. Nothing quite fixes it.
The other dev opens the logs, steps through the code, and starts forming hypotheses. Not because they’re smarter, but because they’ve been here before. They’ve seen how things fail when abstractions leak.
This isn’t about talent.
It’s about exposure.
Both devs use AI. Let’s be clear about that. This isn’t some purist fantasy where one person codes by candlelight and refuses autocomplete out of principle.
The difference is who’s in control.
One dev uses AI to accelerate thinking.
The other uses AI to replace it.
And replacement feels amazing right up until you need the thing you outsourced.
The uncomfortable truth is that we’re training two classes of developers in the same environment. One optimized for short-term output. The other optimized for long-term resilience.
The scary part? They look identical for a while.
Same commits. Same features. Same velocity charts.
Until the day they don’t.
And by then, the gap is very hard to close.
Speed is winning because failure is delayed
If this feels obvious to you, congratulations: you’ve probably been burned already.
Because here’s the uncomfortable reality:
speed is winning right now, not because it’s better, but because the consequences show up late.
Most teams don’t feel the cost of shallow understanding immediately. They feel velocity. They feel momentum. They feel the relief of tickets closing and demos going smoothly.
And from the outside? It looks great.
Features ship faster. Roadmaps move. Management gets updates that say “on track.” No one asks whether the code will still make sense in six months. They ask whether it works today.
That’s not evil. That’s incentives.
Understanding doesn’t show up on dashboards.
Judgment doesn’t get a Jira ticket.
Mental models don’t have KPIs.
Speed does.
So the system naturally rewards the dev who ships fast even if they’re shipping something fragile. Especially if the fragility won’t surface until later, when the sprint is over, the milestone is hit, and the praise has already been handed out.
And when things finally break?
It’s rarely clean or obvious.
It’s not “this was written badly.”
It’s “this edge case slipped through.”
It’s “this assumption no longer holds.”
It’s “no one really knows why this behaves like this.”
By then, the original author might be on another team. Or another company. Or another startup entirely, proudly shipping fast somewhere else.
The cleanup lands on whoever’s left.
This is why speed feels free at first.
The bill comes due later, in the form of brittle systems, unreadable code, and a growing fear of touching anything that “works.” Every change becomes a risk. Every refactor feels like defusing a bomb.
And here’s the part nobody likes admitting:
AI makes this worse in subtle ways.
Not because it writes bad code all the time; it often doesn’t. But because it makes mediocre decisions look polished. The code is formatted. The names are reasonable. The structure looks familiar. It passes tests that only cover the happy path.
It looks done.
So it gets merged.
And that’s how shallow solutions sneak into long-lived systems: not through incompetence, but through momentum.
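To see how little “done” can mean, here’s a minimal sketch; the helper and its single test are hypothetical:

```ts
// Polished formatting, reasonable name, one green test. Looks done.
function parsePrice(input: string): number {
  return Number(input.replace("$", ""));
}

// The happy path: passes, gets merged.
console.assert(parsePrice("$19.99") === 19.99);

// The paths nobody prompted for:
parsePrice("1,299.00"); // NaN: replace() never touched the comma
parsePrice("€19.99");   // NaN: only "$" was ever considered
parsePrice("");         // 0: an empty string quietly becomes a price
```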
This is also why the argument “just slow down and think more” doesn’t work at scale.
Teams don’t slow down. Companies don’t slow down. Deadlines don’t care about pedagogy. You’re rewarded for shipping, not for understanding, until understanding becomes the only thing that can save you.
That’s when everyone suddenly cares.
When production is on fire.
When customers are mad.
When the logs don’t make sense.
When the AI confidently suggests things that don’t help.
Speed wins the sprint.
Understanding wins the incident.
And the longer we pretend those are the same thing, the more painful the correction gets.
Debugging is where AI stops and you find out what you actually know
This is where everything falls apart.
You can lean on AI all day to write code. It’s great at that. Scary good, even. But the moment something breaks in a way you didn’t explicitly prompt for, the illusion cracks.
Because debugging is not autocomplete-friendly.
Debugging doesn’t care how confident the code looks.
It doesn’t care that the syntax is clean.
It doesn’t care that the solution was “popular.”
It only cares whether you understand what’s happening.
And this is where the gap shows.
Ask a junior dev to debug a piece of code Copilot helped them write (not refactor it, not rerun it, but actually explain why it’s broken) and you’ll see it. That pause. That look. The sudden loss of confidence.
Not because they’re dumb.
Because they never built the mental model.
Debugging is brutal like that. It exposes everything you skipped.
When something breaks, AI can guess. It can suggest. It can confidently hallucinate fixes that sound right. But it doesn’t actually know your system. It doesn’t remember how state mutated three calls ago. It doesn’t feel the shape of your codebase. It doesn’t have intuition.
You do.
Or you don’t.
This is why debugging has always been the real separator between developers who understand systems and developers who just assemble them.
When you debug, you’re not searching for syntax. You’re running a simulation in your head. You’re asking:
- What changed?
- What assumptions are being violated?
- Where does this data actually come from?
- Why does this only happen sometimes?
Those questions don’t autocomplete themselves.
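But they can be made executable. Here’s a hedged sketch of what that looks like, with a hypothetical `applyDiscount` standing in for real code: each debugging question becomes an invariant the system checks for you.

```ts
// A minimal sketch: debugging questions turned into runnable checks.
function invariant(cond: boolean, msg: string): void {
  if (!cond) throw new Error(`Invariant violated: ${msg}`);
}

function applyDiscount(total: number, pct: number): number {
  // "Where does this data actually come from?"
  invariant(Number.isFinite(total) && total >= 0, "total is a non-negative number");
  // "What assumptions are being violated?"
  invariant(pct >= 0 && pct <= 100, "pct is a percentage, not a fraction");
  const discounted = total * (1 - pct / 100);
  // "Why does this only happen sometimes?" Make the answer checkable.
  invariant(discounted <= total, "a discount never raises the price");
  return discounted;
}
```

When one of these throws, you’re no longer guessing; you know exactly which assumption broke.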
And AI-written code makes this harder, not easier. Not because it’s bad, but because it’s unfamiliar. You didn’t arrive at the solution; it was handed to you fully formed. So when it fails, you’re debugging someone else’s thinking without ever having seen their reasoning.
That’s exhausting.
It’s also why so many teams end up afraid of their own code. Everything “works,” but no one wants to touch it. Every fix feels risky. Every change feels like it might summon a production incident.
Because the understanding never formed.
Here’s the part nobody likes admitting:
You can’t debug vibes.
You can’t reason your way out of a problem you never reasoned your way into.
AI is a great assistant during debugging: logs, hypotheses, rubber-duck explanations, sanity checks. But it cannot replace the core skill: knowing how systems behave when they’re under stress.
And that skill only comes from doing the hard part yourself. From writing code that breaks. From fixing bugs that made you feel stupid. From sitting with a problem long enough that you start to recognize it next time.
“You can’t debug what you never understood.”
And no amount of prompting changes that.
This is why debugging is becoming a scarce skill again.
Not because fewer people are smart, but because fewer people are being trained to think this way. And scarcity changes value.
When everything is fast, the person who can slow down chaos is suddenly indispensable.

We removed the struggle and took the learning with it
Here’s the part no one really wants to say out loud:
Learning used to be uncomfortable.
And that discomfort was doing a lot of work for us.
You didn’t just learn by consuming solutions. You learned by getting stuck. By writing something that almost worked. By staring at a bug for an hour and then realizing the mistake was painfully obvious in hindsight.
That friction wasn’t a tax.
It was the mechanism.
AI smooths all of that out.
You don’t sit with confusion anymore. The moment things feel hard, you prompt. The moment you hesitate, autocomplete fills in the gap. You’re never forced to wrestle with the problem long enough for it to reshape how you think.
It feels efficient.
It feels productive.
It feels amazing.
And it quietly starves the part of your brain that actually learns.
This is why so many people describe the same experience: the blank screen panic. The moment when the AI is unavailable (bad connection, locked-down machine, weird environment) and suddenly your brain goes quiet.
Not because you forgot everything.
Because you never had to retrieve it.
Memory isn’t built by recognition. It’s built by recall. By forcing your brain to reach for something without a safety net. AI turns everything into recognition. You see the solution and think, “Yeah, that makes sense.”
Of course it does. You didn’t have to earn it.
This is also why confidence can get weirdly inflated. When everything works, it feels like progress. But the first time you’re asked to explain a decision, trace a bug, or build something without scaffolding, that confidence collapses fast.
Not dramatically. Quietly.
You start avoiding certain tasks. You lean harder on tools. You optimize for not feeling stupid. And that’s how the loop closes.
Here’s the subtle trap:
Struggle feels like failure in the moment.
But it’s actually feedback.
Remove the feedback, and you remove the learning.
That’s why the gym analogy keeps coming back. Watching someone lift weights doesn’t make you stronger, no matter how good the form looks. You need resistance. You need reps. You need the awkward phase where everything feels heavier than it should.
Coding is the same.
If every hard part gets smoothed away before your brain has a chance to engage, you don’t build intuition. You don’t develop taste. You don’t recognize patterns when things go wrong.
You just get really good at shipping things you don’t fully own.
This isn’t about suffering for its own sake. It’s about productive struggle: the kind that forces your brain to form connections instead of borrowing them.
AI didn’t make us worse developers.
It made it easier to skip the part that turns effort into skill.
And once you’ve skipped it long enough, getting it back feels… harder than it should.
That’s the cost.
Knowledge without context is the new technical debt
We usually think of technical debt as something that lives in code.
Messy functions. Weird hacks. TODO comments that age like milk. The kind of stuff you promise you’ll clean up “later” and then quietly learn to fear.
But there’s another kind of debt forming now, and it doesn’t live in the repo.
It lives in people.
AI gives you answers detached from their origin. You get working code without the story of how it came to be. No false starts. No rejected approaches. No explanation of why this solution was chosen over three others.
Just the result.
That’s fine when everything keeps working. But when something breaks, or needs to change, you realize how much context was missing.
You don’t know:
- What assumptions this code relies on
- What edge cases were ignored
- What tradeoffs were made
- What constraints shaped the solution
You only know that it worked once.
That’s not understanding.
That’s borrowed confidence.
And borrowed confidence has a nasty habit of disappearing at the worst possible time.
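For contrast, here’s a minimal sketch of what it looks like when that context gets written down next to the code it shaped. Every name and number here is hypothetical:

```ts
// WHY: writes are batched instead of streamed one at a time.
// ASSUMES: the downstream API throttles at roughly 10 requests/second.
// REJECTED: per-item writes (rate-limited in testing); a queue service
//           (overkill at our volume).
// TRADEOFF: a few seconds of extra latency in exchange for zero
//           throttling incidents.
const BATCH_SIZE = 50;
```

Nothing fancy. Just the assumptions, the rejected alternatives, and the tradeoff, living where the next person will actually find them.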
This kind of debt is subtle because it doesn’t slow you down immediately. In fact, it often speeds you up. You move faster because you don’t carry the weight of the decisions until you’re forced to revisit them.
That’s when things get painful.
The system becomes fragile. Touching one part breaks another. No one’s sure what’s safe to change. Every fix feels like guesswork wrapped in hope.
And when the people who wrote the code leave?
The debt doesn’t just surface; it explodes.
Because the knowledge never lived in the codebase. It lived in autocomplete suggestions and half-remembered prompts. There’s no trail to follow. No reasoning to reconstruct. Just a pile of “this seemed right at the time.”
This is how teams end up trapped in systems that technically work but feel untouchable. Not because they’re ancient, but because no one truly understands them.
AI didn’t create this problem, but it accelerates it.
By collapsing the distance between problem and solution, it collapses the context in between. And context is the thing that lets systems evolve instead of calcify.
The scary part isn’t that AI writes code for us.
It’s that it lets us own less of it.
And ownership is the difference between software you maintain and software you survive.
Answers are cheap; judgment is the part we’re losing
One of the most common defenses of AI in coding goes like this:
“It can explain the code.”
“It’s actually a great teacher.”
“It walks me through the logic step by step.”
And yeah, sometimes it does.
But explaining what something does is not the same as exercising judgment about whether it should exist at all.
AI is great at producing answers. Confident ones. Well-structured ones. Ones that sound reasonable enough to pass a code review if everyone’s skimming.
What it doesn’t do is push back.
It won’t tell you that the approach is technically correct but architecturally cursed. It won’t warn you that this is going to be miserable to maintain. It won’t say, “This works, but I wouldn’t do it like this.”
That’s judgment. And judgment doesn’t come from pattern matching; it comes from experience, mistakes, and consequences.
Real learning doesn’t happen when someone hands you an answer. It happens when someone challenges your assumptions. When a teammate asks, “Why are we doing it this way?” and forces you to defend the choice.
AI doesn’t argue with you.
It agrees with you.
If you prompt it toward a bad idea, it’ll happily help you execute that bad idea with incredible efficiency. It doesn’t feel the pain of future you trying to maintain this thing. It doesn’t remember the incident that taught you never to do this again.
That’s why communities still matter so much, even more now.
Docs explain what.
Tutorials show how.
People explain why.
And the why is usually messy. Full of tradeoffs. Context-dependent. Sometimes contradictory. That friction is where judgment forms.
AI flattens that messiness. It turns contested decisions into clean answers. And clean answers feel good right up until reality intrudes.
This is also why code reviews feel different lately. Fewer questions. Fewer “are we sure?” moments. More polite approvals. The code looks fine. The explanation sounds fine. No one wants to be the person who slows things down.
So bad decisions don’t get challenged. They get merged.
Again, not because anyone is incompetent, but because the system rewards agreement and speed over debate and depth.
Answers are everywhere now.
Judgment isn’t.
And the more we outsource judgment, the harder it becomes to recognize when something is subtly wrong, even when everything looks right.
“But this is just another abstraction”: let’s be honest about that
At this point, someone is yelling at their screen:
“This is the same argument people made about compilers.”
“Or garbage collection.”
“Or frameworks.”
“Or IDEs.”
And they’re not wrong.
Every generation of devs thinks their abstraction is fine, and the next one is ruining the craft. Assembly folks hated C. C folks hated Java. Java folks hated JavaScript. Everyone hated jQuery until they missed it.
So yes, AI is another abstraction layer.
But here’s the part that matters, and it’s subtle:
Most abstractions still forced you to think.
A compiler doesn’t decide what to write; it decides how to translate what you already reasoned about.
A framework gives you structure, but you still have to choose the architecture.
Even Stack Overflow made you describe your problem in human language before copying anything.
AI is different because it short-circuits the thinking step itself.
You don’t have to reason first and then encode that reasoning. You can just gesture vaguely at a goal and let the model fill in the blanks. And the better the models get, the less obvious that gap becomes.
That’s the danger zone.
Not because AI is “too powerful”, but because it’s just powerful enough to let you skip forming a mental model while still producing plausible output.
Abstractions usually sit below your reasoning.
AI sits inside it.
That’s why this feels different. Not unprecedented, just riskier in a new way.
And here’s the important clarification:
If you already have strong fundamentals, AI is incredible.
If you don’t, AI will happily help you build on sand.
That’s not moral judgment. That’s physics.
A senior dev using AI accelerates.
A junior dev using AI often bypasses the reps that would’ve made them senior in the first place.
That doesn’t mean “don’t use AI.”
It means pretending this is exactly the same as past abstractions is intellectually lazy.
This abstraction doesn’t just hide complexity.
It hides cause and effect.
And if you never learn cause and effect, you’re not abstracting; you’re outsourcing thinking.
The real risk isn’t job loss; it’s skill atrophy
Everyone’s obsessed with the wrong apocalypse.
“Will AI replace developers?”
“Will there be jobs left?”
“Will juniors be obsolete?”
Those are loud questions. They get clicks. They get panels.
But the quieter, scarier risk is this:
A lot of developers will stay employed while slowly becoming worse at the actual craft.
Not useless. Not unemployable. Just… brittle.
They’ll ship features.
They’ll close tickets.
They’ll look productive.
Until something weird happens.
A production issue with no obvious repro.
A performance regression that doesn’t show up in benchmarks.
A system that should work but doesn’t, and no one can explain why.
That’s when skill atrophy shows.
Not in day-to-day CRUD work.
In edge cases.
In outages.
In moments where there is no autocomplete for judgment.
This is how teams get fragile without realizing it. Everything works… until it doesn’t. And when it breaks, nobody knows how to reason from first principles anymore. Everyone asks the model. The model guesses. The guesses stack.
Eventually, someone has to actually think.
And the fewer people left who can do that comfortably, the more expensive those moments become.
This isn’t about elitism.
It’s about resilience.
The most valuable dev in the room isn’t the fastest typer or the best prompter. It’s the one who can say:
“Okay. Let’s slow down. What do we know for sure?”
That person isn’t panicking.
They’re not guessing.
They’re rebuilding the mental model from the ground up.
That ability doesn’t come from AI usage stats.
It comes from years of friction, mistakes, rewrites, and painful debugging sessions you didn’t skip.
AI didn’t kill the coding brain.
But it makes it very easy to let it quietly atrophy.
And the scary part?
You won’t notice until you actually need it.
Conclusion: you don’t lose skills; you leak them
No one wakes up and forgets how to code.
It happens quietly.
You stop writing things from scratch because autocomplete is faster.
You stop debugging deeply because the model “probably knows.”
You stop questioning solutions because they look right.
You stop feeling friction, and you mistake that for progress.
Then one day, the safety net disappears.
Copilot is down.
The prompt doesn’t help.
The bug doesn’t reproduce.
The architecture doesn’t make sense anymore, including the parts you shipped.
That’s the moment this article is about.
Not AI doom.
Not job loss panic.
Not some romantic “back in my day” nostalgia.
Just the realization that thinking muscles atrophy when they’re not used, even if you’re still productive.
AI didn’t steal your skills.
It offered a shortcut.
You took it.
Over and over.
Because of course you did.
The fix isn’t to reject AI. That ship sailed.
The fix is choosing where you still struggle.
“No autocomplete.”
“No prompting.”
“No scaffolding.”
Just you, the problem, and the mess.
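What does that look like in practice? Something almost insultingly small, written from memory, no autocomplete. A hypothetical example of one rep:

```ts
// One deliberate rep: a tiny utility, by hand. The function itself
// doesn't matter; the recall does.
function countBy<T>(items: T[], key: (item: T) => string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const item of items) { // yes: an actual for-loop
    const k = key(item);
    counts.set(k, (counts.get(k) ?? 0) + 1);
  }
  return counts;
}

countBy(["a", "b", "a"], (s) => s); // Map { "a" => 2, "b" => 1 }
```

Then explain it out loud. If you can’t, that’s the rep telling you something.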
Because the struggle isn’t a tax.
It’s the training.
The devs who survive what’s coming aren’t the ones who never use AI;
they’re the ones who can still function when it’s gone.
They can reason.
They can debug.
They can explain.
They can rebuild understanding instead of pasting answers.
In a world where everyone is getting faster,
the dev who goes deeper still wins.
That’s not a trend.
That’s gravity.
Helpful resources:
- Harvard’s CS50, Introduction to Computer Science: a free, world-renowned intro to core CS concepts (algorithms, data, abstraction, problem solving), available on edX and Harvard Online.
- CS50 on YouTube: the full lecture series, watchable at your own pace.
- CS50P, Introduction to Programming with Python: a focused beginner’s coding path from the CS50 team, on edX.