Everyone's writing about the death of junior developers. The anxiety is real. The job market data backs it up. But we're misdiagnosing the problem.
The junior developer role isn't extinct. It's stuck Below the API and we haven't figured out how to pull it back up.
The Real Divide
Below the API is everything AI handles cheaper, faster, and often better than humans: boilerplate, basic CRUD, unit tests for simple functions, JSON schema conversion. Above the API is everything requiring judgment, verification, and context AI can't access: system design, debugging race conditions in production, knowing when to reject a confident-but-wrong suggestion.
Junior developers used to climb from Below to Above by doing the boring work. Write unit tests, learn how systems break. Convert schemas, understand data flow. Fix bugs, build debugging intuition. Now AI does that work. We deleted the ladder.
What NorthernDev Got Right
NorthernDev nailed the career pipeline problem. Five years ago, tedious work like writing unit tests for a legacy module went to a junior developer — boring for seniors, gold for juniors. Today it goes to Copilot.
That's not a hiring freeze. That's the bottom rung of the ladder disappearing.
The result is a barbell: super-seniors who are 10x faster with AI on one end, people who can prompt but can't debug production on the other. The middle is gone. The path from one group to the other is blocked.
What's missing from that diagnosis: the role isn't dead, it's transformed.
The Forensic Developer
NorthernDev suggests teaching juniors to audit AI output — forensic coding. That's exactly what Above the API means.
The old junior role: write code, senior reviews, learn from mistakes. The new junior role: AI writes code, junior audits, learn from AI's mistakes. The skill isn't syntax anymore. It's verification.
The problem is you can't verify what you don't understand. To audit AI-generated code you need to know what it's supposed to do, how it actually works, what will break in production, and why the AI's clean solution is wrong. Those are senior-level skills. We're asking juniors to do senior work without the ramp to get there.
Why Traditional Training Doesn't Work Anymore
Anthropic published experimental research that validates this directly. In a randomized controlled trial with junior engineers, the AI-assistance group finished tasks about two minutes faster but scored 17% lower on mastery quizzes. Two letter grades. The researchers called it a "significant decrease in mastery."
The interesting part: some in the AI group scored highly. The difference wasn't the tool. It was how they used it. The high scorers asked conceptual and clarifying questions to understand the code they were working with, rather than delegating to AI. Same tool. Different approach. One stayed Above the API. One fell Below.
That 17% gap is what happens when you optimize for speed without building verification capability.
A Nature editorial published in June 2025 makes the underlying mechanism explicit: writing is not just reporting thoughts, it's how thoughts get formed. The researchers argue that outsourcing writing to LLMs means the cognitive work that generates insight never happens — the paper exists but the thinking didn't. The same principle applies to code. The junior who delegates to AI gets the function but skips the reasoning that would have revealed why the function is wrong.
The mechanism is friction. When I started, bad Stack Overflow answers forced skepticism — you got burned, you learned to verify. AI removes that friction. It's patient, confident, never annoyed when you ask the same question twice. Amir put it well in the comments on my last piece: "AI answers confidently by default. Without friction, it's easy to skip the doubt step. Maybe the new skill we need to teach isn't how to find answers, but how to interrogate them."
We optimized for kindness and removed the teacher.
What Actually Needs to Change
The junior role needs three shifts in how we define entry-level skills, how we build verification capability publicly, and how we measure performance.
Entry-level used to mean knowing syntax and writing functions. Now it means reading and comprehending code, identifying architectural problems in AI output, and understanding that verification is more valuable than generation. The portfolio that gets you hired in 2026 isn't a todo app — AI generates one in 30 seconds. It's documented judgment: "Here's AI code I rejected and why." "Here's an AI suggestion that seemed right but failed in production." "Here's how I verified this architectural decision."
Stack Overflow taught through public mistakes. That's why we started The Foundation — junior developers need public artifacts that prove judgment, not just syntax. Private AI chats build no portfolio. No proof of thinking. Invisible conversations that leave no trace.
The interview question needs to change too. Not "build a todo app in React" but "here's 500 lines of AI-generated code for a payment gateway. Tests pass. AI says it's successful. Logs show it's dropping 3% of transactions. You have 30 minutes. What's wrong?" That's the new entry test. Can you find the subtle bug AI introduced optimizing for elegance over financial correctness? Can you explain why this clean code fails at scale?
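As an illustrative sketch (not the article's actual exercise), one classic "elegance over financial correctness" bug is float arithmetic on money: a clean one-liner that silently truncates cents next to a verified version that keeps amounts exact. Function names here are hypothetical.

```python
from decimal import Decimal

def to_cents_naive(amount: float) -> int:
    # Plausible AI-style one-liner: looks clean and passes tests with "nice"
    # amounts, but int() truncates float error: 0.29 * 100 is
    # 28.999999999999996 as a float, so a cent silently disappears.
    return int(amount * 100)

def to_cents_verified(amount: str) -> int:
    # Verified version: parse the amount string straight into Decimal and
    # never round-trip money through binary floating point.
    return int(Decimal(amount) * 100)
```

A reviewer who knows why 0.29 has no exact binary representation finds this in minutes; a test suite built only on amounts like 1.50 never will.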
Companies waiting for AI-ready juniors to appear are part of the problem. Nobody is training them. That's your job.
The Economic Reality
Companies see AI as cheaper than juniors. That math only works if you ignore production bugs from unverified code, architectural debt from AI's kitchen-sink solutions, security vulnerabilities AI confidently introduces, and scale failures AI didn't test for.
Cheap verification is expensive at scale. A junior who catches those problems early is worth 10x their salary but only if we teach them how to verify.
NorthernDev asked the right question: if we stop hiring juniors because AI can do it, where will the seniors come from in 2030?
Nobody has a good answer yet. But the companies that figure it out will have a pipeline. The ones waiting for AI to get better will be stuck with seniors who retire and no one to replace them.
The junior developer isn't extinct. The old path — syntax to simple tasks to complex tasks to senior — is dead. The new path runs through verification, public judgment, and the ability to interrogate confident-but-wrong answers before they reach production.
That's not a lower bar. It's a different one.
The ladder didn't disappear. We just forgot we have to build it.
Top comments (35)
Yeah I agree with the basic premise, and you explained it well, but here's the elephant in the room (which, surprisingly, I rarely see mentioned):
It's high time for education (schools, colleges, bootcamps) to adapt and to step up their game!
People are spending a lot of time and money to get a degree or a diploma, only to find themselves ill-prepared for these new realities ...
The suggestion is that the onus is on people themselves to bridge this gap between the theory they've been taught (at great expense of time AND money) and reality - they're supposed to study all day to learn a lot of theory (the value of which has now become very questionable) to pass their exams, and then to spend an equal amount of effort in their 'free time' (at night?) to try and learn what really matters ...
That makes no sense to me.
Let's not forget that for many schools/colleges/bootcamps this is good business (as in $$$), but they're now leaving their students high and dry, scrambling to enter the job market ...
That's just odd, because these skills (architecture, debugging, etc etc) can be taught - if rocket science and theoretical physics can be taught, then this stuff can too ...
I've said it before (but it surprises me how few people recognize this):
It's about time for "formal" education to step up their game, and adapt to these new realities!
You're pointing at the structural problem the piece sidestepped. The individual burden argument only holds if the institution did its job first, and for a lot of developers going through traditional education right now, it didn't. They paid for preparation and got theory.
The credential system is what makes this sticky. Schools can teach the wrong things for years and still produce graduates that employers hire, because the degree signals something separate from the curriculum content. That's what keeps reform slow even when the misalignment is obvious.
"These skills can be taught" is exactly right. Architectural thinking, debugging as hypothesis testing, verification instincts: none of this is mysterious. It's just not what the curriculum is optimized for, because the curriculum is optimized for the exam, not the job.
The piece should have gone here. You're right that it didn't.
For many years the curricula were fine, now no longer - I can understand the inertia and all that, but they should really start working on adapting their curricula, we're 2 years into the AI coding thing, it's about time ...
2 years in and most curricula haven't moved. The inertia argument explains it but doesn't excuse it. The students paying for the degree don't get those 2 years back.
I understand that it takes time, but the question is whether or not they're making plans to change their curricula - whether they see the writing on the wall and are willing (and planning) to adapt ...
Better to take some time to make good plans and then execute well, than to hastily slap something together ...
But I don't know how "agile" these institutions are, maybe I'm expecting too much ;-)
"Agile" and "university curriculum committee" don't often appear in the same sentence, for good reason. The institutions that will move fastest are probably the bootcamps — shorter programs, less accreditation burden, more direct pressure from hiring outcomes. The 4-year degree has more insulation from market feedback, which is exactly why it adapts slowest.
The writing is on the wall. Whether anyone in the right room is reading it is a different question.
I agree with your analysis - we'll probably see the bootcamps pivot sooner than the academia ...
The hiring process isn't working for juniors, I agree.
I have open ears to hear what I can be doing better. It's really deflating. I figured if no one is hiring junior devs then maybe I'll figure out another way to harness my new knowledge for income. Sheesh. Tough crowd. 😅😂
Been coding for 3 years, finished an apprenticeship, and pursuing a second bachelor's. There's a lot more, but I'm not here to brag. Just want to support your point! Haha.
3 years of coding plus an apprenticeship plus a second bachelor's is not nothing — that's someone who kept going when it was hard. The market is genuinely difficult right now and "figure out another way to harness my knowledge for income" is exactly the right instinct. The traditional hiring pipeline is broken for juniors but the demand for people who can build things is real.
It's just showing up in different places — freelance, small businesses that need someone who understands both the technology and the domain, direct outreach to founders rather than job boards.
What kind of work have you been building? That's usually where the path forward is visible.
Thanks Daniel, you have given me some things to think about. ✨️
The friction argument is the strongest part of this piece. Stack Overflow taught through public correction — sometimes harsh, but you learned to think before asking because asking badly was expensive. AI is endlessly patient, which sounds like progress but might be the opposite.
What strikes me is that the real gap isn't syntax vs architecture. It's the ability to know what you intended well enough to notice when the output diverges. You can describe a payment flow perfectly and still miss that the AI chose eventual consistency where you needed strong. Not because you don't know the words — because you haven't lived through the failure that teaches you why it matters.
The 'forensic developer' framing undersells it slightly. Forensics is post-mortem. What juniors actually need is something closer to architectural intuition — the ability to hold a mental model precise enough to feel when something's off before it breaks. That used to come from years of boring work. The question now is whether there's a faster path that doesn't skip the understanding.
The forensic framing undersells it.
Post-mortem is the wrong word. What you're describing is pre-mortem intuition: the ability to read code and feel the failure before it happens.
The gap isn't knowing what eventual consistency means. It's reaching instinctively for strong consistency when money is moving because you've been burned when you didn't.
That's not teachable with vocabulary.
Whether there's a faster path: I think yes, but it requires deliberate failure, not delegation. AI-generated code you're required to break before you're allowed to ship it. Actively find the edge case, write the test that exposes it, explain why it fails. That's closer to what the boring work did.
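A minimal sketch of that exercise, with hypothetical code and names: take plausible AI output that passes its happy-path test, write the hostile test that breaks it, then fix it with the edge case in mind.

```python
def allocate(total_cents: int, ratios: list[float]) -> list[int]:
    # Plausible "AI-generated" helper: split a payment across ratios.
    # Passes the obvious test (allocate(100, [0.5, 0.5]) == [50, 50])...
    return [round(total_cents * r) for r in ratios]

def allocate_exact(total_cents: int, ratios: list[float]) -> list[int]:
    # ...but with thirds, independent rounding makes the parts sum to 99
    # and a cent vanishes. Fix: floor every part, then hand the leftover
    # cents to the largest fractional remainders (largest-remainder method).
    raw = [total_cents * r for r in ratios]
    parts = [int(x) for x in raw]
    leftover = total_cents - sum(parts)
    by_remainder = sorted(range(len(raw)),
                          key=lambda i: raw[i] - parts[i], reverse=True)
    for i in by_remainder[:leftover]:
        parts[i] += 1
    return parts
```

The skill being trained isn't writing `allocate_exact`; it's knowing to reach for `[1/3, 1/3, 1/3]` as the test input in the first place.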
The Below/Above the API framing is sharp, but I'd push back on one thing — verification isn't purely a senior skill. We've seen juniors who audit AI output catch bugs seniors miss because they read slower and question more. The ladder isn't gone, it just starts at a different rung.
The piece implied verification is a senior skill but what you're describing is different. Juniors who read slower and question more catching things seniors miss because seniors pattern-match too fast. That's not skill, that's disposition.
Disposition you can have from day one. Skill needs domain knowledge to apply it. The junior who caught the bug caught it because they didn't assume they understood, not because they knew what correct looked like.
So maybe the ladder starts at a different rung and also requires something different there: not "learn to write code" but "learn to distrust output, including your own." Most junior onboarding skips that entirely.
You made a good point! I doubt they'll have stopped hiring junior developers by 2030, for two main reasons. 1) You still need a human behind the machine, for debugging, catching hallucinations, and so on. 2) Who will replace the senior and mid-level developers? I doubt it will be AI in 2030. AI will reshape the role of junior developers. Companies won't hire as many juniors as during the pandemic, but the role will still exist.
thanks Daniel for this excellent article 💯
"NorthernDev suggests teaching juniors to audit AI output — forensic coding" - this is a great suggestion and thanks for including it. 🚀
Hello, I am from Berlin, can you help me?
Interesting perspective. Maybe the real challenge is that below the API work was how juniors used to learn. If that's now automated, how do we build their intuition and judgment?
This is important for anyone in the field of education. We're taught that the goal is to create a system, but your point about the "Forensic Developer" changes the game completely. If CRUD and unit tests, the very foundation of a program, are now "Below the API" and handled by AI, how do we demonstrate "Above the API" judgment in our own work when we haven't spent years of tedious work to build up that instinct?
The instinct doesn't come from years of tedious work. It comes from years of being wrong about something specific and tracing it back to the assumption that failed.
AI removes the tedious work but it also removes the failure loop — unless you deliberately build it back in. The junior who catches the edge case AI missed isn't the one who coded more. It's the one who asked "what would have to be true for this to be wrong?" every time AI gave them an answer.
That's the demonstration. Not a portfolio of things you built. A documented record of AI output you questioned, tested, and in some cases rejected — with the reasoning attached. That's Above the API judgment made visible and it doesn't require years. It requires a different habit from the first day.
The education question isn't how to teach the old path faster. It's what the new path looks like when the starting point is AI output rather than syntax.
That’s a strong point. Changing the starting point from “Syntax” to “AI Output” changes how people learn. It seems that the new entry skill is Forensic Debugging, the ability to look at code that works and question why it might be fragile. I shall make notes on my “Rejected AI Outputs” as part of my project reasoning. Thank you for the insight, Daniel.
"Rejected AI Outputs" as a portfolio artifact is exactly right. The reasoning attached to the rejection is the proof of judgment — not the rejection itself.
Come back when you have a few. Curious what patterns show up.
The “below the API / above the API” framing explains the current situation better than the usual “AI killed juniors” narrative.
What disappeared isn’t the need for junior developers, it’s the training surface they used to grow on. The boring work wasn’t just labor — it was the environment where you learned how systems actually fail.
What worries me most is the verification gap you describe.
When AI generates the code, the entry-level skill is no longer writing it, but understanding it well enough to question it. That sounds reasonable, but in practice it means we expect people to audit decisions they never had the chance to learn step-by-step. The ladder used to be: write → break → fix → understand → design. Now it’s closer to: read → judge → explain — without the years of breaking things in between.
The portfolio point is also important.
Private AI conversations don’t produce visible proof of thinking. Stack Overflow, blogs, even messy GitHub issues used to show how someone reasons. Now a lot of learning happens in sessions that leave no public trace, which makes it harder for juniors to demonstrate judgment and harder for seniors to trust that judgment.
I don’t think the role is gone either.
But the industry hasn’t rebuilt the feedback loop yet, so juniors are stuck in a place where the work that teaches them is automated, and the work that remains requires experience they don’t have.
Until we design a new ladder on purpose, the pipeline will keep breaking.
This framing connects to something I've been noticing outside of coding too. The "abstraction ceiling" isn't just a developer problem — it's happening everywhere AI generates output that humans are supposed to review.
I'm building an AI email tool, and I see the same dynamic: people trust the AI-generated draft because it looks polished, even when the tone is wrong or the context is off. The surface quality creates an illusion of correctness, just like the junior dev who can scaffold a feature but can't debug the edge cases.
The fix I landed on was making the AI's uncertainty visible — confidence scoring on every draft (High/Medium/Low) so the user knows when to trust and when to read carefully. It's basically giving people a ladder to see over the abstraction ceiling rather than pretending it doesn't exist.
The broader point about needing to understand what's underneath to use AI tools effectively applies to every domain, not just development.
Confidence scoring is still an abstraction. High/Medium/Low tells you when to doubt; it doesn't build the judgment to know what to doubt. If you can't already identify a wrong tone, "Medium" doesn't help. You still trust the polish.
The question is whether the ladder teaches you anything on the way up, or just gets you over the wall faster.
What do users actually do differently when they see Medium versus High?