Introduction: The Empty Desk Next to You
The desk next to yours is empty. It used to belong to a junior developer. It's not coming back.
Look at your company's open engineering reqs. Chances are, you're hiring for Senior, Staff, or Principal engineers. The junior reqs? Frozen. Or quietly removed.
This isn't anecdotal. Stanford's Digital Economy Lab found that employment for the youngest software developers dropped 13% below its late-2022 peak by mid-2025, while older developers in the same fields remained stable. A Harvard study analyzing 62 million workers across 285,000 firms found that following GenAI adoption, junior employment declined sharply in adopting firms relative to non-adopters, while senior employment remained largely unchanged, driven by slower hiring rather than increased separations or promotions. SignalFire's 2025 State of Tech Talent Report found that new graduate hiring at major tech firms has fallen over 50% since 2019.
The economic logic is straightforward. Why pay a junior engineer $80K a year to write boilerplate when Claude Code does it for $20 per month? The math is brutal, the incentive is real, and companies are acting on it.
But this creates a problem that doesn't show up on any quarterly earnings call: we are eating our seed corn. If today's juniors don't get the reps, there will be nobody to become the seniors who clean up AI's mess in 2030. AWS CEO Matt Garman put it bluntly when he called the trend of cutting junior hiring "one of the dumbest things I've ever heard", asking: "How's that going to work when ten years in the future you have no one that has learned anything?"
That's the question this article is trying to answer.
The Death of "Wax On, Wax Off"
The Dreyfus Model of Skill Acquisition describes five stages of competence: Novice, Advanced Beginner, Competent, Proficient, Expert. The critical insight is that you can't skip stages. You progress by doing, failing, and building pattern recognition through repetition. In The Karate Kid, Mr. Miyagi doesn't explain the philosophy of karate on day one. He has Daniel wax a car, sand a floor, and paint a fence. The muscle memory comes first. The understanding follows.
Mr. Miyagi's "wax on, wax off" method is the original onboarding framework: repetitive, concrete tasks that build muscle memory before the deeper principles are ever explained.
Juniors used to learn the same way. They wrote the unit tests. They built the CRUD endpoints. They handled the CSS tickets that nobody else wanted. The work was boring. That was the point. In doing repetitive, well-scoped tasks, they built the mental model of how a codebase holds together. They learned to read error messages. They developed the intuition for where bugs hide.
The Missing Rung: AI has automated the entry-level tasks that juniors used to use as their training ground, leaving a gap that's impossible to jump.
The bottom rungs of the career ladder (the boilerplate code, the unit tests, the CRUD endpoints) have been removed. And we're now telling juniors to jump straight to the top.
The result is what you might call the Hollow Senior: a developer with five years of "experience" that is really five years of prompting AI, accepting outputs, and shipping features without ever debugging a race condition, optimizing a slow query, or understanding why a seemingly simple database index choice can bring down a production system at scale. They have the title. They lack the intuition.
The research backs this up. Aalto University researchers studied a firm where automation dependence eroded core skills so thoroughly that when the system was removed, employees could no longer perform basic tasks. A Microsoft and Carnegie Mellon study found that higher AI confidence directly correlated with reduced critical thinking. And a METR randomized controlled trial found that experienced developers using Cursor completed real-world tasks 19% slower with AI than without, while still believing the tools would save them around 24% of their time.
The perception-reality gap is the most dangerous part. Developers, and by extension their managers, are systematically overestimating how much they're learning and growing.
The Problem with "Volume" Metrics
Here's where the measurement problem compounds everything.
If you evaluate a junior engineer on lines of code, tickets closed, or PR velocity, you've just handed them the wrong objective function. Goodhart's Law states that when a measure becomes a target, it ceases to be a good measure. That has always been a problem in software engineering. Under AI, it becomes catastrophic.
A junior developer measured on ticket velocity has a clear path to success: accept the first thing Cursor outputs, open a PR, move to the next ticket. The metric goes up. The learning doesn't. They're not a developer anymore. They're a human proxy for an LLM, and GitClear's analysis of 211 million changed lines of code shows exactly what this looks like in a codebase: refactoring activity dropped from 25% of changed lines in 2021 to under 10% by 2024. Copy-pasted code increased nearly 50%, from 8.3% to 12.3% of changed lines.
Martin Fowler has argued for decades that "copy and paste programming leads to high lines of code counts and poor design." AI-assisted development has automated copy-paste at scale. The metrics look great. The codebase quietly degrades.
And teams reinforce this. If your sprint retros celebrate ticket throughput and your 1:1s ask "what did you ship this week?", you're training your juniors to optimize for output over understanding. It's Goodhart's Law on steroids, because AI gives juniors the ability to hit the metric convincingly without doing the underlying work.
Kent Beck's model frames this as Effort, Output, Outcome, Impact. The mistake is measuring juniors on Output when what actually matters for their development is the quality of the Effort. Are they building understanding? Are they encountering and overcoming resistance? Or are they just turning prompts into PRs?
The Solution: Optimize for "Learning Velocity"
We need to change the definition of a junior's job from "shipping code" to "acquiring context."
That sounds idealistic until you think about what a junior is actually worth to your organization. The value is not their output today. It's their judgment in three years. Every task they genuinely understand, every bug they trace to its root cause, every architectural tradeoff they see play out in production, that's the compounding asset you're building. AI can inflate short-term output. It cannot shortcut the accumulation of engineering intuition.
This is where Span offers a genuinely different lens. Rather than counting commits or tickets, Span tracks ramp-up patterns at the developer level, giving engineering managers visibility into whether a junior is actually growing in scope and independence, or whether they're stuck in a loop of simple tasks regardless of how many PRs they're opening.
The key metric is Onboarding Velocity: tracking not just whether a junior merged their first PR, but how long it takes to reach their 10th, their 20th, and what the complexity profile of those PRs looks like over time. A junior who hits their 10th PR in week two by mass-prompting Cursor is not on the same trajectory as one who hits it in week six by working through progressively harder problems with genuine comprehension.
The detail Span surfaces is the rate of growth: is a junior's scope expanding over time, or are they still doing the same class of task at month four that they were doing at month one? That distinction is invisible to any tool that only counts activity. It's the difference between a junior who is on track to become a strong mid-level engineer and one who will be a permanent prompt-relay.
Practical Mentorship for the AI Era
Reframing the metric is necessary but not sufficient. You also have to change what you ask juniors to do, and how you structure your involvement.
Three practices that actually work:
1. The "No-AI" Sandbox. Assign specific tasks where AI assistance is off-limits. These don't have to be large. In fact, they shouldn't be. The point is forcing a junior to build a mental model before reaching for a shortcut. Debug this failing test without any external help. Trace this API call from the controller to the database by reading the code. The discomfort is the mechanism. You're not punishing them; you're creating the conditions for the muscle memory that AI use requires but cannot build on its own.
2. Reverse Code Review. Have your junior review the AI's output, or yours. Ask them to explain every line they didn't write. If they can't, that's diagnostic information, not a performance failure. It tells you where the gaps are and where to focus mentorship. The Engineering Enablement approach suggests requiring juniors to annotate AI-generated code in PRs with explicit notes on what they verified, what they changed, and why they trusted or rejected sections of the output. This turns passive AI use into an active comprehension exercise.
3. Measure "Review Depth." Span's PR review tracking lets you see whether a junior is leaving substantive comments on other people's code or just approving ("LGTM"). A developer who asks meaningful questions in review, who catches edge cases, who pushes back on naming or architectural choices, is building the judgment that will compound. A developer who rubber-stamps everything is not. These signals are invisible without deliberate measurement.
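A crude version of the review-depth signal can be computed from review comments alone. The heuristics below are illustrative assumptions for the sake of the example, not how Span classifies comments:

```python
# Comments that approve without engaging with the code (illustrative list).
RUBBER_STAMPS = {"lgtm", "looks good", "looks good to me", "+1", "ship it"}

def is_substantive(comment: str) -> bool:
    """Heuristic: a comment is substantive if it asks a question,
    references code, or says more than a short approval phrase."""
    text = comment.strip().lower()
    if text in RUBBER_STAMPS:
        return False
    return "?" in text or "`" in comment or len(text.split()) > 8

def review_depth(comments: list[str]) -> float:
    """Fraction of a reviewer's comments that are substantive (0.0-1.0)."""
    if not comments:
        return 0.0
    return sum(is_substantive(c) for c in comments) / len(comments)
```

Even a heuristic this blunt separates the reviewer who asks "what happens if this input is empty?" from the one who rubber-stamps every PR, which is exactly the judgment-building distinction the metric is after.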
The broader reframe here is that mentorship in the AI era is less about showing juniors how to write code and more about teaching them how to interrogate it. The hard skills shift: reading and evaluating code becomes more important than writing it from scratch. System-level reasoning, the ability to ask "what happens to this service when that one fails?", becomes more valuable, not less, because AI tools are poor at it. Addy Osmani describes senior engineers increasingly serving as "AI code reviewers," which means the skill of critical evaluation is now both the most important skill a junior can develop and the one most at risk of atrophy.
This takes explicit investment. As Wanderson Lacerda argues, training juniors is no longer a byproduct of getting work done. It's a capital expense. You have to budget for it, structure it, and measure it, or it won't happen.
Conclusion
The short-term math on cutting junior hiring is correct. AI is cheaper than a new grad for boilerplate. But the five-year math runs the other way. The systems AI is generating today will need human engineers to maintain, extend, and fix. The Google DORA 2024 Report found that for every 25% increase in AI adoption, delivery stability decreased 7.2%. Somebody has to own that stability. That somebody needs deep engineering judgment. That judgment takes years to build, and you can't build it if you stop hiring the people who need to develop it.
Mentorship is no longer a nice-to-have program on the culture page of your handbook. It's a survival strategy for the industry. Protect the junior role, not out of charity, but because your senior pipeline in 2030 depends entirely on the juniors you invest in today.
The teams that figure this out, that measure learning velocity instead of ticket count, that create deliberate space for skill formation alongside AI use, are the ones who will have senior engineers with real judgment when the rest of the industry is trying to figure out why their AI-generated codebase is on fire.
Are you worried about the next generation of engineers? Share your best strategy for mentoring in the age of AI in the comments below.

Top comments (1)
I joined the industry about a month and a half ago, and I'm already feeling the "brain rot" of using AI to do my job.
As a junior, I found this out the hard way. All it took was one "get data out of a messy Snowflake database" task to expose my weak areas.
That's when I noticed I relied on AI too much, well beyond just the coding itself.
I've realized that yes, AI has helped me a lot, but it has also been quietly hurting me at the same time.
It gets me through time crunches and fast-output tasks, but it costs me the understanding and the "intuition" you mention, the kind that will be valuable in 5+ years.
It was an eye-opener, and I felt a bit of imposter syndrome because of it.
So I've committed to diving back into the fundamentals, and I'll be putting your specific tips into practice as well!
Thanks for the insights, @jakkie_koekemoer !!