Mladen Stepanić

AI Broke the Knowledge Pipeline, Curiosity Can Save It

The junior developer market is in freefall. If you're early in your career right now, you already know this. Stanford's Digital Economy Lab looked at payroll data from ADP and found a roughly 20% decline in employment for software developers aged 22–25 since late 2022, concentrated in occupations where AI is replacing rather than augmenting human labor. Indeed's Hiring Lab reports that tech postings for junior-level titles are down 34% from five years earlier, while senior postings are down only 19%. The entry-level positions dried up, the bootcamp promises turned out to be lies, and the seniors you were supposed to learn from are too busy arguing about which AI agent to use. Everyone has an opinion on why this happened. Most of them are wrong, or at least incomplete.

Here's what I actually think, after 16 years in this industry and watching this unfold up close.


The Deal Changed, And Nobody Told You

Companies used to hire junior developers for a simple reason: they needed hands. Someone had to write all that code. It didn't matter if those hands belonged to someone who was still figuring things out - the work was defined enough, the tasks were small enough, and the cost was low enough that it made sense. Companies would hire ten juniors on the off-chance that two or three of them would eventually become capable seniors. The rest were interchangeable, and everyone quietly knew it.

AI ended that deal overnight.

The tasks that justified hiring a junior - the boilerplate, the repetitive edits, the scaffolding, the "just type this out" work - are now handled faster and cheaper by a $20 monthly subscription. Companies didn't eliminate junior roles out of malice. They eliminated them because the economic argument for those roles evaporated. What the market wants now is someone who can direct AI, evaluate its output, catch its mistakes, and build things that actually work. That's a medior skill set, minimum. The runway that used to exist — the grace period where you could grow into the job while doing it — is gone.

This is on the companies, but there's no fraud to point at. They made a rational economic decision that has irrational long-term consequences for the entire industry. More on that later.


The 10% Problem

In my 16 years, I've worked with a lot of developers. I've hired some of them. And I've noticed a pattern that few people are willing to say out loud: roughly 10% of the people who call themselves developers are genuine geeks. They want to tear things apart and see how they work. They go down rabbit holes nobody asked them to go down. They get unreasonably interested in why something is slow, or how a network request actually travels from a browser to a database, or what happens at the memory level when you create an object.

The other 90% are not bad people. Many of them are competent. Some of them would have eventually caught the bug — that moment where something clicks and curiosity ignites — if they'd been given enough time and the right environment. But most of them got into programming for the money, or because someone told them it was a safe career, or because it seemed more interesting than accounting. They settle into a role, learn the beaten path, and stay on it. A framework change is a crisis. A domain change is a nightmare. The goal, consciously or not, is to reach retirement without too many surprises.

The 10% were always going to be fine. AI just gave them better tools.

The 90% are the ones caught in the collapse.

My honest opinion: I don't think the 90% were born less curious. I think a lot of them had their curiosity educated out of them before they ever wrote a line of code.


What the Classroom Did First

I grew up in the Balkans. I remember my high school informatics teacher insisting, with complete confidence, that the fastest CPU available was a 100MHz Pentium. I had a 667MHz processor sitting on my desk at home. I was fifteen.

I was lucky and headstrong enough to know he was wrong, and I had the physical proof sitting at home, so I kept the curiosity anyway. But I remember being on the edge - that moment where you start to wonder if maybe you're the one who's confused, if maybe the system knows something you don't. Most kids, faced with that kind of authoritative wrongness, learn the lesson the system actually teaches: don't question, just answer what's on the test.

This isn't unique to the Balkans, and it isn't unique to informatics teachers. Research backs this up. Susan Engel, a developmental psychologist at Williams College, spent months observing elementary school classrooms with the specific goal of studying curiosity in children. What she found was that it was almost impossible to make comparisons because there was such an astonishingly low rate of curiosity in any classroom she visited. The children had simply learned not to bother wondering. As one education researcher put it: too many kids start out as exclamation points and question marks, and leave school as plain periods.

The mechanism isn't malice. It's institutional inertia - the same force that kept my teacher confidently wrong about CPUs while the industry moved on without him. The system rewards correct answers on standardized tests. It punishes questions that don't fit the schedule. It produces students who are very good at performing knowledge and very bad at seeking it.

That's the junior developer who arrives at their first job knowing the syntax but not the why. Who can follow a tutorial but freezes when the tutorial ends. Who is threatened by a framework change because the framework was the knowledge, not the understanding underneath it.


The Curiosity Science

Your instinct might be that curiosity is something you either have or you don't — that the 10% were born that way and the rest were always going to settle. The science is more interesting than that.

Researchers distinguish between two types of curiosity, a framework first articulated by psychologist Daniel Berlyne in the 1950s. Perceptual curiosity is the basic animal impulse toward novelty - the thing that makes a cat investigate a new box. Epistemic curiosity is the human-specific drive to close gaps in understanding, the inability to let a question go until it's answered. This second type is what separates the developer who reads the error message from the one who understands why the error happened.

Here's the important part: epistemic curiosity is linked to dopamine. A 2014 study from UC Davis demonstrated that curiosity activates the same reward circuitry as food or money, and that information you're curious about is remembered better - mediated by dopaminergic pathways connecting the midbrain to the hippocampus. Your brain rewards you for figuring things out. The more you experience that reward, the more you seek it. But if an environment consistently teaches you that questions are unwelcome, that the answer is what matters and not the understanding, that deviation from the expected path is a risk rather than an opportunity - the reward circuit doesn't fire. And what doesn't fire eventually doesn't reach for the trigger.

The 10% aren't more curious by birth. They're the ones who, for whatever reason - stubbornness, a good teacher, a computer at home, a parent who encouraged questions - kept getting that dopamine hit despite the system. The curiosity survived because something protected it.

For the 90%, something didn't.


The Broken Pipeline

Here's the part nobody in a position of power wants to say: companies are now paying for a problem they helped create.

The junior developer role wasn't just an economic arrangement. It was the industry's apprenticeship system. It was how experienced developers became mentors. It was how institutional knowledge transferred. It was the mechanism by which someone who was curious but unproven could prove themselves, fail safely, get corrected, and eventually become someone capable of doing the hard things.

That mechanism is gone.

The seniors who remain are stretched thin, mentoring nobody because there's nobody to mentor, and optimizing their own work with AI instead of passing knowledge to the next generation. Companies got what they optimized for - lower costs in the short term - and are quietly building toward a cliff.


The Long Game Nobody Is Talking About

I want to say something bold, and I want to be honest that it's a prediction rather than a certainty.

Big tech is betting everything on AI. The assumption baked into every major tech company's strategy right now is that AI will keep improving fast enough to compensate for the disappearing human pipeline. AGI is treated as an inevitability. The race is on.

Apple's own researchers published findings that should give everyone pause. Their GSM-Symbolic paper, presented at ICLR 2025, found that current AI models - including the most advanced reasoning models from the leading labs - show no evidence of genuine formal reasoning. The behavior is better explained by sophisticated pattern matching, so brittle that simply changing the names or numbers in a problem can produce substantially different answers. A follow-up paper, "The Illusion of Thinking", argued that these models collapse completely at high-complexity tasks (though it's worth noting that paper has attracted methodological pushback over whether the observed "collapse" reflects reasoning limits or just output-token limits). Either way, the underlying critique is not fringe. It's peer-reviewed research from inside one of the largest tech companies in the world.

Then there's the geopolitical precedent that nobody in tech seems to want to learn from. China spent decades supplying the world with cheap manufacturing. Companies optimized hard for that capacity, hollowed out their domestic capability, and built supply chains that assumed the arrangement was permanent. It wasn't. China shifted its pricing and its geopolitical posture, and suddenly the companies that had offshored everything were scrambling. The US in particular rushed to bring manufacturing back and discovered that capacity and expertise don't return quickly; you can't just throw money at the problem and have a semiconductor fab running in three years. TSMC's Arizona fab was announced in 2020 and is still ramping in 2025. Intel's Ohio fabs, originally scheduled to start production in 2025, have been delayed to 2030 and 2031. The knowledge pipeline had been broken for too long.

The parallel to AI should be uncomfortable. If the technology shifts, if the economics change, if the geopolitics of compute become complicated, if the model that was cheap becomes expensive: the companies that eliminated their human development pipeline will face the same problem. You can't rebuild a senior developer cohort in three years either. The people who should have been learning are doing something else now.

Add to this the regulatory risk. AI has already attracted the attention of governments in ways that could move fast. The EU AI Act is law, with prohibitions on certain practices already in force and full enforcement arriving in August 2026. The US has started restricting AI chip exports through the BIS AI Diffusion Rule. And in February 2026, the Trump administration ordered every federal agency to stop using Anthropic's Claude - followed weeks later by the Pentagon formally designating Anthropic a "supply chain risk to national security," the first time that label has been applied to an American company rather than a foreign adversary. The trigger wasn't that the AI was too dangerous; it was that Anthropic refused to remove restrictions preventing Claude from being used for mass domestic surveillance and autonomous weapons. The position has softened since - the same administration is now quietly encouraging Wall Street banks to test Anthropic's latest model - but the precedent is set, and that's what matters. Labeling certain AI applications a national security risk (which is no longer a theoretical scenario) puts regulatory walls around them overnight, and those walls outlast the political moment that built them. We have watched entire technology categories get constrained by regulation faster than anyone expected. AI is not immune, and the companies betting their entire talent strategy on its continued availability are making a concentrated bet on a highly uncertain variable.

And the electricity problem is real and getting less ignorable. The International Energy Agency projects that electricity consumption by data centers could more than double by 2030, largely driven by AI workloads. Goldman Sachs Research estimates a 165% increase in data center power demand over the same period. The costs are not going down as fast as the hype suggests. At some point, the economics of "just use AI" start to look different than they do today.

If the technology plateaus - or even just slows - companies will look up and discover that they eliminated the junior pipeline, stretched their seniors to the limit, and have nobody coming up behind them. The people who would have spent three years learning the hard things, failing on small problems, building the pattern recognition that makes a senior developer dangerous — those people went into other fields, or gave up, or are doing something else.

What started as a tectonic shift forward could turn out to be a slow-down in the long run. Not because AI wasn't useful. Because the humans who should have been learning alongside it weren't given the chance.


So What Do You Actually Do?

If you're a junior developer reading this, I'm not going to tell you it's fine. It's not fine. The market is genuinely hard right now, and the advice being handed out — "just learn AI tools," "prompt engineering is the future," "build your personal brand" — is mostly noise.

Here's what I actually believe:

If you're wondering what this looks like day-to-day — which fundamentals actually matter, how to use AI without becoming dependent on it, why boring tech is a career asset — I wrote about all of that in After the Panic: A Note for Junior Engineers. The short version: pay attention, don't panic, learn the thing underneath, choose boring tech, use AI as a lever instead of a crutch.

What I want to add here, specifically because this post is about the pipeline problem rather than the individual one:

Build things nobody asked you to. The job market doesn't have patience for "potential" right now. The most effective thing you can do is demonstrate curiosity in concrete form — a side project, an open source contribution, a written breakdown of something you investigated. Not for the resume. Because the act of building something real, for no external reason, is what keeps the curiosity alive when the market is telling you to give up.

Figure out whether you're in the 10% or the 90%. Not to categorize yourself permanently, but to be honest with yourself about whether the thing that drives you is the craft or the outcome. Both are legitimate answers. But they lead to different decisions.


The Uncomfortable Ending

The industry created a generation of developers who were trained to type code rather than understand it. The education system before that created students who were trained to produce correct answers rather than ask good questions. Companies hired them because it was economical, then discarded them when something cheaper came along.

Some of those people would have become exceptional engineers, given time. We'll never know, because the time wasn't given.

The companies now racing toward an AI-only future are making a version of the same bet again: that the tool is enough, that the human pipeline is a cost center rather than a source of knowledge, that the short-term optimization is the right one.

Maybe they're right. Maybe the technology gets there.

Or maybe, in ten years, someone will be asking who's going to fix the mess the AI made — and there won't be enough senior developers left who remember how.

I've got a feeling I'll be busy.
