A junior engineer on my team pulled me aside recently. Not to ask for help. To share a concern. He told me he wasn't always sure he understood everything the AI was outputting for him. He'd been reading it, checking it, shipping it. But he couldn't reliably tell if it was right. He was learning, he said, by reading what the AI wrote.
I sat with that for a minute.
He's more self-aware than most. What he described is the default mode for junior engineers right now across teams everywhere, not just mine. They get access to AI tools. The output looks like code. It compiles. The basic tests pass. So they call that learning.
It's not. Reading AI output is like copying the theorem off the board without working the proof. The notation is right. The understanding isn't there. And the gap between those two things is invisible until production breaks.
The Multiplier Goes to Those Who Need It Least
A study tracking AI credit usage across engineering teams found that seniors were using the tools four to five times more than junior engineers. The engineers gaining the most from the multiplier were the ones who least needed the help.
The first read is that juniors need to be pushed toward adoption. But I don't think that's what's happening. The story is that seniors know what to ask for. They can evaluate the output the moment it appears. They use AI for boilerplate, test generation, migrations ... the parts of the work where their judgment is already baked in and they're buying back time on execution.
A senior engineer with AI is a fighter pilot with autopilot. The autopilot does a lot. But everything it does is evaluated by someone who has logged the hours. Someone who knows what a wrong answer looks like before the instruments catch it.
Give that same tool to someone who skipped the fundamentals and you get something that looks identical from the outside. Code gets written. PRs get opened. Features ship. Then production breaks at 2am and the engineer is googling their own code like it's someone else's crime scene.
The senior engineer with AI is a fighter pilot with autopilot. The junior without fundamentals is pressing every button that lights up.
The Vibe Coder Is Already in Your Code Review
The vibe coding conversation has been loud enough that most engineering leaders have an opinion on it: the vibe coder is someone who ships code they can't explain, chasing vibes instead of understanding. Most teams feel confident they don't have this problem.
I'd look again.
What my junior engineer described is structurally identical to vibe coding; it just arrives wearing the uniform of a developer workflow. AI-assisted output that the engineer learned from by reading rather than building. Code that functions in local environments but carries invisible load-bearing assumptions nobody interrogated. Engineers who can't tell you with confidence whether the output is right, because they never developed the instinct that fires when something is quietly wrong.
The vibe coder built nothing and shipped. The junior engineer with AI is building through the AI and shipping. The result looks different in the PR. Under real production load, at 2am, the difference gets harder to find.
The tell is in the question they ask when something breaks. A senior engineer asks where the failure is and why. A junior engineer who built entirely through AI asks what the AI gave them and how to fix it.
One is debugging. The other is customer service.
I Was the Skeptic First
I want to be honest about something.
My own chapter one with AI was resistance, not adoption. I wasn't the person who ran toward these tools immediately. I tested them skeptically. I failed with them. I had to build a real sense of when to trust the output and when to override it before I could use AI the way I use it now. That process ... the friction of it, the reps of being wrong and having to figure out why ... is what gave me the judgment I bring to every AI interaction today.
My junior engineers are skipping that chapter one. And it's not their fault. The system isn't asking them to take that journey. They're not being asked to be wrong and learn from it. They're being asked to ship. The tools arrive, the output appears, and the workflow moves forward whether or not the engineer could have written any of it themselves.
As a leader, it's my job to stay ahead of what the tools are creating, not just what they're enabling. And what they're creating right now is a generation of engineers learning by reading AI output and calling that the same thing as building.
My junior engineers are getting the tools without the journey. And the system isn't asking them to take it.
Does the Foundation Still Matter?
The counterargument I hear underneath all of this is whether it even matters anymore. The code ships. The product works. Maybe requiring engineers to understand what they're building is an older generation's concern dressed as wisdom.
I'd argue it matters more now, not less.
The FAA doesn't reduce manual flight hour requirements when autopilot improves. It maintains them, because what automation makes better at the margins is the same thing it makes more catastrophic when it fails. The pilot with the most manual hours is also the best at using autopilot, not because those hours create nostalgia for hand-flying, but because they build the pattern recognition that catches what autopilot misses.
Engineering is the same. AI makes gaps invisible. It papers over missing reps with correct-looking output. A senior engineer can see through the paper because they've been on the other side of that code, in production, under load, when the assumptions break. They know what the paper is hiding because they've written it.
A junior engineer who built primarily through AI doesn't have that. Not because they're less capable. Because they haven't had to be wrong in the right environments yet.
The Gap Is Already Running
My senior engineers are doing things right now that looked impossible eighteen months ago. I watch it and feel two things at once. Pride in what they're capable of producing. And a harder question I can't stop thinking about.
Their judgment ... the thing that makes them dangerous with the tool ... came from experiences that no longer happen the same way. Debugging at 11pm. Writing systems from scratch and watching them fail. Refactoring under pressure with nowhere to hide. Those aren't just memories. They're the reps that built the evaluation muscle they now bring to every AI output.
AI is doing those reps for the next generation of engineers.
The juniors who flagged their own concerns to me are the ones I'm watching most carefully. Not because they're behind ... but because they already sense something is missing. That awareness is the beginning of the foundation. They're building it slower, but they're trying to build it.
The ones I worry about are the ones who don't notice the gap at all. Who ship with confidence because the output compiled and the tests passed and nothing has broken yet.
It will.
Multiplying zero is still zero. And the engineers watching their seniors use AI to do in an afternoon what used to take a week should be asking themselves a different question than how to use the tool.
They should be asking what their seniors built before the tool arrived.