Your leadership thinks AI means you'll write the code in seconds. Read and understand it all. Push to production without breaking stride. And never forget anything. They've never watched a junior engineer prompt through complexity he should have wrestled with. Never seen a senior freeze when the AI suggestion doesn't match the pattern she knows is right. Never been in the room when the thing that "should just work" ... doesn't.
The cargo cult is real. AI-driven. 10x. The words arrive from people who don't touch the system. They sound like strategy. They function as pressure.
What's actually happening is simpler. Your engineers are learning two workflows now. The one with AI assistance. And the one where they have to understand what the AI produced well enough to evaluate it. That's not 10x. That's overhead.
The gap between promise and reality doesn't shrink when leadership repeats the promise louder. It just makes the reality harder to name.
What 500 Engineers Described
Around the same time I started noticing this pattern on my own team, I was reading a thread that had grown past five hundred responses from experienced engineers describing what happened at their companies after executive mandates for AI adoption. The volume alone was striking. What stayed with me was the texture of the frustration.
Nobody was complaining about the tools. The tools worked.
The frustration was more specific than that. Engineers described sprint planning sessions where leadership talked about AI-driven velocity while the team was debugging something AI-generated last week that nobody fully understood before it shipped. They described the cognitive load of learning two workflows ... the one where AI helps you move faster through problems you understand, and the one where you have to evaluate AI output for problems you're still figuring out.
The gap between what executives imagine and what engineers deliver ... that's the work. And it's not going away.
The engineers who felt this most acutely were often the best ones. The senior engineers who could spot when an AI suggestion didn't match the architectural pattern the team had agreed on. The ones who knew that "technically valid" and "consistent with our standards" are two different bars, and that AI hits the first reliably while missing the second constantly.
Companies were measuring the spread of the tool. Nobody had built a way to measure what was happening to the engineers using it. The dashboard said adoption was up. The engineers knew what they were actually shipping.
When We Rolled Out AI Without Guardrails
I've lived inside this gap myself. Not as an executive repeating promises, but as a leader who gave the team powerful tools without enough structure to use them well.
We rolled out AI coding tools six months ago. No mandate, which was good. But also no guardrails, which was not. I told the team to explore. I told them to see what was possible. What I didn't do was give them a shared definition of what "good" looked like when AI was involved in producing the code.
PRs got bigger. Review times stayed flat. And somewhere in week six, I started noticing the same architectural decision solved three different ways across three different services. Try/catch blocks blanketing everything in one service. A custom logging wrapper nobody else knew existed in another. Home-built retry logic in a third that duplicated what was sitting two directories over.
All technically valid. All completely inconsistent. All generated, at least in part, by the same AI tools we had just given the team.
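To make that concrete, here is the shape of the duplication. This is a hypothetical sketch, not our actual code; the file paths and function names are placeholders. A shared retry helper already existed, and an AI-assisted change in another service quietly reinvented it with slightly different semantics.

```ts
// src/shared/retry.ts -- the helper that already existed, two directories over.
export async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // exponential backoff between attempts
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 100));
    }
  }
  throw lastError;
}

// src/orders/client.ts -- meanwhile, another service grew its own version:
// fixed delay, different attempt count, different failure behavior.
async function retryWithBackoff<T>(fn: () => Promise<T>): Promise<T> {
  for (let i = 0; i < 5; i++) {
    try {
      return await fn();
    } catch {
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
  }
  throw new Error("retryWithBackoff: all attempts failed");
}
```

Both compile. Both pass review if the reviewer only asks "does this work?" Neither agrees with the other.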
I called it a junk drawer with a CI/CD pipeline. The description was accurate. And so was the part I didn't want to admit yet ... I had helped create the conditions where AI could scale our inconsistency faster than our coherence.
When a tool that generates code all day is working on a codebase that never agreed on anything, it scales inconsistency instead of coherence.
My first instinct was to fix the AI. Tighter prompt guidelines. Review checkpoints before AI-generated code could make it into a PR. But pulling up the git history showed me the inconsistency predated our AI rollout by two years. The AI hadn't created any of this. It had just started moving faster than we had.
The fix had to happen at the human layer before we touched anything else. We stopped. Documented patterns. Wrote decision records. Built architectural tests that failed the build when someone drifted from the blessed path. Only then did we build AI guardrails on top of standards the humans had already agreed on.
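The architectural tests don't have to be elaborate. Here is a minimal sketch of the idea in TypeScript, run as a CI step that exits non-zero when code drifts from a documented decision. The paths, rule names, and patterns below are hypothetical placeholders, not our actual rules; the point is that each rule encodes one decision the humans already agreed on.

```ts
// scripts/arch-check.ts
// Fails the build when source files drift from documented patterns.
import * as fs from "fs";
import * as path from "path";

// Each rule encodes one decision record: what is banned, where it's allowed, what to use instead.
const rules = [
  {
    name: "no-ad-hoc-logging",
    pattern: /console\.(log|warn|error)\(/,
    allowIn: /shared\/logger\.ts$/, // the one blessed logging wrapper
    message: "Use the shared logger from src/shared/logger instead of console.*",
  },
  {
    name: "no-home-built-retry",
    pattern: /function\s+retry(With)?Backoff/, // naive signal for duplicated retry logic
    allowIn: /shared\/retry\.ts$/,
    message: "Retry logic already exists in src/shared/retry. Import it.",
  },
];

// Recursively collect TypeScript source files under a root directory.
function sourceFiles(dir: string): string[] {
  return fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) return sourceFiles(full);
    return full.endsWith(".ts") ? [full] : [];
  });
}

const violations: string[] = [];
for (const file of sourceFiles("src")) {
  const text = fs.readFileSync(file, "utf8");
  const posixPath = file.split(path.sep).join("/"); // normalize for the allowIn regexes
  for (const rule of rules) {
    if (rule.pattern.test(text) && !rule.allowIn.test(posixPath)) {
      violations.push(`${file}: [${rule.name}] ${rule.message}`);
    }
  }
}

if (violations.length > 0) {
  console.error("Architectural check failed:\n" + violations.join("\n"));
  process.exit(1); // non-zero exit fails the CI build
}
```

Wire something like this into the pipeline before you add any AI-specific guardrails, so the constraint the build enforces is the one the team already wrote down.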
The junk drawer cleaned up within weeks once the constraints were real. But that's not the lesson. The lesson is that it took six weeks of quiet chaos before anyone asked whether the system could still be understood.
What the Gap Actually Looks Like
The cargo cult language ... AI-driven, 10x, transformation ... it travels from people who don't touch the system to people who do. And it carries an implicit accusation: if you're not shipping 10x, you're not using the tool right.
But the engineers know. They're sitting in sprint planning hearing about AI-driven velocity while they're debugging something the AI generated last week that nobody fully understood before it shipped. They're watching junior engineers prompt through complexity they should have wrestled with, shipping code that compiles but that they can't explain. They're seeing senior engineers freeze when the AI suggestion doesn't match the pattern they know is right, unsure whether to trust their own judgment or the tool's output.
The spark is leaving before the code breaks. First the engagement drops. People start pulling back. Then the quality of the thinking shifts ... problem-solving gets thinner, tradeoffs get accepted faster, the energy behind review comments changes. Engineers who used to wrestle with hard decisions start giving quicker, flatter responses.
The code still compiles. Tests still pass. Shipping continues. Nothing looks broken on the dashboard. But something has already shifted in the room.
What We Did Instead
When I introduced AI to my team at Converse, I didn't send a mandate. I gave them two weeks of blocked time to explore the tools without meetings or delivery pressure. No targets. No adoption metrics. Just space to figure out what worked and what didn't.
My tech lead started a weekly call where the team shared what worked, what didn't, and what surprised them. I wasn't in every session because the point wasn't control. The point was shared curiosity.
I walked the team through how AI changed my own workflow, including what I tried, what surprised me, and what I still don't trust it with. Leadership by visibility, not by slogan.
The team now uses AI heavily, and they still enjoy their work because adoption was driven by curiosity rather than compliance. High AI usage and healthy engineering culture are not in conflict when leadership gets the rollout right.
But that required me to stop repeating the executive promise and start acknowledging the reality. AI doesn't 10x your team. It multiplies what you already have. If your foundation is solid, AI accelerates good work. If your foundation is inconsistent, AI accelerates the drift.
The Questions That Actually Matter
The gap between promise and reality doesn't shrink when leadership repeats the promise louder. It just makes the reality harder to name. And the cost of not naming it is paid by the engineers who are learning two workflows while being measured as if they've already mastered one.
Ask your execs when they last reviewed a pull request. Ask when they last sat with an engineer who explained why the AI output wasn't quite right. Ask what 10x actually looks like in practice.
The gap between what they imagine and what you deliver ... that's the work. And it's not going away.
Your team knows. Ask them.
That's something.
One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. Subscribe for free.