Jono Herrington

Originally published at jonoherrington.com

AI Doesn't Fix Weak Engineering. It Just Speeds It Up.

The trap of polished-looking output

"Weak engineers with AI still produce weak output. Just faster." That was the whole point. AI changes speed. Not judgment. If your team already struggled to make sound architectural decisions, the tool doesn't rescue them. It just helps them make more bad decisions faster. The same gaps. Compressed into a tighter window.


I was on the phone with a friend who runs a CMS platform. We were talking about AI adoption across his customer base when he cut through the hype in ten seconds.

"Sh*t in, sh*t out," he said. "AI doesn't solve the decades of issues that distributed teams present."

That was it. The conversation shifted. He'd been watching companies make the same bet ... ship work to lower-rate markets with the expectation that AI would cover the gap. The tool doesn't fix coordination problems. It doesn't fix unclear ownership. It doesn't fix architectural decisions that get revisited every three months because nobody ever aligned on the tradeoffs.

AI just produces output faster. Good or bad, it comes out faster.

Speed Without Foundation

My friend sees the pattern across his customer base. Companies that struggled with architectural decisions before AI haven't found a shortcut. They've found a way to compress the same gaps into a tighter window. The teams that were already shipping inconsistent patterns, unclear ownership boundaries, and technical debt that accumulates silently ... those teams are now doing all of that faster.

If your team already struggled to make sound architectural decisions, AI doesn't rescue them. It just helps them make more bad decisions faster.

I've seen this pattern enough times now to recognize it. Teams adopt the tooling, see initial velocity gains, and mistake speed for health. The metrics look good for a sprint or two. Then the accumulated weight of unchecked decisions starts showing up. Refactors that should have been caught in review. Patterns that diverged across the codebase. Technical debt that formed silently because everyone was moving too fast to notice.

The tool didn't create the problem. It revealed how little structure was there to begin with.

The Judgment Gap

What separates teams that thrive with AI from teams that struggle isn't the AI. It's judgment.

Teams with strong judgment can evaluate what the model produces. They know their patterns. They understand their tradeoffs. They can look at generated code and recognize when it fits and when it's a mismatch. AI becomes a force multiplier for people who already know what good looks like.

Teams without that judgment can't evaluate what they're getting. They're outsourcing decisions they never learned to make themselves. The result isn't better engineering. It's faster execution of uncertain choices.

Teams without judgment can't evaluate what they're getting. They're outsourcing decisions they never learned to make themselves.

This is the uncomfortable truth about AI tooling in engineering. It doesn't level the playing field. It steepens the curve. The gap between teams with strong technical judgment and teams without it gets wider, not narrower. The strong teams move faster and build better. The weak teams move faster and build more of what they already had.

The Oracles We Build

I was the oracle on a team once.

Decisions ran through me. The projects that worked were the ones I was close to. I read that as a signal that I was adding value. It was actually proof that I'd built dependency, not capability. The engineers weren't deferring to me because my judgment was better. They were deferring because I had never built a culture where their judgment was tested. When I stepped back, the decisions didn't get easier. They just got slower and more uncertain.

That same pattern is what worries me about AI tooling in weak engineering cultures. When you stop making decisions yourself, you stop building the judgment that lets you evaluate decisions made by others. Including decisions made by models.

A senior engineer told me a story that still sits with me. He had spent years building systems, switched to mostly directing AI agents, then later hit a production memory issue and realized the instinct to debug was gone. Not degraded. Gone.

When ChatGPT arrived, teams like the one I used to run had an obvious replacement oracle. Different interface. Same problem underneath.

What Actually Matters

The teams that thrive with AI have done the work before the tool arrived. They don't need AI to tell them what good looks like. They already know.

They have clear standards. Not just lint rules and style guides ... real standards that describe how decisions get made, what tradeoffs matter, when to follow the pattern and when to break it. Standards that live in documentation and in practice. The same person can explain why something was built that way and why it shouldn't have been. That's the sign of a healthy standard.

They have a review culture that interrogates before approving. Reviews that ask "why" before checking the boxes. Reviews that create space for pushback without making it personal. Where junior engineers can question senior decisions and senior engineers can admit when they missed something. The authority isn't in the title. It's in the reasoning.

The teams that thrive with AI have done the work before the tool arrived. They don't need AI to tell them what good looks like. They already know.

They have engineers who can defend decisions in their own words. Not quote a recommendation. Not cite a benchmark someone else ran. Construct the argument. Weigh the tradeoffs. Say "here's what I considered, here's what I chose, here's what I'm watching to know if I was wrong." That capability is what makes AI output useful instead of dangerous.

The Work Before The Tool

If you're leading a team that's adopting AI tooling, the question to ask isn't about usage rates or productivity metrics. It's about judgment.

Can your engineers evaluate what the model produces? Do they have the framework to tell a good recommendation from a bad one? Can they explain why they're accepting or rejecting what AI suggests, or are they just accepting what looks plausible?

The work that matters happens before anyone opens the tool. It's the standards you set. The review culture you build. The time you spend teaching engineers to think instead of just execute. AI doesn't replace any of that. It requires it.

AI doesn't replace the work of building judgment. It requires it.

I had that moment myself with Cursor. Opened it, used it for ten minutes, shut it down. The suggestions arrived faster than I could evaluate them. Every keystroke generated a new option to consider, a new pattern to question, a new decision to make. It wasn't helping. It was flooding.

Later I recognized what that was. Not that AI was bad. That I needed to be clearer about what I was looking for before I could use it well. The teams that will thrive in this transition are the ones who recognize that same signal.

That's The Real Question

My friend on the phone wasn't worried about whether companies were using AI. He was worried about what they were expecting it to fix. Decades of coordination problems don't disappear because the tool got better.

AI doesn't fix weak engineering. It just speeds it up.

The question for every team is whether that's something you want. Whether your foundation can handle the acceleration. Whether your engineers can evaluate faster without losing the thread of what actually matters.

If they can, AI is a multiplier. If they can't, it's just faster output of the same problems you already had.

That's the conversation worth having. Not whether to use AI. Whether you're ready for what it will amplify.


One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. Subscribe for free.

Top comments (84)

Ben Halpern

To the extent it has "fixed" engineering in my domains: When there is a good foundation, there's less appetite for cutting corners with boilerplate. By making a lot of the toil faster, you're less likely to accept expediency-related tradeoffs if you have the foundation of docs, standards, and a good codebase to pattern-match against.

Jono Herrington

Exactly. That is the distinction people miss. When the foundation is strong, AI removes drag without lowering the bar. Teams with real standards, usable docs, and recognizable patterns do not just move faster. They have fewer excuses to ship sloppy work because the path of least resistance is already the right one.

Mike Talbot ⭐

This is the difference between a "from scratch" vibe-coded slop app and an incredible performance boost in productivity. We still aren't at the point where that core architecture and system are laid down by an AI to start with, but I find, where it exists, we have more tests, more robustness coupled with massively increased velocity.

Frank Brsrk

sharp

Muggle AI

Can I push on the frame? “Weak engineering” reads like the problem is the engineer. Two other framings worth considering.

One — the translation industry has a known pattern. Translators fluent in both languages still produce wrong translations when the source has idiom the translator’s own culture doesn’t share. Not weak translators. Fluency and cultural knowledge are different competencies. AI speed exposes the same split — fluency in code generation doesn’t cover awareness of production-user context.

Two — the thing AI is actually speeding up is the generate-test-merge loop. Judgment addresses generation quality. It doesn’t address whether the test suite asserted the right behavior, especially when the AI writes both. That’s a different structural issue than engineer strength.

False positives are still a cost on our own runs. The tool tells you a flow broke, not always which variant of broke matters most. Fair. But “weak engineering” as the root cause is too generous to the checklist layer and too harsh on the humans.

Jono Herrington

I get the concern with the wording. This isn’t about blaming engineers. It’s about the system they’re working inside.

  • If interfaces are unclear
  • If tests don’t assert real behavior
  • If context is missing

... that’s still engineering.

Your translation example is solid. Fluency and understanding aren’t the same. But that gap still comes from what we defined, what we validated, and what we left loose. AI just makes that visible faster.

Harsh

This is the point that needs to be shouted from the rooftops.

AI doesn't fix weak engineering; it just makes it faster. Garbage in, garbage out, but now the garbage looks polished, so you trust it more. That's the trap.

I ran a 30-day no-AI experiment recently. The conclusion? When my fundamentals were solid, AI was a multiplier. When I was confused, AI gave me confident wrong answers.

AI is a multiplier. It doesn't change the base.

Thanks for this. 🙌

Jono Herrington

That polished garbage point is exactly the danger. Bad output used to at least look suspicious. Now it arrives fluent, structured, and confident enough to slip past weak review habits. Your no-AI experiment gets right to it. AI multiplies strength, but it also multiplies confusion. The base still matters.

Archit Mittal

The 'speed without foundation' framing is exactly right. I've seen this play out with clients who want to adopt AI-powered automation — the ones with clean processes and clear data models get 10x value. The ones with spaghetti workflows just get faster spaghetti. The pattern I've noticed: teams that benefit most from AI coding tools are the ones that were already good at writing clear specifications and breaking problems into small, testable pieces. AI amplifies your existing engineering culture, for better or worse. The tool didn't change — the foundation underneath it did.

Jono Herrington

Yes. That is the pattern. Teams keep talking about AI adoption like the tool is the variable, when the operating system underneath is what decides the outcome. Clear specs, clean process, small testable pieces, all of that existed before AI. The tool just exposes whether those disciplines were real or not.

Narnaiezzsshaa Truong

AI is a tool.
How a tool is used depends on training, context, incentives, and the system around the user.
So when things go wrong, where does the fault actually sit?
The “fault” isn’t a single point. It’s a system of pressures that converge on the engineer.

Jono Herrington

We look for the engineer who clicked accept. But the engineer is operating inside constraints that were set long before that moment. The deadline pressure. The review culture that rewards speed. The implicit message that using AI is expected, and asking for time to verify is friction.

Fault is a distraction. The question is what conditions you're creating ... and whether they make good judgment possible or just fast output likely.

Survivor Forge

This is accurate, but I'd push the frame one level deeper: AI doesn't just amplify bad judgment — it particularly amplifies bad system design. After 1,100+ coding sessions with Claude Code I've noticed the failure mode isn't individual weak decisions so much as weak interfaces between components. Poorly typed function signatures, implicit assumptions in shared state, missing error contracts — the model generates syntactically valid code that satisfies the immediate task but inherits every ambiguity you left in the design. The engineers who thrive are the ones who've learned to make system interfaces explicit before asking the model to fill them in. Your 'judgment gap' framing is right, I just think the specific judgment being tested is interface clarity rather than general engineering skill.
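To make that concrete, here is a minimal sketch of the same boundary both ways (hypothetical names, TypeScript, persistence elided): the first version leaves every decision implicit, the second puts the decisions in the types.

```typescript
interface UserProfile {
  displayName: string;
  email: string;
}

// Ambiguous boundary: return shape, failure modes, and ownership of retries are
// all implicit. A model (or a teammate) will satisfy this signature while
// inheriting every assumption left undecided.
async function syncUserLoose(data: any): Promise<any> {
  return { ...data, synced: true };
}

// Explicit boundary: the decisions live in the types, so whatever fills in the
// body has to respect them.
type SyncResult =
  | { ok: true; updatedAt: Date }
  | { ok: false; reason: "not_found" | "conflict" | "upstream_unavailable" };

async function syncUserStrict(
  userId: string,
  fields: Partial<UserProfile>
): Promise<SyncResult> {
  if (!userId) {
    return { ok: false, reason: "not_found" };
  }
  void fields; // persistence elided in this sketch
  return { ok: true, updatedAt: new Date() };
}
```

Same task, but in the second version the ambiguity has nowhere to hide.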

Jono Herrington

You’re getting more specific on where it breaks.

Interfaces are where it shows up first.

The model will respect whatever boundary you give it. Even if that boundary is vague.

So the ambiguity doesn’t slow things down anymore. It scales.

That still lands in the same place for me.

Deciding what gets explicit vs implied is the job. AI just removes the delay between that decision and the outcome.

Survivor Forge

Exactly — and that's what makes it harder to ignore than the productivity argument. The productivity gains are obvious. The amplification of architectural decisions is quieter and shows up later. By the time you notice the ambiguity scaled, you've got a codebase full of confident mistakes that all point the same direction.

PEACEBINFLOW

This is spot on — but I think there’s one layer underneath it that’s easy to miss.

AI doesn’t just “speed up engineering” — it exposes that most systems were already operating on rigid input/output patterns. What AI really adds is a new interpretation layer on top of those same systems. The core hasn’t changed. The interface has.

That’s why it feels like acceleration instead of transformation.

Where things actually start to shift is in patterning. Not just coding patterns, but how data is structured, how decisions are framed, and how interaction flows are designed in an AI-first environment. We’re no longer just executing logic — we’re shaping how systems understand and respond.

And that has implications beyond speed:

  • Old roles don't just get faster — they get decomposed
  • Decision-making gets externalized into prompts and systems
  • New roles emerge around shaping, validating, and evolving these patterns

So yeah, weak engineering gets faster. But more importantly, unclear patterns get amplified.

AI isn’t just a multiplier of execution — it’s a mirror for structure.

The real shift isn’t “can you build faster?”
It’s “can you define the patterns well enough that faster actually means better?”

Jono Herrington

There’s something real in the pattern shift. Especially around how decisions get pushed into prompts and systems.

Where I’d push back is calling it just a new layer. The pressure changed. Things that held together at human speed don’t hold when iteration compresses. So unclear patterns didn’t just show up. They got exposed.

The bar moved to:

Can you define things clearly enough that speed improves the result?

Roger Wang

Many people read this article and take away a simple message: “AI has risks.”
But I think there’s a more precise way to put it:

AI doesn’t make your engineering stronger — it makes it more honest.

If your system is messy, unexplainable, and not reproducible,
AI will only make those problems surface faster.

So the real question isn’t about AI at all:

👉 Is your engineering actually structured?

What I’m working on goes in the opposite direction —
not making AI more powerful, but making the process more stable:

  • Separate thinking layers with REQ / SPEC / ADR / CONTRACT
  • Make context, memory, and decisions auditable
  • Let AI focus only on reasoning — not controlling flow, not writing back

With these constraints, AI doesn’t amplify chaos —
it operates within a safe, well-defined boundary.

So instead of saying “AI accelerates bad engineering,”
I’d frame it like this:

AI is a mirror.
It doesn’t make you stronger —
but it makes it impossible to pretend you already are.

Jono Herrington

There’s something real in that. AI does expose what was already there. It removes the hiding places. Gaps show up faster, and you feel them sooner.

Where I’d push a bit is on the “mirror” idea. A mirror reflects. AI also amplifies. It doesn’t just reveal weak structure, it scales it. That’s why teams feel the pain so quickly. The structure you’re describing is the right direction though. Making decisions explicit, separating layers, keeping AI inside boundaries ... that’s how you keep the system from drifting as speed goes up.

The interesting shift is this ... once you do that, AI stops being something you manage and starts becoming something your system can safely absorb.

Keren Flavell

The "judgement" you are looking for can be solved through cognitive architecture. Rather than rely on a single LLM to deliver a failsafe response, you can thread your question or task through a handful of different models. Each model can possess different thinking skills (analytical, strategic, pattern matching) and come from different model providers.

Jono Herrington

That setup helps with perspective. It doesn’t fix direction.

If the task is loose, you just get multiple confident answers instead of one. More models doesn’t mean better judgment. It just means more output.

The constraint still has to come from you.

Mike Talbot ⭐

Hmmm, not sure I agree. I find that a cognitive architecture that promotes deliberate rock throwing between different models exposes areas of weakness and leads to much better decisions. So long as those decisions are documented and accreted into a growing architectural base, there generally isn't any re-inventing going on later.

Elmar Chavez

I'll say this again: AI will be generating more work than it removes. AI exposes engineers with a weak foundation, or none at all. Good engineers will be the heroes who fix this mess.

Jono Herrington

You’re right that AI is going to create a lot of downstream cleanup for teams with weak fundamentals. What worries me is how invisible that mess looks at first. The code ships. The ticket closes. The output looks polished enough to trust. Then the real bill shows up in rework, fragility, and systems nobody fully understands.

That’s where strong engineers separate themselves. Not by generating more, but by seeing what should never have been accepted in the first place.
