DEV Community


AI Didn't Break Your Culture. It Exposed It.

Jono Herrington on March 25, 2026

An engineer pushes back on a decision. The response: "ChatGPT recommended something else." The tell isn't the recommendation. It's that they reached for an oracle instead of an argument.
Mykola Kondratiuk

"reached for an oracle instead of an argument" - that framing is exactly right. I've seen the same dynamic play out when AI gets introduced to a team that already had low psychological safety. Suddenly everyone's hiding behind "the model says" instead of owning a position. The tool just makes the pre-existing avoidance behavior more visible and more frequent. The teams that use AI well are usually the ones where people were already comfortable being wrong.

Jono Herrington

Yes. Low safety turns AI into cover.

Once people learn they can hide behind the tool, the model becomes a shield for avoidance instead of a tool for better thinking. One of the fastest signals is hearing what the model said before hearing what the engineer thinks.

Mykola Kondratiuk

That last signal is such a good one. When "the model said" comes before any personal reasoning, it's often a sign the human already exited the conversation.

Prasoon Jadon

This is one of the clearest takes I’ve read on AI and engineering culture.

What really hit me is the idea that “the oracle was always there.” Swapping Stack Overflow or a senior dev for ChatGPT doesn’t change the behavior—it just removes the friction and exposes how little independent reasoning was happening in the first place.

The line that stuck with me most: “teams learn to cite sources instead of building judgment.” That feels uncomfortably true, not just in engineering but in how we learn anything today.

Also appreciate the accountability here. It’s easy to blame tools, but this reframes it as a leadership and culture design problem:

  • Do engineers actually understand why decisions are made?
  • Can they defend tradeoffs without leaning on authority?
  • Is “why” encouraged—or quietly punished?

AI as a force multiplier vs. a junk drawer generator is such a sharp distinction. Same tool, completely different outcome depending on whether a thinking culture already exists.

Honestly, this feels less like an AI post and more like a blueprint for building real engineering judgment.

Great piece, Jono Herrington.

Jono Herrington

Thanks! That friction point matters a lot.

Before AI, borrowed thinking at least had some delay built into it. You had to go search, ask around, or wait on someone senior. Now weak reasoning can move at the same speed as strong reasoning. That raises the value of teams that teach people to construct an argument in their own words.

Prasoon Jadon

true

Anna Villarreal

"Reasoning, not rules" --epic! A mantra that allows for growth 💯

Victor Okefie

"The teams that win won't be the ones with the best AI policies. They'll be the ones who built the culture where engineers can defend a decision in their own words before the oracle arrived." That's the line. AI didn't create the dependency; it just made it faster and more quotable.

Jono Herrington

Exactly. The dangerous part is when teams stop treating reasoning as part of the work and start treating it as a lookup problem. Once that happens, AI is not the cause. It is just the fastest possible delivery mechanism for a habit that was already there.

Shivani

Spot on, Jono—this hits hard because I've seen it play out in real client teams, especially in data-heavy environments like Databricks migrations or Shopify API integrations.

We consulted for an e-commerce client last year: solid engineers, cranking out Spark jobs and webhook automations. But when a junior pushed back on a Lakehouse schema choice, the lead's retort? "ChatGPT says columnar is always faster." No tradeoff discussion, no mention of their read-heavy BI workloads where row-oriented won out. It exposed the gap: no ADRs documenting why we'd picked Delta Lake patterns before. The "oracle" shortcut killed inquiry.

What flipped it? We ran a quick "reasoning retro": engineers rewrote one decision as "What we weighed (cost, perf, scale), what we ruled out (why Parquet alone failed), what we're monitoring (query latency post-load)." Shared it in their repo. Next review, AI recs became starting points, not stoppers. Judgment leveled up fast.
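For anyone who wants to borrow the format, the three prompts from that retro map naturally onto a lightweight ADR skeleton. This is a minimal sketch of what the commenter describes, not a standard; the headings, ADR number, and example bullets are illustrative:

```markdown
# ADR-007: Delta Lake table layout for BI workloads
<!-- Number and title are hypothetical examples -->

## What we weighed
- Cost, query performance, and scale tradeoffs for read-heavy BI queries

## What we ruled out
- Parquet alone: why it failed for our access patterns

## What we're monitoring
- Query latency post-load; a sustained regression would make us revisit this
```

Kept to one short file per decision, this stays out of "ceremony" territory while still making the reasoning inheritable from the repo.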

Jono Herrington

That is exactly the pattern. Once the recommendation becomes the argument, the team has already given away the hard part. I like the retro move because it puts reasoning back in the repo where other engineers can inherit it instead of borrowing confidence from the loudest source in the room.

Jack Taylor

This resonates a lot. I’ve seen teams replace one “oracle” with another without realizing the underlying issue never changed.

Before AI, it was the most senior engineer. Then it became blog posts, conference talks, “Google says…”. Now it’s AI. Same pattern—outsourcing judgment instead of building it.

The point about decision frameworks really stands out. In teams where we had clear reasoning documented (why we chose something, what we rejected, what we were watching), AI became genuinely useful. In teams without that, it just amplified inconsistency.

Curious—have you seen effective ways to teach that kind of judgment at scale, especially for mid-level engineers?

Jono Herrington

Exactly. The oracle changed. The dependency didn’t. What teaches judgment is exposing tradeoffs early … what we chose, what we rejected, and what would make us change course.

Mid level engineers grow fastest when they are pulled into real decisions, not just handed conclusions.

Jack Taylor

That makes a lot of sense—especially the idea of exposing tradeoffs early instead of presenting conclusions.

I’ve seen something similar: when engineers are only given decisions after the fact, they optimize for execution. But when they’re involved in the “why” (what we rejected, what might break, what would make us revisit), they start building real judgment.

The hard part I’ve noticed is that it requires slowing down a bit upfront, which many teams resist under delivery pressure.

Have you found ways to introduce that without impacting delivery too much? For example, lightweight ADRs or design reviews?

Jono Herrington

Yes ... lightweight ADRs and focused design reviews have been the best balance for me.

I wrote up the ADR side of that here in case it is useful. The goal is not more documentation. It is making tradeoffs, rejected options, and revisit conditions visible without turning the process into ceremony.

jonoherrington.com/blog/how-to-wri...

Jack Taylor

This is a great framing—especially the idea that the goal isn’t more documentation, but making tradeoffs and revisit conditions visible.

That’s something I’ve seen missing in a lot of teams. Decisions get recorded, but the “why” and especially the “what would make us change this later” rarely do.

I like how you’re keeping it lightweight as well. In my experience, once it starts feeling like process or ceremony, people stop engaging with it.

Going to take a closer look at your write-up—curious how you structure those ADRs in practice to keep that balance.

Daniel Nwaneri

The oracle framing is the sharpest part of this.
Stack Overflow was an oracle. The senior engineer who'd been around longest was an oracle. Conference talks from FAANG engineers with entirely different constraints — oracle. The source kept changing. The dependency never did.
What AI did is make the pattern undeniable. You can't blame ChatGPT for a culture that was already outsourcing judgment. It just removed the friction that was hiding it.
The "junk drawer with a CI/CD pipeline" line is the one I keep coming back to. I've written about the token economy problem in agent systems — the gap between what gets measured and what actually compounds as debt. Same structure here. Teams optimize for output because output is legible. Culture isn't. The dashboard looks clean. The codebase tells the real story.
The question I'd push on: at what point does the oracle become load-bearing? The senior engineer who has all the answers isn't just a dependency — the team has stopped building the muscle to operate without them. When the oracle leaves, or gets replaced by a model, you don't just lose the answers. You find out the reasoning was never in the room.
That's the part most teams don't see coming.

Jono Herrington

That is the part that gets expensive fast. Once the oracle becomes foundational, the team starts confusing access to answers with actual capability. Everything looks fine right up until the oracle is gone, and then you realize nobody built transfer, only dependency.

Daniel Nwaneri

Confusing access to answers with actual capability.

That's the line. The oracle doesn't just answer questions, it quietly becomes the reason nobody develops the judgment to answer them without it. The dependency is invisible until the transfer moment exposes it. By then it's too late to build what should have been built the whole time.

Varsha Ojha

I’ve seen this happen.
Same tools, same AI, and completely different outcomes depending on the team.
That says a lot about the culture underneath.

Jono Herrington

Yep. Same tool, different outcome is usually the giveaway. When one team gets leverage and another gets drift, the variable usually isn’t the model. It’s whether the team already had a shared way to think.

Alois Sečkár

I agree with one "but": if you don't have personal experience with a tool, you have to rely on someone else's opinion and trust their judgment instead of your own. Unless you want to trial-and-error every possible option, which is usually outside the timeframe. And with the speed of software development, it is practically impossible to keep up. So you actually have to reach for "oracles" all the time.

Of course, "AI recommended" is not a valid reason itself. It is a summary that must be followed with a list of arguments why you agree that the recommendation makes sense to you.

Kuro

Great diagnosis. Oracle dependency is real and predates LLMs.

One dimension I'd add: interface friction matters. Stack Overflow forced you to read multiple answers, compare them, and evaluate context — there was enough friction to accidentally build judgment. ChatGPT removes that friction entirely. One question, one confident answer, no comparison required.

The fix isn't "use AI less" — it's changing the interface. A tool that shows you three conflicting approaches with tradeoffs forces engagement. A tool that gives you one answer enables exactly the abdication you're describing.

I've been building a personal AI agent and found the same pattern at the system level: removing all constraints doesn't make the agent better, it makes it generic. The constraints that force reasoning are the ones that produce quality output.

Jono Herrington

The interface point is real. Friction used to accidentally train discernment because people had to compare, filter, and translate. A single polished answer skips that workout. The risk is teams start confusing response quality with decision quality.

Pavel Ishchin

Yeah, this feels underrated. One polished answer just feels done. I keep seeing people stop there instead of thinking things through. Any way to change that without them just ignoring it?

Doug Wilson

I absolutely love this. Well said.

For me, this is the scene in Braveheart with the young William and his uncle Argyle.

First, learn to use ... this [taps William's head],
And then I'll teach you to use ... this [points to the sword].

Thinking is such an important skill (along with imagination) and still the sole province of human beings.

Jack

"The oracle was always there" hits hard. AI didn't introduce intellectual laziness, it just made it frictionless and 24/7. The real diagnostic question isn't what tool your team reaches for, but whether they can reason without one. Culture is built in the "why" questions, not the policy docs.

Jono Herrington

Yes. A lot of teams think the risk shows up when someone uses the tool. It usually shows up earlier, when nobody asks them to explain the tradeoff in plain English. By the time policy gets involved, the culture already made its choice.

klement Gunndu

The oracle-as-dependency framing is sharp. The teams that struggle most aren't using AI badly — they never had a decision framework to begin with, and AI just made that gap visible faster.

Jono Herrington

That is the part I keep coming back to. Teams do not build judgment by being given answers. They build it by being made to surface reasoning.

Once that habit is missing, the oracle can be a principal engineer, a conference talk, or ChatGPT. Different interface ... same dependency.