Every engineering team I talk to is using AI. Nobody's debating that anymore.
What nobody's talking about is that every developer on the same team is using it differently.
The Problem Nobody Admits
Ten developers. Ten different setups. Ten different rules files. Ten different prompting habits. Same codebase, same tickets, same sprint - ten different approaches to how AI touches the code.
One dev has a meticulous CLAUDE.md with architecture decisions, testing conventions, and code style rules. The dev next to them has nothing - just vibes and a blank context window. Both ship code. Both pass review. The inconsistency is invisible until it isn't.
AI didn't create this problem. It amplified what was already broken: the lack of shared engineering standards that actually get enforced.
Three Things That Break
No visibility. You don't know how your team uses AI. Who follows best practices. Who doesn't. What gets reviewed. What ships blind. Engineering managers have dashboards for deployment frequency, test coverage, PR cycle time - but zero visibility into how AI is shaping the code their team writes every day.
No consistency. Same ticket, ten different approaches. One dev asks the AI to write tests first. Another skips tests entirely and asks for the implementation. A third gives the AI the full architectural context. A fourth gives it nothing. The output varies wildly - not because the AI is inconsistent, but because the humans are.
And this isn't just about different code styles. It's about where in the codebase the change happens, what the scope should be, why this approach and not another. There are a thousand ways to solve the same problem. The AI will confidently execute any of them. Without a shared standard for how your team makes these decisions, every AI-generated PR becomes a review burden. Someone has to undo, redo, or reshape work that was technically correct but architecturally wrong. That's rework at scale, and it compounds fast.
Knowledge walks out. When a developer leaves, their context, patterns, and shortcuts leave with them. The rules they built, the prompts they refined, the architectural decisions they encoded in their local setup - gone. The next hire starts from zero. This was already a problem before AI. Now it's worse, because the amount of implicit knowledge in a developer's AI setup is massive and completely undocumented.
Why This Is Harder Than It Looks
The obvious reaction is "just create a shared rules file." That's where most teams start. It's also where most teams stop.
A shared rules file in a repo solves the format problem but not the enforcement problem. Nobody checks if developers actually use it. Nobody knows if it's up to date. Nobody knows if the dev who joined last month even knows it exists.
And rules are just the beginning. What about which models are approved? What about which phases of work should use AI and which shouldn't? What about cost? What about the architectural decisions that should constrain how AI generates code in your specific codebase?
Standardizing AI in an engineering team isn't a file. It's a system. And most teams don't have one.
What Actually Needs to Happen
I think about this in three layers:
Layer 1: Shared context. Every developer working on the same codebase should start with the same foundational context. Architecture decisions, code conventions, testing strategy, dependency rules - this isn't optional, and it shouldn't depend on individual setup. If your ADRs exist but only two people know where they are, they don't exist.
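As a deliberately minimal illustration of what "not optional" could mean in practice, here's the kind of check a team could run in CI so that shared context is a property of the repo rather than of individual setups. The file name (CLAUDE.md) and the required section headings are assumptions for the sketch, not a prescription:

```python
"""Minimal sketch: verify a shared AI context file exists and is complete.

Assumptions: the team keeps shared context in CLAUDE.md at the repo root
and agrees on a handful of required sections. Both are placeholders -
swap in whatever file and sections your team actually uses.
"""
import sys
from pathlib import Path

CONTEXT_FILE = Path("CLAUDE.md")   # assumed location of the shared context
REQUIRED_SECTIONS = [              # assumed section headings
    "Architecture decisions",
    "Code conventions",
    "Testing strategy",
    "Dependency rules",
]

def main() -> int:
    if not CONTEXT_FILE.exists():
        print(f"missing {CONTEXT_FILE}: shared AI context is not optional")
        return 1
    text = CONTEXT_FILE.read_text(encoding="utf-8").lower()
    missing = [s for s in REQUIRED_SECTIONS if s.lower() not in text]
    for section in missing:
        print(f"{CONTEXT_FILE}: missing required section '{section}'")
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(main())
```

The specific check doesn't matter. What matters is that the shared context fails the build when it's missing or stale, instead of depending on whoever happens to have a good local setup.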
Layer 2: Guardrails. Not everything should be delegated to AI. Some decisions require human judgment. Some code paths are too critical for unsupervised generation. The team needs to define where AI adds value and where it adds risk - and enforce that distinction, not just document it.
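Enforcing that distinction can be unglamorous. Here's one hedged sketch: a check that fails when a change flagged as AI-assisted touches a protected part of the codebase. The protected prefixes are made up, and how a change gets flagged as AI-assisted (a PR label, a commit trailer) is a team convention I'm not prescribing here - assume CI only runs this script on flagged changes:

```python
"""Minimal sketch: block AI-assisted changes to critical code paths.

Assumptions: the team maintains a list of protected path prefixes, and
this check only runs on changes already flagged as AI-assisted (the
flagging mechanism - label, trailer, tool metadata - is up to the team).
"""
import subprocess
import sys

# Assumed: paths considered too critical for unsupervised generation.
PROTECTED_PREFIXES = [
    "payments/",
    "auth/",
    "db/migrations/",
]

def changed_files(base: str = "origin/main") -> list[str]:
    """Files changed relative to the base branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def violations(files: list[str]) -> list[str]:
    """Changed files that fall under a protected prefix."""
    return [
        f for f in files
        if any(f.startswith(prefix) for prefix in PROTECTED_PREFIXES)
    ]

if __name__ == "__main__":
    hits = violations(changed_files())
    for path in hits:
        print(f"protected path touched by an AI-assisted change: {path}")
    sys.exit(1 if hits else 0)
```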
Layer 3: Visibility. You need to know what's happening. Not surveillance - signal. Which parts of the codebase are AI-touched. What patterns are emerging. Where the inconsistencies are. Without this, you're managing a process you can't see.
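Even a crude signal beats none. As a sketch, assume AI-assisted commits carry some marker in the commit message - I'm using a hypothetical "AI-Assisted:" trailer here, and aggregating by top-level directory, both of which are arbitrary choices you'd replace with your team's actual convention:

```python
"""Minimal sketch: report which parts of the codebase are AI-touched,
based on git history.

Assumption: AI-assisted commits carry a marker in the commit message
(a hypothetical 'AI-Assisted:' trailer) - substitute whatever metadata
your team's workflow actually leaves behind.
"""
from collections import Counter
import subprocess

MARKER = "AI-Assisted:"  # hypothetical commit trailer

def ai_commits() -> list[str]:
    """Hashes of commits whose message contains the marker."""
    out = subprocess.run(
        ["git", "log", "--format=%H", f"--grep={MARKER}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()

def touched_dirs(commit: str) -> set[str]:
    """Top-level directories touched by a commit."""
    out = subprocess.run(
        ["git", "show", "--name-only", "--format=", commit],
        capture_output=True, text=True, check=True,
    )
    return {line.split("/")[0] for line in out.stdout.splitlines() if line}

if __name__ == "__main__":
    counts = Counter()
    for commit in ai_commits():
        counts.update(touched_dirs(commit))
    for directory, n in counts.most_common():
        print(f"{n:5d}  {directory}")
```

That's signal, not surveillance: it tells you where AI is concentrated in the codebase, not what any individual developer did on a given afternoon.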
Most teams have none of these layers. Some have a partial Layer 1. Almost nobody has Layer 2 or 3.
The Real Competitive Advantage
Here's what I think most people miss: the competitive advantage of AI in engineering isn't speed. Speed is table stakes. Everyone gets faster.
The advantage is in how consistently and reliably you use it across the entire team. The team that ships fast with shared standards will outperform the team that ships fast with ten different approaches - because the second team is accumulating invisible inconsistency that compounds over time.
Faster chaos is still chaos. It just takes longer to notice.
This Isn't About Tools
I'm not pitching a solution here. I'm describing a problem I see everywhere and think is underexplored.
The industry spent the last two years talking about adopting AI. The next conversation needs to be about governing it. Not in a bureaucratic, compliance-heavy way - in a practical, engineering-first way. Shared context, clear guardrails, basic visibility.
Every team that adopted AI without standardizing it is running an experiment with no controls. Some of those experiments will work out fine. Some won't. The ones that don't will be very expensive to fix - because the technical debt from inconsistent AI usage is invisible until it's systemic.
The question isn't whether your team should use AI. It's whether they should all use it the same way.
I think the answer is obvious.