I got a DM from a principal engineer last week. He's spending more than $2,000 a month on AI tokens. Not because he's lazy. Because he's figured something out. The tool isn't giving him 10x output. It's giving him exponential output. He's orchestrating agents, chaining workflows, building systems that compound.
I know a tech lead who turned his AI license back in. Said he doesn't want to use it. Wants to stay "pure."
I have an issue with that. Not because he's missing a productivity tool. Because he's missing the current.
This isn't about speed. It's about understanding how the work gets done now. If you're not using AI in 2026, you're not staying current. Full stop.
No One Agrees How
These two are both senior leaders. Both technical. Both facing the same industry shift. And they're on opposite sides of the same decision.
That's the fragmentation. Nobody has figured it out yet. New capabilities ship every day. We're all learning together, and there is no AI expert. Just people who have spent more time with it and people who haven't.
Scale that up to a team, six months in. Ask a different question: what does a good AI-assisted PR look like? And the room fractures.
Some engineers prompt everything and review nothing. Some use it for tests only, line by line, suspicious of anything else. Some copy-paste wholesale. Others cherry-pick like they're editing someone else's draft.
Six different answers. Six versions of "how we do AI." On the same team.
Access Without Standards
When AI first rolled out, we locked everything down tight. GitHub Copilot. Microsoft Copilot. That was it. No Cursor, no Claude. We cut everything else until policies were in place, governance was in place, and we weren't just giving our IP away to agents.
So I started writing markdown files with my questions and having Copilot review them. Worked around the system until I could prove value. Eventually automated quarterly planning and saved 280 hours.
But here's what I learned from that workaround. The tool wasn't the hard part. The hard part was that no one had defined what "good" looked like for AI-assisted work. We had access, but we didn't have standards. So everyone improvised.
Six months later, we had six different workflows. Six different levels of review. Six different definitions of when AI was appropriate and when it wasn't.
The dashboard showed adoption. Credit usage climbing. Active users rising.
What it didn't show was whether anyone could explain their approach to a new engineer. Whether the team was aligned on what success looked like. Whether we were building one coherent system or six parallel experiments that happened to share a codebase.
AI Multiplies Whatever Pattern Is Already There
A junior engineer joining that team doesn't know which pattern is blessed. They don't know what "good" looks like because no one wrote it down. So they pick the one that seems fastest, or the one their last mentor used, or the one the AI suggested first.
And the codebase fills with inconsistent patterns faster than any human could have produced them.
People like to say AI is non-deterministic. Same prompt, different outputs. Unpredictable.
I would argue humans are non-deterministic.
Ask an engineer the same question three times. Morning, afternoon, after a bad deploy. You'll get different answers depending on stress, sleep, what they ate, whether they just got out of a brutal stakeholder meeting.
We've always dealt with variability. The solution was never "stop working with humans." It was standards. Accountability. A clear definition of "good" that transcends individual mood.
Same solution for AI.
How We Did It Differently
My current team doesn't have six versions of "how we do AI."
We did the work first. Wrote down what good looks like: what good error handling looks like, how we structure state, when to abstract and when to duplicate. Standards you can articulate.
Then we built lint rules, architectural tests, and AI workflows trained on those patterns.
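To make that concrete, here's a minimal sketch of what one of those lint rules might look like, assuming a TypeScript codebase linted with ESLint. The rule name and the standard it enforces are illustrative, not our actual ruleset.

```typescript
// no-literal-throw.ts
// Illustrative custom ESLint rule (hypothetical name, not our real ruleset).
// It encodes one written standard, "errors are Error instances, never raw
// literals," so the check runs on every PR instead of living in someone's head.
import type { Rule } from "eslint";

const noLiteralThrow: Rule.RuleModule = {
  meta: {
    type: "problem",
    docs: {
      description: "Throw Error instances, not string or number literals",
    },
    schema: [], // no options: the standard is the standard
  },
  create(context) {
    return {
      // Visits every `throw` statement in the linted file.
      ThrowStatement(node) {
        if (node.argument?.type === "Literal") {
          context.report({
            node,
            message:
              "Throw an Error instance, not a literal. See the team error-handling standard.",
          });
        }
      },
    };
  },
};

export default noLiteralThrow;
```

The specific rule matters less than the shape: a standard someone wrote down, turned into a check that runs on every PR, so AI-generated code gets held to the same bar as human-written code.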
The tool started from our standards. Not generic training data.
That's the difference. Six months in, we don't have six versions of "how we do AI." We have one version of "how we do engineering," and AI operates inside it.
The Real Question
The question isn't whether your team is using AI.
It's whether they can explain their approach to someone who just joined yesterday. Whether there's a pattern. Whether the pattern is written down. Whether anyone with judgment has looked at it and said yes, this is what we want more of.
That's the work that happens before the tool. That's the work most teams skip.
Because it feels slower than just letting people figure it out. Because it doesn't show up on a dashboard. Because it requires someone to make a judgment call about what "good" looks like, and not everyone wants to be that person.
But six months from now, when you have six versions of "how we do AI" and a codebase no one fully understands, you'll wish you'd had the conversation before the tools made the mess bigger.
Your engineers are using AI. The question is whether they're using it well enough to teach someone else.
If the answer is no, the dashboard is lying to you.
One email a week from The Builder's Leader. The frameworks, the blind spots, and the conversations most leaders avoid. Subscribe for free.