Before the advent of AI, and honestly, that wasn't too far back, when a software engineer wrote code, there were a lot of decisions being made at each step.
How do you name variables? Which design pattern fits best? What keeps the linter happy, and why? (The principal engineers on your team set those rules, the wise druids guiding the path to quality.) What design patterns didn't work before? What constraints exist? Maybe there is a third-party vendor tie-up, or an API with rate limiting. Maybe you learned that a library comes with bloat. The team lead suggested a coding practice last week. Someone got paged at 3 AM three years ago because of this mistake, and you are avoiding that. The CI broke yesterday, so you are adding guardrails.
I could go on. By no means is this exhaustive.
But here's the point: a lot of assumptions and decisions get baked into every piece of code you push. All of that reasoning lives somewhere: in your head, in the code itself, in your team's collective memory.
Then AI Showed Up
Now developers are prompting GitHub Copilot, ChatGPT, or Claude to fill in boilerplate, write a test, complete a function, and sometimes even build an entire project from a single prompt. (Yes, it's iterative. Yes, context windows are bigger now. You know what I mean.)
The code is right there.
Leadership sees ROI (return on investment). Developers are asked to ship faster; I am seeing companies expect at least 3x. Soon, AI gets added at every stage: planning, coding, testing, code review, and deployment. Developers just eyeball it and ship it.
The entire process of making those decisions? Whoosh. Gone. But that's not the biggest loss.
What Actually Disappears
Because of the urgency to deliver, the context in a developer's head starts diminishing. First, there's information overload: the sheer volume of what AI generates. Second, company expectations leave no room to sit with the code, Google around, or think deeply. No time to cross-reference architectural docs, retrospectives, or what worked before.
Developers merge code they don't understand, written by a system that doesn't understand it, into production systems that depend on it.
Meanwhile, developers sleep peacefully (sort of), satisfied they shipped fast but mentally overwhelmed. Not really thinking anymore.
Then 3 AM happens.
The Middle of the Night Call
Something breaks in production. The on-call engineer wakes up and now needs to fix the issue. This becomes extremely challenging: the "why" behind every line of code is a black box. The AI made reasonable suggestions based on its training data, not on what has actually worked for your team. Context is missing everywhere.
A senior engineer used to reason through this quickly. Tracing the path from failure to root cause is harder with half-baked context. The urgency never allowed time to read the code with intent, step through it, understand it, and note what was happening as it was written. That increases mean time to resolution, which reduces the ROI leadership cares about. Overall, debugging gets harder, production issues increase, and domain knowledge starts shrinking.
So What Actually Works?
There's no escaping AI, and it has real perks. So the solution isn't to reject it. The solution is something harder: slow down enough to read it with intent.
The same intent that built your strong developer instincts in the first place. Keep it intact.
When you generate code with AI, engage with it.
Question it. Does this align with how we've solved this before? With our constraints? With patterns that work for our team, not just statistically? Read previous code. Check architectural docs. Pull up that retrospective from the production failure last quarter and make sure you are not repeating it. Document what you are pushing and why, in your voice, in a way that helps teammates debug faster on-call.
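As a sketch of what that kind of documentation can look like in practice, here is a minimal Python example. Everything in it is hypothetical (the vendor rate limit, the past incident, the specific numbers); the point is the comment that carries the "why" for the next on-call engineer, not the retry logic itself.

```python
import time

# WHY THIS EXISTS (context for on-call):
# - Hypothetical: our vendor API rate-limits us, so bursts of retries
#   make outages worse instead of better.
# - Hypothetical incident: unbounded retries once amplified a vendor
#   outage and paged on-call at 3 AM. Hence the capped backoff below.
def fetch_with_backoff(fetch, max_attempts=4, base_delay=0.5):
    """Call fetch(); on failure, retry with capped exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up so the caller's alerting sees the failure
            # Exponential backoff (0.5s, 1s, 2s, ...), capped at 5 seconds.
            time.sleep(min(base_delay * 2 ** attempt, 5.0))
```

An AI assistant can generate the function body in seconds; the comment block is the part only you can write, because it encodes your team's constraints and history.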
Yeah, it takes time. It's not 3x faster anymore. It might be 1.5x or 2x.
But here's what actually happens: bugs go down, production incidents go down, and mean time to resolution goes down, because someone understood the code well enough to reason about it. The team's mental model stays intact because knowledge still flows; it just flows faster now instead of being eliminated. That adds up to reliable software.
The Choice You Are Really Making
This is one of those situations where the choice between being fast and operating with intent becomes obvious only in retrospect, usually at 3 AM with production down. Better not to wait for that lesson.
So read the AI-generated code with intent. Every time. Not because it's the right thing to do. But because your future self, the one getting paged at 3 AM, will thank you for it. Your team will thank you for it. And your actual ROI will be way better than leadership's spreadsheets predict.
What's your take on this? Have you felt this tension between speed and intent on your team? Drop your thoughts in the comments.