For a long time, “senior developer” was a fairly consistent signal. You expected someone who could hold a large architecture in their head, write clean code with low defect rates, debug almost anything, and reason about performance without guesswork. That bundle made sense because the hardest part of shipping software was often the execution layer: translating intent into correct, maintainable code at speed.
That bundle is breaking.
AI-assisted development is compressing the cost of producing plausible, working code. Not always. Not uniformly. But enough that “I can ship a lot of code quickly” is no longer a reliable proxy for deep seniority. In many teams, velocity metrics are starting to measure who is best at driving the tool, not who is best at building systems that survive contact with reality.
What AI Is Actually Commoditizing
AI is not replacing engineering. It is discounting a specific slice of it: first-pass implementation and the mechanical parts of refactoring. The tool is good at producing code that looks right, compiles, and often passes superficial tests. That changes the economics of execution.
What does not get discounted at the same rate is integration into a real system with real constraints: data contracts, failure modes, security boundaries, observability, and long-term maintenance. In practice, the bottleneck shifts from typing to supervision. You spend less time writing and more time specifying, verifying, reviewing, and correcting.
This is why you can see two realities at the same time. Some developers experience dramatic speedups on bounded tasks. Others experience slowdowns inside large, messy codebases because prompting, waiting, and review overhead replace keystrokes, and because the model lacks the local context that makes a patch truly correct.
What Is Rising in Value
Problem decomposition and system thinking become the differentiator because they convert ambiguity into an executable plan. When you are dealing with something like regulatory delta detection, the hardest part is not writing code. The hardest part is deciding where the complexity actually lives, and what you must make explicit so the system stays correct as the domain evolves. The choice between a graph database and a simpler model is rarely a “tech taste” debate. It is a tradeoff between query expressiveness, operational burden, debuggability, and change management.
Judgment under uncertainty becomes a senior marker because architecture is mostly irreversible decisions made with incomplete information. Moving from direct graph writes to a changeset-based approach with content hashing is not an implementation detail. It is a bet on how you will observe change, roll back safely, explain behavior to customers, and avoid silent drift. That decision quality is what compounds over months.
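To make the contrast concrete, here is a minimal sketch of the content-hashing side of a changeset-based approach. All names (`content_hash`, `build_changeset`) and the snapshot shape (`{id: record}` dicts) are illustrative assumptions, not the system described above; the point is that change becomes an explicit, reviewable artifact rather than a silent mutation.

```python
import hashlib
import json


def content_hash(record: dict) -> str:
    """Stable hash of a record's canonical JSON form (illustrative)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def build_changeset(old: dict, new: dict) -> dict:
    """Diff two {id: record} snapshots into an explicit changeset.

    Instead of writing directly to the store, you emit this object,
    which can be logged, reviewed, applied, and rolled back.
    """
    old_hashes = {k: content_hash(v) for k, v in old.items()}
    new_hashes = {k: content_hash(v) for k, v in new.items()}
    return {
        "added": sorted(new_hashes.keys() - old_hashes.keys()),
        "removed": sorted(old_hashes.keys() - new_hashes.keys()),
        "modified": sorted(
            k for k in old_hashes.keys() & new_hashes.keys()
            if old_hashes[k] != new_hashes[k]
        ),
    }


# Toy demonstration: one record changed, one added.
old_snapshot = {"a": {"text": "x"}, "b": {"text": "y"}}
new_snapshot = {"a": {"text": "x"}, "b": {"text": "z"}, "c": {"text": "w"}}
changeset = build_changeset(old_snapshot, new_snapshot)
```

Because every applied change is a named, hashed delta, "why did this node change?" has an answer you can show a customer, and drift between environments is detectable rather than silent.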
Context and domain mastery become a moat because they are earned, not generated. If you understand how CELEX identifiers behave in practice, how MiCAR compliance maps to document reality, or how jurisdictions interpret rules differently, you carry constraints that materially shape the architecture. AI can help you express that knowledge. It cannot reliably invent it. Without domain context, you get confident code that is wrong in the ways that matter.
Technical leadership becomes central because building systems is increasingly a multiplayer game. The question is whether you can create a design that other people can implement without constant back-and-forth, and whether you can write specifications that converge rather than fork. This is why a workshop like SDD Pills matters. It trains decision-making and clarity, not syntax.
Mentoring and knowledge transfer become leverage because the highest-value output of a senior engineer is often the improvement of everyone else’s output. AI amplifies this. Teams that learn how to bound AI usage with clear contracts, acceptance criteria, and review discipline get compounding returns. Teams that treat AI as an oracle get compounding debt.
The Uncomfortable Truth: Two Axes Have Split
There are now two skill axes that used to correlate and no longer do.
One axis is technical depth: how well you understand systems, tradeoffs, failure modes, and the long-term consequences of design choices.
The other axis is execution speed: how quickly you can produce working code.
Historically, depth and speed often moved together. Deep engineers tended to execute quickly because they saw the path. Today, you can get high speed with low depth by delegating thinking to the tool. That can look senior on dashboards and in weekly updates. It is not senior if the output is brittle, unobservable, and expensive to maintain.
The inverse also exists: high depth with lower raw output speed can still be very senior if the person consistently makes decisions that reduce risk, eliminate classes of bugs, and increase team throughput.
What This Breaks in Hiring and Promotion
Many organizations still reward visible output: commits, tickets closed, apparent velocity. AI makes these signals noisier because the cost of producing code has dropped, while the cost of validating correctness has often increased. The net effect is that the old metrics over-credit the wrong behaviors and under-credit the work that actually keeps systems stable.
The evaluation problem is that “code shipped” is no longer tightly coupled to “engineering done.” A senior engineer in 2026 is often the person who prevented the incident you never had, removed an entire category of future work by designing the right abstraction, or wrote a spec that made five people productive instead of confused.
What to Measure Instead
The most useful seniority markers become visible if you look for decision quality, not output quantity.
A senior engineer can take an ambiguous problem and produce a specification that is testable and unambiguous. They can make uncertainty explicit by stating what is known, what is assumed, and what the cost of being wrong looks like. They consistently surface non-functional requirements early, especially observability, maintainability, and security, because those are the constraints that explode later.
They use AI as a bounded tool. They know when to ask it for a scaffold, when to demand alternatives, and when to reject a suggestion because they understand the scaling and failure modes. Patterns like Planner, Executor, Reviewer work when they are treated as control systems with clear acceptance criteria, not as theater.
Why “Senior” Is Drifting Toward “Principal”
Role expectations are shifting. Senior used to mean “I can personally deliver complex work.” Increasingly it means “I can make the right decisions and increase the output quality of everyone around me.” That is closer to what many companies used to call principal or architect.
This shift is healthy if organizations adapt their evaluation criteria. It is painful if they do not. People whose main advantage was fast execution will feel the floor drop out, because execution has been discounted. People who were already strong in decomposition, judgment, and leadership will become more valuable, because those skills are now the constraint.
What I’m Seeing in Teams
The developers adapting best to AI-assisted development are usually the ones who already had strong mental models and strong taste. They can turn ambiguity into constraints, and constraints into evaluation. They do not confuse “working code” with “correct system.” They treat AI output as a hypothesis that must be verified against invariants.
The developers struggling are often those who outsource thinking. They can generate a lot of code quickly, but they cannot defend why the design is correct, what it will cost to operate, or how it will fail.
If you are seeing a blur between depth and apparent execution speed, that blur is real. The solution is not to ban AI or to worship it. The solution is to change what you reward, and to interview and promote for the skills that actually compound.