This article (talk) is based on an idea I have been thinking about for a while: while everyone is talking about AI, either from a hype perspective or a doomsday perspective...where is the middle?
Who is actually looking at the practicalities of AI, the gaps we have in organisations, and, above all, the human aspect of the new Intelligence Age?
So, bearing in mind that it is written as a talk, please enjoy my partially formed idea (it still needs some editing). I think the key point of the talk, that understanding code is the new metric we should be paying attention to, is an important one.
The talk (draft):
The Inflation of Syntax
“In the age of AI, the most dangerous engineer in your organisation is no longer the slowest one. It’s the fastest one.”
- The engineer who can ship features at incredible speed.
- The team with the highest velocity.
- The dashboard that looks green every sprint.
These are not as fantastic as they may first sound.
In a world where code is cheap and infinite, speed no longer tells you whether you’re winning. It tells you how fast you’re accumulating something you may not be able to understand, fix, or control.
But I will come to that.
First, let's look at what software engineering used to be, and what has just fundamentally changed.
For 50+ years, everyone in this room was paid to know a secret language. We were paid to know the dictionary. We were paid to speak "Machine."
If a business leader wanted a feature: a new checkout flow, a search bar, a data pipeline, they had a problem. They had intent, but they didn't have the syntax. They needed us to translate their human intent into Java, into SQL, into React. We knew how to say things to machines that other people couldn't.
And because that skill was scarce, we charged a tax for that translation. That tax justified our salaries, our timelines, our agile rituals, and entire careers.
But something fundamental has shifted. Today, the machine speaks its own language. The dictionary is public. The translation tax is rapidly approaching zero.
I want to be clear: this doesn't mean software engineering is done. But it means the most visible part of our historical value, the part that people could see and appreciate - the actual act of turning English requirements into syntax - has been commoditised.
The obvious question we have to answer today is: If coding is no longer scarce, where does the value now lie in software engineering?
The Crisis: Inflation and The Efficiency Trap
I hear a lot of fear about "replacement." But what we are facing isn't a crisis of replacement. It's a crisis of inflation.
Think about basic economics. When you print too much money, the value of the currency drops. When you print code (which we can now do at almost infinite scale), the value of writing a line of code drops.
We are moving from an era of scarcity to an era of abundance. And this leads to the biggest mistake I see companies making right now.
I call it The Efficiency Trap.
Executives look at this abundance and think: "Great! If AI makes developers 50% faster, I can fire 50% of them and get the same output." (For those of you questioning that statement: that line was written by ChatGPT. It’s confident, crisp, and perfectly wrong. At 1.5x productivity, the same output needs only two-thirds of the people, so a 50% productivity gain means you fire a third of your team, not half. It ties in with the rest of this talk beautifully, so I left it in.)
But AI over-confidence aside: This thinking is fundamentally wrong. Executives see the short-term gain, but they’re blind to long-term complexity.
Abundance hides risk.
Each additional line of code, each new microservice, each new integration can quietly accumulate complexity faster than anyone can track it. The system may appear healthy. Dashboards may look green. But underneath, fragility grows.
This is where instincts betray us. We look at this explosion of output and think we need to measure it harder. Track it more closely. Optimise it more aggressively.
In the AI era, velocity is no longer a leading indicator. It’s a lagging one.
But worshipping at the altar of velocity metrics will ultimately be your downfall.
When you optimise directly for speed, you inflate complexity. When you optimise for understanding, for truly knowing what you’ve built and how it behaves, the system flows naturally, and velocity emerges as a by-product.
Teams that focus only on output will eventually stall, not because they’re slow, but because they are afraid to touch what they’ve built.
If you think maintaining your current legacy code is hard, wait until you have to maintain the mountain of code your team is about to generate.
The Pivot: Mean Time to Understanding
In any system, value always shifts to the constraint. If writing code is cheap, abundant, and fast... what is the new bottleneck?
The constraint is no longer writing software, it is understanding what was written.
The constraint is knowing what this generated blob of code actually does, how it behaves, and, more importantly, where it misbehaves.
We need to stop obsessing over velocity and start obsessing over MTTU: Mean Time to Understanding.
DevOps and SRE teams have long used this term to measure the gap between an alert and a diagnosis; it is time the rest of us adjusted and adopted it for the code itself.
MTTU in the context of a developer is the time it takes a human being, who is not the original author, to confidently answer two questions:
- What does this code actually do?
- Where would I look in order to fix it if it breaks?
Here’s the kicker: if you optimise for MTTU...if you design systems and teams to maximise understanding...everything else follows. Velocity, reliability, maintainability, all of it improves naturally.
If you ignore understanding and chase output alone, the system will eventually stall. Not because you’re slow, but because complexity has grown faster than anyone can reason about it.
Every architectural decision you make from now on either compresses MTTU or inflates it.
The ability to quickly understand and verify code you didn't write: this is the new core skill. This is where seniors have transferable skills in this new age.
Your years of pattern recognition, your mental models, your hard-won judgment: they actually pay dividends.
The AI can generate infinite solutions, but you are the one who can look at those solutions and understand what they actually mean for your system.
But with that in mind, let's look at why this metric and principle is so important.
The Diagnosis: Plausible Competence
The first problem is counter-intuitive. The problem is not that AI is bad at coding. The problem is that it is good. AI is good enough to be plausibly competent.
AI is not a junior developer. AI is an infinite yes-man. It biases toward creation.
It will always give you something.
If you ask an AI to solve a problem where the correct engineering answer is "don't write code, just change this config file," the AI will rarely tell you that. It will write you a 500-line script and bypass the config.
It builds what you asked for, not what you meant.
It produces code that looks correct. It uses the right libraries. It follows the right indentation. It might even pass the unit tests (that it wrote for itself of course).
If you've read Thinking, Fast and Slow by Daniel Kahneman, then you know about System 1 (fast, intuitive) and System 2 (slow, deliberate) thinking, and the problem with this plausibly competent code will be familiar to you.
AI-generated code is good enough that it activates our System 1 thinking. We glance at it, it looks like valid code, the logic looks good at-a-glance, and we nod. "Looks great."
But this is the trap. We trust the plausibility, so we turn off our brain.
We stop simulating the code in our head. We accept the solution without stepping back to verify if it fits the nuance of our specific system. We never engage our slow and deliberate thinking.
And because we trust it too easily, we let it do things we would never let a human do. Which leads to the second trap.
The Trap: Complexity Sprawl
Here is how this behaviour shows up in the real world:
Imagine a team asks for a simple feature. Let's say, a web form to submit a username and an email address.
In the old world, a developer would groan, write a single file with (hopefully) a little validation logic, and ship it. Something like the sketch below.
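For contrast, this is roughly what that single-file version might look like. It is a minimal sketch, not code from a real project: the framework (Express), route name, field names, and validation rules are all assumptions.

```typescript
// A minimal sketch of the "old world" single-file version, assuming Express.
// Route name, field names, and validation rules are illustrative only.
import express from "express";

const app = express();
app.use(express.json());

app.post("/signup", (req, res) => {
  const { username, email } = req.body ?? {};

  // Just enough validation for the feature that was actually asked for.
  if (typeof username !== "string" || username.trim() === "") {
    return res.status(400).json({ error: "username is required" });
  }
  if (typeof email !== "string" || !/^\S+@\S+\.\S+$/.test(email)) {
    return res.status(400).json({ error: "a valid email is required" });
  }

  // Persist or forward the submission here (omitted for brevity).
  return res.status(201).json({ ok: true });
});

app.listen(3000);
```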
But now? The developer asks the AI to "handle the submission robustly."
The AI scaffolds. It generates a separate microservice for validation. It adds a retry queue for failed submissions (because queues are "robust"). It spins up a serverless function to sanitise input. It adds a distributed logging service just for completeness.
Technically? All of it works. Syntactically? It is perfect. But here is the trap.
AI is a Local Optimiser.
It looks at the ticket "make the form robust" and it solves that specific problem with maximum force. It doesn't care about the rest of your system.
That is your job. You are the global optimiser. You are the architect.
Your job is to know how that form fits into the user session, the billing system, and the legacy database.
But because the AI just generated five new services and three queues, it has created a complexity spike. To judge if this solution is correct, you now have to load all that new complexity into your head. You have to map out the failure modes of five services instead of just one file.
This is cognitive bloat. And this is where the danger lies.
When the solution becomes too complex to hold in your working memory, you stop verifying the architecture. You stop asking "Does this fit?" and you start accepting "It works."
You surrender your role as architect because the blueprint just became too messy to read.
The Constraint: Architectural Verification
This brings us to the hard limit. The limit isn't how fast we can write code. The limit is Serialisation Bandwidth.
To be the architect, to ensure a solution is safe, compliant, and correct, you must be able to serialise the logic into your brain. You have to be able to "run" the system in your mind. If I change X here, does it break Y over there?
We now have infinite generation bandwidth (the AI) feeding into fixed serialisation bandwidth (your brain).
Serialisation bandwidth is your brain's ability to hold the system in memory. No matter how fast AI types, if your mind can't trace the logic, the system is unsafe.
Now, the AI worshippers and optimists in the room (and the vendors selling you AI tools) will say: "That's fine! We'll just give the AI the context. We'll feed it the docs. We'll prompt it better."
This is the most dangerous lie in our industry right now. You cannot prompt for context you don't know is relevant yet.
Context in software is not a static database you can just upload to a vector store. Context is dynamic.
You don't know that the "User" object has a hidden dependency on the "Billing" object in another project until you see the specific way the code tries to mutate it. You don't know that this specific retry logic violates a new compliance rule until you see the queue being built.
Context is the collision between the code and the World.
The AI only sees the code. You are the only one who sees the real-world context.
If you cannot understand the code faster than it is created (and you can't), you lose the ability to spot those collisions. You lose the ability to govern the system.
The bottleneck isn't typing speed. The bottleneck is architectural verification speed.
Good vs Bad Velocity
And yet, look at our Jira boards. We still measure progress using Velocity. Features per sprint. Tickets closed. Lines shipped. That mismatch is how we manufacture cognitive debt.
In the new World: "Good velocity" is shipping features while keeping Mean Time to Understanding flat. "Bad velocity" is shipping features by spiking MTTU.
If you ship a feature in four hours using AI, but it takes a senior engineer three days to understand how to fix it when it breaks next week, then your MTTU has grown far faster than your code velocity.
If your MTTU is too high, then you are in cognitive debt.
And this debt doesn't just stay in code. It leaks.
It shows up as:
- Longer incidents (because nobody knows where the root cause is).
- Slower onboarding (because new hires can't read the map).
- Feature paralysis (because everyone is afraid to touch the "magic code").
To further illustrate this, let me ask you something:
- How many of you have merged a PR in the last month that you didn't fully understand?
- Keep your hand up if you told yourself you'd come back and review it properly later.
- Keep your hand up if you actually did.
Yeah. I didn’t do it either!
This is why we need a new protocol.
The Protocol: Spec-Driven Development
Don't worry, I am not talking about waterfall here just because I mentioned specs... in case you were about to zone out!
So far I've told you what's broken and why. But I don't want you walking out of here just feeling anxious. I want you to have a protocol you can use tomorrow. So let me get specific.
We have to change the protocol.
We need to move to AI- and MTTU-friendly Spec-Driven Development.
In this new world, we don't start by reading code. We start by establishing the structure.
This happens in three distinct layers:
Layer 1: The Micro-Spec (Human Priming and Map)
This is your survival tool. This is what keeps you sane day-to-day. It is strictly for you.
It is the "Cognitive Handle." Before you generate a single line of code, you define the high-level intent in your head (or a sticky note).
- Goal: "Add a user archive feature."
- Constraint: "Must be reversible."
- Architectural Fit: "Does this belong in the User Service or the Admin Service?"
The Micro-Spec is your "BS and bad-fit Detector."
It allows you to look at the AI's output and instantly answer: does this code fit the shape of our system? Does this code look to have the functionality we asked for?
If the AI tries to rewrite the database schema for a simple UI change, the Micro-Spec tells you to reject it immediately.
Layer 2: The Main Spec (The Shared Contract)
This is what keeps everyone aligned.
Once you know the shape of the code, you need the substance.
This is the Main Spec. This is the document that both you and the AI rely on.
- For the AI: It is the instruction manual. It contains the acceptance criteria, the specific edge cases, the input/output definitions.
- For You: It is the checklist for Technical Verification.
After the AI generates the code (and you fix the obvious bugs), you switch from "Architect" to "Auditor." You compare the code against the Main Spec.
"Did it handle the null case we asked for?", "Did it implement the specific retry logic we defined?". You verify the Technical Requirements here.
If the Micro-Spec checks the Soul of the code, the Main Spec checks the Body.
Layer 3: The Global Context (The Subconscious)
Finally, we have the force multiplier. This is your evolutionary system: the thing that makes you faster over time.
As we build, we will notice that we keep repeating certain instructions: "Use Tailwind." "Don't use ‘any’ types." "Follow the Repository Pattern."
We don't put these in every Spec. We offload them into Global Context.
This is where tools like CLAUDE.md files or .cursorrules files come in.
We treat these files as our Evolutionary Rule Set.
Every time the AI makes a stylistic mistake, we don't just fix the code. We update the Global Context. We add a rule.
Over time, this automatically pulls the AI closer to your engineering culture.
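As a sketch of what such a file might contain (the first three rules are the examples above; the last is an invented illustration of a rule added after a review finding):

```markdown
# CLAUDE.md (project conventions, illustrative excerpt)

- Use Tailwind for all styling; do not add ad-hoc CSS files.
- Never use the TypeScript `any` type; prefer explicit interfaces or `unknown`.
- All data access goes through the Repository Pattern; no inline SQL in handlers.
- Do not introduce new runtime dependencies without calling them out in the PR description.
```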
As this Global Context matures, the machine becomes an expert at your patterns. You will spend drastically less time reviewing syntax, imports, and folder structures.
So where does that saved time go? It goes to the one thing that can never be put into a context file: The ambiguity of the real world.
No matter how well the AI knows our codebase, it doesn't know our strategy. It doesn't know that the marketing team just changed the launch date. It doesn't know that the users hate that specific modal.
You stop being the Code Reviewer and start being the Reality Reviewer.
The Micro-spec is the soul, the flavour of the code.
The Main spec is the body, the substance of the code.
The Global Context is the subconscious, the guiding principles of the code.
But even with all that, you may be wondering where you are still valuable once the prompts and context are nailed down. What can humans do that AI is very unlikely to be able to do for the next 20 years?
The 3 Human Moats
Once you accept that role shift towards orchestration and evaluation, the "Prompting" argument dies completely. The AI cannot replace you, because the AI cannot cross these three Moats:
Moat 1: The Shadow Architecture (Context)
The AI suggests a solution based on the visible code. You reject it based on the invisible reality.
Example: The Load-Bearing Typo.
The AI scans your API and finds an error message: “Err (500)”. It suggests a "Best Practice" improvement: "Let's make this closer to common terminology: Internal Server Error (500)."
It is better DX, more descriptive and a better fit with industry terminology.
But you know the World Context. You know that the Ops team has a legacy monitoring script that greps the logs specifically for the string “Err (500)”. If it sees that string, it automatically restarts the server.
If you let the AI "fix" that message, the grep fails. The auto-restart fails. And the next time the server hangs, it stays down all weekend (or until you get a call to fix it...AI isn't going to get the call now is it?).
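If you want to picture it in code, here is a sketch (an Express-style error handler; the handler and the monitoring detail come from the story above, not from a real system):

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Sketch of the "load-bearing typo": the handler is illustrative, but the point
// is that the exact string has become an external contract.
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  // DO NOT reword this message. A legacy Ops monitoring script greps the logs
  // for the literal string "Err (500)" and auto-restarts the server when it
  // appears. "Internal Server Error (500)" reads better, but it breaks that grep.
  console.error("Err (500)", err.message);
  res.status(500).send("Err (500)");
});
```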
The AI sees the Text. You see the real world dependency. You cannot prompt for the bash script running on a server the AI doesn't know exists until it becomes relevant.
Moat 2: The Problem Definition (Value)
The AI answers the prompt exactly. You answer the implied need.
The business asks for "fast search."
- The AI spins up Elasticsearch for real-time responses, costing thousands.
- You know that "fast" just means "under one second," and a simple SQL query works for free.
With AI you are not doing less engineering, you are doing higher-order engineering.
Moat 3: The System Pessimist (Side Effects)
The AI is a master of the Local Fix. If you show it an error, it will make that error go away. It will write defensive code. It will catch the exception. But it is blind to Systemic Side Effects.
It fixes a database timeout by adding a retry loop.
It doesn't know that three layers up, the frontend also retries, and the load balancer also retries. It fixed the error locally, but it just created a Retry Storm that will take down your entire platform during the next traffic spike.
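The arithmetic of that storm is worth spelling out. A rough sketch with made-up numbers:

```typescript
// Illustrative only: three layers, each making up to three attempts.
// Attempts multiply across layers, they don't add.
const attemptsPerLayer = { frontend: 3, backend: 3, database: 3 };

const worstCaseQueriesPerClick = Object.values(attemptsPerLayer)
  .reduce((total, attempts) => total * attempts, 1);

console.log(worstCaseQueriesPerClick); // 27: one failing click can become 27 queries
```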
The AI looks at the file. You look at the blast radius. Your job is to ask: "If this code works perfectly, what else does it break?"
But this higher-order engineering is not something you learn in bootcamps, so where do Juniors fit into the new world?
The Junior: The Explorer
If you’ve been following along and you're early in your career and you're worried...good. That means you're paying attention. But here's what you need to understand: AI doesn't make you obsolete.
It makes you dangerous. *Seniors are constrained by their mental models. They know what "should" work, so they'll ask the AI to build the obvious thing.*
You don't have those priors yet.
You can ask the AI to try five different architectures in an afternoon without cognitive baggage. You're faster at exploration because you have less to unlearn.
And that brings me to a question the C-suite and hiring managers may still be asking: *why hire Juniors at all? If AI acts like a mid-level developer, why pay a human to learn?*
Because AI is an engine, not a driver.
The value of a Junior today is not "Code Production." It is Option Generation and Verification.
In the old world, if we weren't sure how to build a feature, we sent a Senior to prototype it for three days. That was expensive.
Now, we send a Junior. We say: "Here is the Spec. Use the AI to prototype three different ways to solve this. Map the trade-offs. Test the edge cases. Bring me the best one."
The Junior provides immediate ROI because they act as a Force Multiplier for the Senior.
They wade through the AI's hallucinations. They verify the libraries. They filter out the noise. They present the Senior not with blank text files, but with curated options.
(Metaphor): The Junior explores the AI jungle, mapping the paths the machine might take, bringing back the safe trails for the Senior.
They are valuable today because they do the legwork of exploration, allowing the Senior to make the high-leverage decision of selection. And in doing that work, they learn the judgment required to become Seniors.
The Senior: The Orchestrator
And the seniors in the room? You are the Orchestrators. You define the constraints. You decide where complexity is allowed and where it is forbidden.
You audit the seams where code touches other parts of the code base.
You ensure that while the AI writes individual notes, the melody remains coherent.
(Metaphor): Each AI-generated line is an instrument; only the Senior ensures it plays the right melody. You are the only one who can see the whole symphony.
The Conclusion: The Great Filter
I want to close by addressing the elephant in the room.
- We have established that AI provides infinite Generation Bandwidth.
- We have established that humans have fixed Understanding Bandwidth.
There is only one way to resolve that equation. It is not to read faster.
It is not to just "hire more people." The only way to survive is to become The Great Filter.
*In an era where adding code is free, the most expensive thing you can do is accept it.*
The highest-value engineering activity is no longer creating software. It is rejecting software.
If you cannot say no to code, you are no longer an engineer in this new Intelligence Age. You are a deployment pipeline with a salary.
Your value is looking at a "working" solution from the AI and saying: "No. This spikes our Mean Time to Understanding. Delete it. Simplify it. Do it again."
We need to slow down the ingestion to match the speed of our digestion.
We need to be the ones who say: "Just because we CAN generate a microservice for this in 30 seconds, doesn't mean we SHOULD."
Your job is to stand in front of the floodgates, to control the flow, to direct it safely, to ship with care and precision.
In a world where code flows like a river, your responsibility is clear: Hold back the tsunami of code. Don’t let complexity and cognitive debt engulf and overwhelm your system.
Don't just build the software. Be the only reason the software is still understandable.
You are the great filter: AI might be able to generate endlessly, but only you can decide what survives.
Today your career is no longer defined by how much you ship. It is defined by how much complexity you refuse to ship.
Be the constraint.
Protect your team's Mean Time to Understanding.
Thank you.
As I said, the talk needs some editing and polish, but I hope the theme and concept are thought-provoking. I hope the message is clear.
Our role is evolving, but not so much that all of your skills are obsolete. You might just need to spend a little less time on LeetCode and a little more time on architecture, and on getting better and more diligent at reviewing code.
That you should be optimising for Mean Time To Understanding when you commit your code (or that generated by AI).
Let me know if you think this would make an interesting talk, or if you would be doom scrolling X in the back?
Top comments (2)
Let me know what you think. I think it would be an interesting talk, and it's not something I have heard anyone talk about before, but I am biased.
If it sucks, tell me why; if you like it, tell me why.
Any interesting thoughts around it that could make it even better? I definitely want to hear those! 💗
Love this perspective! I'm going to have to re-read that again more thoroughly, very interesting.