There is a distinction in software development that the industry has spent twenty years pretending doesn't exist. It is the distinction between building software and understanding what you are building. The first is implementation. The second is engineering. They are not the same thing, they do not require the same skills, and conflating them is the single most expensive mistake a development organisation can make.
The mistake is now being turbocharged by AI. But to understand why, you first need to understand what was already broken.
Part One: The 85% Nobody Talks About
Two Kinds of Work
Ask most developers how long it takes to build a feature and they will give you an implementation estimate. How long to write the code, wire up the endpoints, get the tests green. That estimate — the part where fingers meet keyboard — accounts for roughly 10 to 15 percent of what building good software actually requires.
The other 85 to 90 percent is structuring. Understanding what the system is. Identifying where things belong, not just where they are needed. Naming the concepts correctly. Finding the natural boundaries in the domain. Modelling the business so that the code expresses it rather than merely approximating it.
This is the work that determines whether a system is maintainable in year five, extensible in year seven, or quietly replaced shortly after.
Most systems are being replaced by year seven. The 85% was skipped.
Three Approaches, One Honest Assessment
There are essentially three ways to approach building a system, and only one of them qualifies as engineering.
The first is upfront design. You model the domain completely before writing code. The risk is rigidity — the model is fixed before the code has had a chance to reveal its gaps. Reality has a way of not fitting the diagram.
The second is evolutionary modelling. You begin with a hypothesis about the domain and use code as a feedback instrument. The model and the implementation refine each other continuously. An hour into implementation the starting model may have changed dramatically — a new concept discovered, a responsibility reassigned, a boundary redrawn. That is not failure. That is the process working. The model remains the authority throughout, but it is a living authority — responsive and correctable, never frozen.
The third approach is template filling. You select a framework. You receive a user story, which functions as a work order. You find the place in the template where this kind of story goes. You implement it there. You close the story.
There is no model in this process. There is no conceptual centre. The framework is the authority, and the code documents what the framework was configured to do. Frameworks turned engineers into assembly-line workers, and the Singleton Paradox hid the cost: you only ever build each system once, so the system built with approach A can never be compared against the version that approach B would have produced. This is not a different kind of design. It is the absence of design, wearing design's clothes.
The Model as Construction Tool, Discovery Tool, and Filter
The perception is that domain modelling is slow — that it delays visible output while the team thinks instead of ships. The reality is the opposite.
A domain modelling session is twenty to thirty minutes at a whiteboard, followed by code that shapes the actual business interactions. This is not a prototype or a spike. It is production code — the domain coming into existence, business logic finding its natural form. By the end of the first day there is working code that expresses what the business does. The template developer, meanwhile, is configuring YAML, wiring injections, setting up repositories. The motion looks productive. Not a line of it describes the business.
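To make the contrast concrete, here is what that first-day output might look like. A minimal sketch, assuming an invented Subscription domain; the names and the rule are illustrative, not taken from any particular system:

```typescript
// Day-one output of a modelling session: production code that states
// what the business does. The domain here is invented for illustration.
class Subscription {
  constructor(
    private renewsOn: Date,
    private cancelled: boolean = false,
  ) {}

  // A business rule, expressed where it belongs.
  isActive(today: Date): boolean {
    return !this.cancelled && today <= this.renewsOn;
  }

  cancel(): void {
    this.cancelled = true;
  }
}
```

Nothing in it names a framework. Every line describes the business.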
This is the construction side of what a domain model does. But it has two further functions that are equally important.
It is a discovery tool. When implementation is hard — when a concept resists being placed, when a responsibility has no natural home — that difficulty is information. The model is telling you something is missing, or something is wrong. A trial-and-error developer experiences this friction as a local problem to solve locally. A modelling developer experiences it as the domain asking to be understood more precisely. The response is not a workaround. It is a model refinement.
It is also a filter. If a behaviour cannot be fitted naturally into the model — if no object has a clear reason to own it, if it contradicts what the model already captures — that resistance is a signal. Either the model needs a new concept, or the behaviour itself does not belong in the system. The model's inability to absorb something cleanly is not a failure of the model. It is the model doing its job, filtering out accidental complexity dressed as a requirement. If you cannot fit behaviour into the model, you probably do not need it.
A domain model is simultaneously the thing you build with, the instrument that tells you what you are missing, and the filter that tells you what does not belong. The industry has largely stopped building them.
Essential Complexity vs Accidental Complexity in Code
The essential/accidental distinction from Fred Brooks is not just an architectural principle. It applies at the level of every object, every responsibility, every line of code — and getting it right at that level is what separates systems that age well from systems that don't.
Consider a practical example. When building a system that communicates with external services, the essential complexity is what those communications are — what a request contains, what posting requires, what the business needs to express. The accidental complexity is how those communications happen — the transport protocol, the connection handling, the session management, the specific library in use this year.
Model the responsibilities first. A client object owns the mechanics of communication. A request abstraction defines what communication content looks like. A posting variant adds what posting specifically requires. These are modelled as business responsibilities, technology agnostic. The how — whether the underlying transport is HTTP, MQ, or a database — sits entirely behind those responsibilities, invisible to everything that depends on them.
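A minimal sketch of that shape, assuming hypothetical names; Request, PostingRequest, and Client here are illustrative, not a prescribed API:

```typescript
// Essential complexity: what a communication is, in business terms.
interface Request {
  // The business content of the message, independent of any transport.
  payload(): Record<string, unknown>;
}

// The posting variant adds what posting specifically requires,
// still saying nothing about how the message travels.
interface PostingRequest extends Request {
  postingDate(): Date;
}

// Accidental complexity: how communication happens.
// One implementation per technology, hidden behind the responsibility.
interface Client {
  send(request: Request): Promise<void>;
}

// Today it is HTTP. Tomorrow it could be MQ or a direct database write,
// and nothing that constructs requests would know the difference.
class HttpClient implements Client {
  constructor(private readonly endpoint: string) {}

  async send(request: Request): Promise<void> {
    await fetch(this.endpoint, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(request.payload()),
    });
  }
}
```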
The consequence is significant. The technology can change completely — from web service to message queue to direct database write — without touching a single line of the business logic that constructs and uses those requests. The essential complexity is stable. The accidental complexity is genuinely replaceable.
This is not an interface trick. It is what happens when you model responsibility first and let technology serve the model, rather than letting technology shape what responsibilities are possible. The difference only becomes visible when the technology needs to change — which it always does, eventually. At that point, a system where accidental complexity was kept genuinely separate from essential complexity absorbs the change quietly. A system where the framework grew roots into the business logic requires the business logic to change when the framework changes. The technology that was supposed to serve the domain ends up constraining it instead.
The User Story as Work Order
Something specific happened to the user story as agile methodology was industrialised. It began as an invitation — a prompt to have a conversation with a domain expert, to understand a piece of the business well enough to model it. It became a specification. Then a work order. Then a checkbox.
In its current form the user story arrives at the developer already closed. The conversation with the domain expert happened upstream, in refinement, in planning, in the product owner's head. The developer receives a summary and works from that. The question the developer asks is not "what is this telling me about the domain?" but "where in the template does this go?"
The diagnosis is visible in what developers say when asked where the hard part of a system is. A template developer describes framework complexity — which abstraction to use, which pattern applies, how to configure the integration. A modelling developer describes domain complexity — what the business is actually doing here, what concept is missing, what existing object is being asked to carry weight it was not designed for.
These are not the same question. They do not produce the same system. And over seven years, the difference between the systems they produce is not marginal.
Where Things Belong vs Where They Are Needed
The most consequential difference between template filling and domain modelling is not visible in the first sprint. It becomes visible in maintenance, and it compounds with every passing year.
A template developer fixes problems where they occur. A table misbehaves on page B, so page B gets adjusted. The fix works. The story is closed. What is not visible is what has just happened structurally: page B now owns part of the table's behaviour. The table behaves one way on page A and another way on page B, and both pages carry part of the responsibility for what the table does. The next developer to touch either page must understand both. Maintenance has doubled, invisibly, for that one component.
A modelling developer asks a different question: what owns this behaviour? The answer is the table itself. The table owns its own presentation. The page owns its usage of the table. A fix to the table propagates everywhere the table is used, because behaviour lives in the component, not in the pages that consume it.
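A compressed sketch of that ownership difference, with invented names:

```typescript
// The table owns its own presentation. A fix here propagates to every
// page that uses it, because the behaviour lives in the component.
class DataTable {
  constructor(private readonly rows: string[][]) {}

  render(): string {
    // One place decides how cells render, including the empty-cell rule.
    return this.rows
      .map((row) => row.map((cell) => cell || "(empty)").join(" | "))
      .join("\n");
  }
}

// Pages own their usage of the table, never its behaviour.
class ReportPage {
  constructor(private readonly table: DataTable) {}

  render(): string {
    return "Report\n" + this.table.render();
  }
}
```

The template developer's fix, by contrast, would live inside ReportPage, and the next page to show the same table would need it again.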
This is not an aesthetic preference. It is the mechanical difference between maintenance costs that stay flat and maintenance costs that compound.
Multiply this pattern across a codebase over five years and you have the prototype in production — a system held together with toothpicks, paperclips, and glue, where every workaround is load-bearing and every change requires understanding not what the system is, but what it has become.
The difference between fixing the problem where it occurs and fixing it where it belongs is the difference between prototype code and production code. At scale, it is the difference between a system that costs the same to maintain in year seven as it did in year one, and a system that is already being rewritten.
The Contradiction Problem
There is a specific consequence of building without a domain model that becomes critical at scale. It is underappreciated, and AI makes it significantly worse.
A domain model is not just a design preference. It is a contradiction-detection mechanism.
When business logic has a conceptual centre — a well-named domain object that owns its own behaviour — contradicting rules become visible. If two requirements make incompatible demands on the same object, you encounter the conflict when you try to model it. The structure surfaces the problem before it reaches production.
When business logic is scattered — across service methods, event handlers, configuration files — contradictions are invisible until they collide in production. Two requirements can contradict each other completely and coexist undetected for months, because there is no common reference point that would make the conflict visible. The system implements both rules, resolves the conflict arbitrarily at runtime, and produces behaviour that nobody designed and nobody can explain.
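The difference is easiest to see in miniature. A deliberately trivial sketch, with invented requirements:

```typescript
// With a conceptual centre, incompatible rules collide at modelling time.
// Requirement A: orders over 100 ship free.
// Requirement B: every order carries a flat shipping fee of 10.
class Order {
  constructor(private readonly total: number) {}

  // Both requirements claim this method. Writing it forces the conflict
  // into the open, and one resolution is chosen deliberately.
  shippingCost(): number {
    return this.total > 100 ? 0 : 10;
  }
}

// The scattered alternative: rule A lives in a checkout handler, rule B in
// an invoicing service. Both compile, both pass their own tests, and the
// contradiction surfaces only when the two code paths meet in production.
```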
CQRS, microservices, and event-driven architecture were proposed, in part, as responses to the complexity that accumulates without a domain model. The tragedy is that they add architectural elaboration without supplying the missing conceptual centre. They do not make contradictions visible. They distribute logic across more moving parts, which makes contradictions harder to see, not easier. The problem is obscured by the solution.
Part Two: AI Became the Framework
The Same Pattern, Faster
Which brings us to the present moment, and to the claim that AI is transforming software development.
It is. But not in the way most of the conversation assumes.
There are two ways to think about AI-assisted development, and they map precisely onto the distinction between design and template filling established in part one.
AI is revolutionary in the sense that you conceive what you want, express it, and something builds it. The implementation barrier has been dramatically lowered. Code that would have taken days takes minutes. This is real and significant.
But AI-assisted development is also pure template filling. You are not modelling. You are instructing. The output is code that documents what the prompt said, with AI as the framework. The assembly is faster, the templates are more flexible, the results are more immediately impressive. The absence of a modelling process is identical.
And it inherits both failure modes simultaneously.
From upfront design, it inherits rigidity at the point of prompting. The model — such as it is — is fixed in the prompt. The code cannot talk back, because you are not in dialogue with it. You are receiving output. The feedback loop that makes evolutionary modelling work — where implementation friction becomes structural insight — is broken. The AI absorbs the friction. You never feel it. You never learn from it.
From template filling, it inherits the absence of a conceptual centre. The logic lives in the prompts, scattered and unreconciled, exactly as it lived in the fat services and event handlers before it. Except now it is even less visible, because a service class at least had a name and a location in a codebase. A prompt has neither.
The framework abstracted the developer from the infrastructure. AI abstracts the developer from the code. Each layer of abstraction makes "it works" faster to achieve and the absence of a domain model harder to see.
What the industry is currently calling "AI produces spaghetti" is not a new problem. It is framework templating amplified. The spaghetti was already there. AI makes it faster to produce, more voluminous, and more convincing — because it arrives in clean syntax with passing tests. The structural absence underneath looks better than ever.
AI did not replace the framework. AI became the framework. And it inherited the same problem the framework always had — it can build anything except an understanding of what you are building.
The Maintenance Proposition Does Not Hold
The proposition being made for AI-assisted maintenance is that rewrites are now cheap, so structural problems do not accumulate the same way. This deserves examination.
A rewrite can reproduce the syntax of a system faster than ever before. What it cannot do is verify that the rewrite is correct in the only sense that matters for a business system — that it accurately represents what the business actually does. Correctness here is not syntactic. It is semantic. It requires a reference against which to check the implementation.
The reference is the domain model. And the domain model is exactly what was never built.
So the rewrite, however fast, produces new code that implements the same contradictions, the same scattered logic, the same implicit assumptions. It is not a fix. It is a reprint. The toothpicks are replaced with newer toothpicks. The paperclips are shinier. The structure is identical.
Consider the contradiction problem at scale. Two prompts with conflicting business logic — you will probably spot it. Twenty — possibly. Eighty — almost certainly not. There is no structure that makes the contradiction visible. A rewrite from those eighty prompts does not resolve the contradiction. It reproduces it in fresh syntax. And in another cycle, the same conversation about rewriting will begin again, for the same undiagnosed reasons.
What Disappears and What Doesn't
Frameworks will likely disappear, and probably sooner than the industry expects. Hibernate exists because writing database session management by hand is tedious and error-prone for humans. AI has no such limitation. It can write the queries, manage the sessions, handle the mapping — contextually, specifically, without a generic abstraction layer designed for every possible use case. The framework was a productivity tool for human limitations. As those limitations are removed, the justification for the framework dissolves. This is not a loss. Frameworks were always accidental complexity — complexity introduced by tools rather than by the problem itself.
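As a hypothetical illustration of that directness, here is the kind of contextual, specific data access code that needs no generic mapping layer. It assumes the standard node-postgres driver and an invented schema:

```typescript
import { Client } from "pg";

// No ORM, no session abstraction: a query written for this exact use case.
async function findOverdueInvoices(db: Client, cutoff: Date) {
  const result = await db.query(
    "SELECT id, amount FROM invoices WHERE due_date < $1 AND paid = false",
    [cutoff],
  );
  return result.rows;
}
```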
But the domain model does not disappear with the framework. It becomes more critical. Because the framework, for all its costs, at least imposed some structure. Generic, clumsy, domain-agnostic structure — but structure nonetheless. Without it, and without a domain model, the only thing standing between a system and total architectural entropy is the conceptual model in the developer's head.
Or its absence.
The Skill That Cannot Be Prompted
The ability to model a domain — to hold a structural representation in your head, refine it through implementation, and express it in code that means something beyond its own execution — does not appear to be a skill that AI can supply or that prompting can replicate.
It appears to correlate with a specific kind of spatial reasoning: the ability to see a three-dimensional object from its two-dimensional components, to hold structure in the mind and manipulate it without losing the whole. Developers who have this skill behave differently when they encounter implementation friction. Where a template developer sees a local problem to solve locally — a fix applied where the problem occurs rather than where it belongs — a modelling developer sees structural information. The friction is the domain asking to be understood more precisely. The response is not a workaround. It is a model refinement.
You cannot prompt your way to that response. The prompt eliminates the friction. And the friction was the signal.
The Only Honest Measure
There is a simple diagnostic for whether a system was built or merely assembled. Apply it after seven years.
Is maintenance getting cheaper or more expensive? A well-modelled system gets cheaper — the model matures, the team internalises it, changes become faster as understanding deepens. A template-filled system gets more expensive, as accidental complexity compounds and each change must navigate the accumulated residue of earlier decisions made without a model.
Are new requirements getting faster or slower to absorb? A well-modelled domain accelerates — each addition deepens understanding and reveals where the next extension naturally fits. A system without a conceptual centre slows — each requirement negotiates with the existing tangle rather than extending a coherent structure.
Has the rewrite conversation started?
The rewrite is not a sign of business ambition or technical progress. It is the bill arriving for the 85% that was skipped. And it will reproduce the conditions that made it necessary, because the organisation never learned what actually went wrong. The diagnosis will be "technical debt" or "legacy architecture." Rarely will it be accurate: no domain model was ever built, and without one, the rewrite begins the same accumulation from sprint one.
AI makes none of this cheaper in the long run. It makes the first two years cheaper and the subsequent five more expensive, because the prototype is produced faster and looks more convincing, and the discovery that it is a prototype comes later and costs more.
The 85% cannot be prompted. It cannot be templated. It cannot be abstracted away by a sufficiently powerful framework, however intelligent that framework becomes.
It requires understanding what you are building.
That has always been the hard part. It remains the hard part. And the industry's increasing sophistication at avoiding it is not progress.
It is a more expensive way of arriving at the same rewrite conversation, on roughly the same schedule, having learned roughly the same nothing.