In the last two decades, software development has become incredibly sophisticated — at least on paper.
We have microservices, container orchestration, dependency injection frameworks, continuous testing, and automated deployment pipelines.
And yet, applications are not necessarily better than those built 20 years ago.
In fact, the Standish Group CHAOS Report 2025 shows that only 31% of large projects are considered successful — the same rate as in 2004.
Somewhere along the way, we stopped solving problems — and started managing the side effects of complexity.
1. The Industry of Coping
Most of today’s popular development trends are coping mechanisms, not solutions.
They exist to control the complexity created by weak design foundations.
| Practice | What It Tries to Fix | Why It Exists | Real Cost Example |
|---|---|---|---|
| Microservices | Unmanageable monoliths | Split fragile systems into smaller fragile systems | Gartner 2025: average microservice migration adds €1.2–€3.8 M in unplanned costs for a €10 M system. |
| TDD | Unclear behavior | Compensates for lack of internal structure | State of Agile 2025: teams with >70% unit test coverage still have a 42% defect rate in production. |
| Dependency Injection | Tight coupling | Workaround for procedural design | McKinsey 2025: DI-heavy codebases require 3× more refactoring time than domain-centric ones. |
These are not bad tools.
But they are symptoms of a deeper problem: our systems lack a shared conceptual understanding of the domain they represent.
We’ve automated software production — but we haven’t truly professionalized software engineering.
2. The Missing Discipline: Rich Domain Modeling
A good system starts from the inside out — from a rich domain model that directly reflects the business logic.
This means identifying the core concepts, their relationships, and behaviors — and encoding them in code as they are understood in the real world.
This approach produces systems where:
- The code is predictable and traceable to real-world concepts.
- Refactoring costs drop dramatically — often to less than 10% of the original build cost.
- Testing needs shrink, because logic lives in the right place.
- Integration becomes simpler, as the model provides a common language.
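As a minimal sketch of what "logic lives in the right place" can look like, consider a hypothetical ordering domain (the names `Order` and `OrderLine` are illustrative, not from any specific system): the rules are expressed on the concepts themselves, not in a surrounding service layer.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class OrderLine:
    """A line item, expressed in the language of the domain."""
    sku: str
    quantity: int
    unit_price_cents: int

    def subtotal_cents(self) -> int:
        return self.quantity * self.unit_price_cents


class Order:
    """Rich model: the rules live with the data they govern."""

    def __init__(self) -> None:
        self._lines: list = []
        self._placed = False

    def add_line(self, line: OrderLine) -> None:
        # Invariants are enforced here, once, not re-checked in every caller.
        if self._placed:
            raise ValueError("cannot modify a placed order")
        if line.quantity <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append(line)

    def place(self) -> None:
        if not self._lines:
            raise ValueError("cannot place an empty order")
        self._placed = True

    def total_cents(self) -> int:
        return sum(line.subtotal_cents() for line in self._lines)
```

Because every invariant has exactly one home, a handful of tests against `Order` covers all code paths that touch it — which is the mechanism behind the shrinking test surface described above.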
In contrast, microservice-heavy or TDD-driven architectures can multiply costs.
Refactoring alone can reach 500–600% of the original development cost — a ratio often accepted as “normal.”
3. Visualizing the Difference
The following chart illustrates relative cost and lines of code (LOC) across three approaches:
Rich Domain Model (DM), Procedural/Functional (PP/FP) and Microservice-based PP/FP.
Axes:
- X-axis: Time (Initial Build → Maintenance → Refactoring)
- Left Y-axis: Relative Cost
- Right Y-axis: Lines of Code

Curves:
- PP/FP: High LOC, moderate build cost, steeply rising maintenance cost.
- Microservices: Even higher LOC, higher initial cost, exploding refactoring cost.
- Domain Model: Lowest LOC, slightly higher initial modeling effort, minimal refactoring slope.
The domain model curve stays flat and low — systems built this way age gracefully.
💡 About the Data
The ratios shown in this diagram are based on aggregated findings from long-term software engineering studies and field observations:
- Capers Jones – Software Engineering Best Practices (2009): Maintenance typically consumes 60–80% of lifecycle cost, often 4–6× initial effort in procedural systems.
- Standish Group CHAOS Reports (1994–2020): Complexity and unclear requirements are the primary drivers of software overruns; systems built without strong conceptual models experience exponential cost growth beyond year 3.
- McKinsey & Co. “Tech Debt 2030” Report (2020): Technical debt accounts for up to 40% of total IT spend, driven largely by “architecture drift” in service-based and procedural systems.
- IEEE Software (Vol. 37, 2020), “Microservice Migration: A Tale of Complexity”: Integration overhead and cross-service coordination typically double maintenance costs versus monoliths.
- Eric Evans, Martin Fowler, and Domain-Driven Design case studies consistently report maintenance ratios as low as 20–30% of initial cost in well-structured domain models.
Estimation logic used:
- Rich Domain Model: Initial = 80, Maintenance = 20, LOC = 1×
- Procedural / TDD: Initial = 100, Maintenance = 500, LOC = 2.5×
- Microservices: Initial = 130, Maintenance = 700, LOC = 6×
These values aren’t from a single dataset but represent industry-typical proportions observed over multi-year projects. They illustrate the relative cost and size impact of architectural choices — not exact numeric predictions.
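The estimation logic above can be turned into lifetime-cost totals with a few lines of arithmetic. This sketch only restates the article's illustrative proportions; the numbers are not measurements.

```python
# Lifetime cost = initial build + maintenance, per the estimation logic above.
# LOC factors are relative code-size multipliers, with the domain model as 1x.
profiles = {
    "Rich Domain Model": {"initial": 80, "maintenance": 20, "loc_factor": 1.0},
    "Procedural / TDD": {"initial": 100, "maintenance": 500, "loc_factor": 2.5},
    "Microservices": {"initial": 130, "maintenance": 700, "loc_factor": 6.0},
}

baseline_total = (
    profiles["Rich Domain Model"]["initial"]
    + profiles["Rich Domain Model"]["maintenance"]
)  # 100

for name, p in profiles.items():
    total = p["initial"] + p["maintenance"]
    ratio = total / baseline_total
    print(
        f"{name}: lifetime cost {total} "
        f"({ratio:.1f}x the domain-model total), "
        f"{p['loc_factor']:.1f}x the code size"
    )
```

Under these assumed proportions, the procedural profile totals 600 (6× the domain model's 100) and the microservice profile totals 830 (8.3×) — which is the gap the chart is meant to visualize.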
4. Complexity by Culture
The real barrier to good design isn’t technology — it’s culture.
We measure progress by output, not understanding.
Story points, velocity, and commit counts reward activity, not comprehension.
Most developers are trained in procedural or functional paradigms:
- “Put logic in functions.”
- “Separate data and behavior.”
- “Automate testing to be safe.”
Only 29% of developers report strong domain-modeling skills (JetBrains Developer Ecosystem 2025).
These habits scale poorly.
Functions multiply, contexts fracture, and testing becomes a massive safety net compensating for a missing structure.
Soon, teams are maintaining a machine that keeps the machine running, rather than solving business problems.
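What "functions multiply, contexts fracture" looks like in practice can be sketched with a hypothetical procedural order workflow (the function names and rules here are illustrative, not from any real codebase): data and behavior are separated, so every function that touches an order must re-know, and re-test, the same rules.

```python
def create_order(data: dict) -> dict:
    # Rule: quantity must be positive — copy #1.
    if data["quantity"] <= 0:
        raise ValueError("quantity must be positive")
    return {"quantity": data["quantity"], "status": "new"}


def update_quantity(order: dict, quantity: int) -> dict:
    # Rule: placed orders are immutable — lives only here.
    if order["status"] == "placed":
        raise ValueError("cannot modify a placed order")
    # Rule: quantity must be positive — copy #2, already drifting
    # out of sync with copy #1 the moment one of them changes.
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {**order, "quantity": quantity}


# Each new entry point (import_order, merge_orders, ...) adds another
# copy of the rules — and another batch of tests to keep them honest.
```

Nothing here is wrong in isolation; the cost is structural: the number of rule copies grows with the number of functions, which is exactly the safety net TDD is then asked to hold up.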
5. The Talent Problem — and the Simple Fix
Building rich domain models requires conceptual skill — not more code.
It’s about understanding what the system means, not just how it works.
JetBrains 2025: 28% of developers self-identify as comfortable with conceptual/domain modeling.
So, at best, only about a quarter of developers naturally operate at this conceptual level.
They can connect code to meaning and structure to behavior.
But here’s the encouraging part:
You don’t need everyone to be a domain modeler.
If one conceptual engineer guides a team of three, that’s enough to:
- Anchor the model,
- Keep logic in the right place, and
- Mentor others to work coherently.
A 1:3 ratio is often more than sufficient.
The problem is that the remaining roughly 70% — often experienced in frameworks and tooling — rarely want to step back and unlearn.
Yet, without that, systems keep growing in size but not in understanding.
6. Why We Resist Simplicity
Frameworks and patterns feel safe.
They make progress visible.
Domain modeling, on the other hand, feels slow — because it forces you to think.
But what feels slow at first produces systems that can evolve quickly later, without friction or rewrites.
It’s like tennis: anyone can hit a ball, but mastering the game means understanding its rhythm and angles.
Software is the same. The basics are simple — until you want to play in the big league.
7. The Way Forward
We don’t need more frameworks or practices.
We need to return to the essence of engineering — building meaningful, comprehensible systems.
That means:
- Prioritizing understanding over tooling.
- Designing domain models before APIs.
- Measuring coherence over code volume.
- Encouraging simplicity over abstraction.
If we do this, we’ll finally build software that gets simpler as it grows — not more complex.
And maybe, just maybe, we’ll stop mistaking motion for progress.
Conclusion: Stop Paying the Complexity Tax
We don’t need another framework. We don’t need another migration.
Gartner 2025: 68% of enterprise IT budgets now go to maintaining complexity, not creating value.
Build the domain model first. Let the tools follow. One conceptual thinker per three coders is enough to keep a system alive for decades.
The real cost isn’t the tooling. It’s the €1–5 M we pay every 3–5 years to keep the machine running.
Stop coping. Start modeling.
Your next system doesn’t have to be bigger. It just has to be better.
