The stated intent of "Clean Code" is that any developer should be able to walk into any project at any time and be productive as soon as possible. Similarly, the intent of "Clean Architecture" is to keep every technical part of an application easy to replace, so that business logic never becomes coupled to technology.
The goals are clear: clarity, simplicity, and maintainability. These are not academic ideals; they are economic drivers. They exist to ease implementation, minimize bugs, and guarantee the long-term adaptability of a system.
But has this been achieved with current best practices? Or is the current industry standard missing the point entirely?
A strange paradox has emerged. The industry has become obsessed with a version of "Clean" that prioritizes Technical Decoupling over Business Clarity. Software engineering has produced a generation of "Lego experts"—specialists who have mastered the art of connecting blocks. They understand every shape, every stud, and every tube. They can explain exactly how to inject one block into another and how to layer them into symmetrical rows.
But ask them to "Build a fire station," and the response is often a blank stare. The result is frequently a system where every part is technically "pure" and perfectly decoupled, yet the "Fire Station"—the actual business purpose—is nowhere to be found.
In the quest for technical perfection, modern architectures have become semantically hollow. Systems are perfectly layered but functionally unreadable. A developer can spend weeks learning how a dependency is injected without ever gaining a clear grasp of what the application actually does for the business.
The Brooksian Divide: Essential vs. Accidental
To understand why the "Fire Station" disappears, we must look to Fred Brooks. In "No Silver Bullet," he distinguished between:
- Essential Complexity: The inherent difficulty of the business problem itself (e.g., the rules of a risk analysis or an insurance claim).
- Accidental Complexity: The difficulty created by the choice of tools (e.g., DI frameworks, microservice orchestration, or boilerplate).
In an ideal scenario, an application would consist only of Essential Complexity. The code would be a direct, legible mirror of the business requirements. However, many projects start "dirty"—not through carelessness, but because frameworks are chosen before the domain is understood. When a team decides a system will be "Microservices" or "managed by Spring" before mapping the domain, they let Accidental Complexity drive the Essential. This forces business logic to adapt to framework constraints, compromising longevity. If the framework evolves or dies, the business logic is taken hostage.
The Rich Domain Model as the Roadmap
If Essential Complexity is to be the roadmap, it must be visible. This is where the Rich Domain Model comes in. An object is not a brainless data container; it is the definition of a responsibility: behavior lives with the data, and invariants live where they belong.
We are often told these models are "Fat," but in reality, they are Encapsulated. When an object knows its own responsibilities, logic is located exactly where the data lives. This clarity is the foundation for avoiding the "Lego" trap.
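To make this concrete, here is a minimal sketch of a rich domain object. The InsuranceClaim type, its rules, and its amounts are all invented for illustration; the point is only that the invariant is enforced where the data is born, and the business rule sits beside the data it governs.

```java
import java.math.BigDecimal;

// A hypothetical rich domain object: behavior and invariants live
// with the data they protect, not in an external "ClaimService".
public final class InsuranceClaim {

    public enum Status { SUBMITTED, APPROVED }

    private final BigDecimal claimedAmount;
    private Status status = Status.SUBMITTED;

    public InsuranceClaim(BigDecimal claimedAmount) {
        // Invariant: a claim can never exist with a negative amount.
        if (claimedAmount.signum() < 0) {
            throw new IllegalArgumentException("A claim amount cannot be negative.");
        }
        this.claimedAmount = claimedAmount;
    }

    // The business rule is located exactly where the data lives.
    public void approve(BigDecimal coverageLimit) {
        if (claimedAmount.compareTo(coverageLimit) > 0) {
            throw new IllegalStateException("Claim exceeds the coverage limit.");
        }
        this.status = Status.APPROVED;
    }

    public Status status() {
        return status;
    }
}
```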
The Case Against Technical Decoupling: A Modeling Comparison
Let us look at a simple requirement: The application must permanently store important documents. This is not a technical concern; it is a business invariant. If documents are not stored, the system is incorrect.
The Framework-First Interpretation (The Lego Path)
In a framework-driven architecture, the first modeling step is the implementation choice (e.g., Amazon S3). An S3Repository is introduced, an S3Client is injected into it, and the repository is injected into every component.
From a dependency perspective, this expresses a multiplicity that doesn't exist: it implies there could be many "Storages" when, for this business, there is only one. Infrastructure details (clients, credentials) flow into the domain via constructors or @Inject annotations. Dependency Injection appears "necessary" here only to manage the fragmentation created by treating a singular capability as a swappable block.
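A hedged sketch of what this framework-first path tends to look like in a Spring-style codebase. The class names are hypothetical; the annotations are standard Spring stereotypes and JSR-330 @Inject.

```java
import jakarta.inject.Inject;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import software.amazon.awssdk.services.s3.S3Client;

// The implementation choice (S3) is the first modeling step, and every
// consumer inherits it through its constructor.
@Repository
class S3DocumentRepository {

    private final S3Client s3Client; // infrastructure in the domain's signature

    @Inject
    S3DocumentRepository(S3Client s3Client) {
        this.s3Client = s3Client;
    }

    void save(byte[] documentBytes) {
        // s3Client.putObject(...) elided; the point is the wiring above.
    }
}

// The fragmentation propagates: the service also needs wiring, and so
// does everything that uses the service, all the way up the graph.
@Service
class DocumentArchivalService {

    private final S3DocumentRepository repository;

    @Inject
    DocumentArchivalService(S3DocumentRepository repository) {
        this.repository = repository;
    }
}
```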
The Essential-First Interpretation (The Structural Path)
In the business model, there is exactly one permanent storage capability. It is a singular, invariant landmark—like the Front Desk of a Hotel. A guest doesn’t carry a front desk with them; the front desk is a foundational part of the building that simply exists.
Modeled honestly, PermanentStorage is therefore singular by definition and expressed as behavior, not as an owned dependency: PermanentStorage.store(document)
The implementation choice (S3) is an accidental concern. PermanentStorage encapsulates this entirely. Configuration is loaded internally in a static initialization block; the S3Client is constructed privately. No infrastructure leaks out. Because the capability is singular and invariant, there is no dependency to inject and no variability to manage.
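A minimal sketch of that landmark, assuming the AWS SDK v2 client and a hypothetical Document record. Neither the configuration nor the client ever escapes the class.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

// Assumed domain type for this sketch.
record Document(String id, byte[] content) {}

// The singular landmark: the S3 choice is an accidental, internal detail.
public final class PermanentStorage {

    private static final String BUCKET;
    private static final S3Client CLIENT;

    static {
        // Configuration is loaded internally, once; no container involved.
        BUCKET = System.getenv().getOrDefault("DOCUMENT_BUCKET", "documents");
        CLIENT = S3Client.create(); // region and credentials from the environment
    }

    private PermanentStorage() {} // a landmark, not an instantiable dependency

    public static void store(Document document) {
        CLIENT.putObject(
                PutObjectRequest.builder().bucket(BUCKET).key(document.id()).build(),
                RequestBody.fromBytes(document.content()));
    }
}
```

Callers simply write PermanentStorage.store(document); there is nothing to inject and nothing to configure at the call site.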
Deriving the Consequences
The critical observation is this: Dependency Injection becomes necessary only after Technical Decoupling has already compromised the essential model.
In the framework-first model, infrastructure concerns pollute constructors and dependency graphs. In the essential-first model, permanent storage is represented as a singular capability. No object declares it as a dependency because it isn't one. There is no wiring, no lifecycle management, and no container configuration.
Wherever DI appears indispensable, it is worth asking: What essential variability does this dependency represent? In practice, the answer is almost always "none." The variability is technical, not business-driven. Clean Architecture is not achieved by adding indirection; it is achieved by ensuring code reflects the structure of the business. When a capability is singular in reality, modeling it as such is Structural Honesty.
The “But It’s Easier to Test” Objection
The common defense is that DI makes testing easier. This argument inverts the purpose of testing. Testing exists to validate business behavior, not to justify a fragmented model.
In a framework-first model, tests primarily validate structural compliance: that a service calls a repository, or that wiring behaves as expected. These tests are "easy" because they avoid exercising the real system.
In an essential-first model, the domain remains intact. Testing does not require mocks or containers. Instead, the internal implementation of the PermanentStorage landmark is swapped for a fast in-memory variant. A call to PermanentStorage.store(Document) executes the same business path in production and in tests. This results in fewer tests but higher confidence, as failures point directly to violated business rules rather than broken mocks.
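One hypothetical way to realize that swap while keeping the public surface of the earlier sketch unchanged: the landmark owns a package-private seam that only tests can reach. The Document record is the one assumed above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Same public surface as before; the internals gain a package-private
// seam so tests can swap in a fast in-memory variant without mocks,
// containers, or any change at the call sites.
public final class PermanentStorage {

    interface Backend { void store(Document document); }

    private static volatile Backend backend = PermanentStorage::storeInS3;

    private PermanentStorage() {}

    public static void store(Document document) {
        backend.store(document); // the same path runs in production and in tests
    }

    private static void storeInS3(Document document) {
        // The S3Client call from the earlier sketch lives here.
    }

    // Reachable from tests in the same package; invisible to callers.
    static Map<String, byte[]> useInMemoryBackendForTests() {
        Map<String, byte[]> memory = new ConcurrentHashMap<>();
        backend = doc -> memory.put(doc.id(), doc.content());
        return memory;
    }
}
```

A test calls useInMemoryBackendForTests(), exercises the real business path through PermanentStorage.store(...), and asserts on the returned map.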
Testing should adapt to the model. The model should not be distorted to accommodate testing.
The Singleton Problem: Why Inefficiency Looks Like Necessity
If this "Essential-First" approach is superior, why is the industry addicted to complexity? The answer lies in the Singleton Problem.
Every software system is a singleton. It is built once, under unique historical conditions, with no parallel reference implementation. You cannot build the same system twice—once with a Rich Domain Model and once with a "Magic" framework—and compare the 10-year maintenance costs in a lab.
As long as a system functions and doesn't catastrophically fail, its architectural quality remains unobservable from the outside. This creates a dangerous illusion: inefficiency looks like necessity, and accidental complexity looks inherent. In this vacuum, first-principles reasoning is the only way to evaluate quality.
Self-Documenting Code is a Litmus Test
The litmus test for Clean Code is simple: Could a business expert without programming knowledge read through the domain implementation and validate what is happening?
They may not understand the syntax, but the logic should be so explicit that they recognize their own business rules. Consider PermanentStorage.store(Document): a business expert understands that intent immediately. When we use "shortcuts" to hide code (like magic DI), we erase the roadmap and make it impossible for a human to trace the "thought" behind the code.
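For illustration, a hypothetical claim-settlement flow composed from the sketches above. Policy and SettlementLetter are invented stand-ins; the point is that the steps read like the business rule itself.

```java
import java.math.BigDecimal;
import java.nio.charset.StandardCharsets;

// Invented stand-ins so the sketch is self-contained.
record Policy(BigDecimal coverageLimit) {}

final class SettlementLetter {
    static Document draftFor(InsuranceClaim claim) {
        // A real implementation would render the claim's details.
        return new Document("settlement-letter",
                "Approved".getBytes(StandardCharsets.UTF_8));
    }
}

// A domain expert can validate each line against the business rules.
public final class ClaimSettlement {

    public void settle(InsuranceClaim claim, Policy policy) {
        claim.approve(policy.coverageLimit());               // the business rule
        Document letter = SettlementLetter.draftFor(claim);  // the business artifact
        PermanentStorage.store(letter);                      // the business invariant
    }
}
```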
Conclusion: The Economic Reality of Structural Honesty
The original goals of "Clean" remain as valid as ever: a system where any developer can be productive immediately, where technical parts are easy to replace, and where business logic is never held hostage by technology.
However, the current industry default of Technical Decoupling frequently fails to meet these goals because it prioritizes the connection over the purpose. By focusing on the "Lego studs" (the injections and interfaces) rather than the "Fire Station" (the business goal), the industry has traded semantic clarity for structural ceremony.
The Architectural Balance Sheet
| Goal | The Current "Clean" Failure | The Essential-First Alternative |
|---|---|---|
| Productivity | Developers spend weeks deciphering DI graphs and Service Layers to learn what the application does. | New developers grasp intent in hours by reading the Rich Domain Model—the roadmap of the business. |
| Replaceability | Indirection is added everywhere "just in case," creating a fragmented maze that is actually harder to refactor. | Technical details are localized at the point of Responsibility. Swapping S3 for a local disk changes one landmark, not the whole system. |
| Maintainability | The system is tied to framework lifecycles; "accidental" updates consume months of development time. | The Core Logic is isolated from framework churn, remaining a maintainable asset for decades. |
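To make the Replaceability row concrete: under the earlier sketch's assumptions, swapping S3 for a local disk rewrites only the landmark's internals. Every caller keeps writing PermanentStorage.store(document).

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical local-disk variant: only the internals of the landmark
// change; the public surface and every call site stay identical.
public final class PermanentStorage {

    private static final Path ROOT =
            Path.of(System.getenv().getOrDefault("DOCUMENT_ROOT", "/var/documents"));

    private PermanentStorage() {}

    public static void store(Document document) {
        try {
            Files.createDirectories(ROOT);
            Files.write(ROOT.resolve(document.id()), document.content());
        } catch (IOException e) {
            throw new UncheckedIOException("Document could not be stored.", e);
        }
    }
}
```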
Clean Architecture is not about being philosophically pure; it is about Structural Honesty. It ensures that the business purpose is the most visible thing in the codebase, turning code into a living reflection of real-world requirements.
The economic gains of this honesty are profound and compounding. When we stop building "Lego blocks" for hypothetical scenarios and start building the "Landmarks" the business actually requires, we slash the cost of change. Debugging becomes immediate because failures surface in the responsible domain objects, revealing business context rather than a tangled web of mocks and adapters.
Ultimately, by prioritizing expressiveness and the domain over tools and ceremony, we create software that endures as an asset rather than a liability. In a world of singleton systems, the only way to evaluate quality is through first principles: start with the business truth, build with structural honesty, and let the tools serve the structure—never the other way around.