<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Leon Pennings</title>
    <description>The latest articles on DEV Community by Leon Pennings (@leonpennings).</description>
    <link>https://dev.to/leonpennings</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3596884%2Feba64cf4-e1c3-4a53-8a5f-6a340619080e.JPG</url>
      <title>DEV Community: Leon Pennings</title>
      <link>https://dev.to/leonpennings</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/leonpennings"/>
    <language>en</language>
    <item>
      <title>Parts in transit - Why most distributed systems are prematurely complex</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Sun, 10 May 2026 20:29:58 +0000</pubDate>
      <link>https://dev.to/leonpennings/parts-in-transit-why-most-distributed-systems-are-prematurely-complex-378e</link>
      <guid>https://dev.to/leonpennings/parts-in-transit-why-most-distributed-systems-are-prematurely-complex-378e</guid>
      <description>&lt;h2&gt;
  
  
  The incomparability problem
&lt;/h2&gt;

&lt;p&gt;Here is a question that has no clean answer.&lt;/p&gt;

&lt;p&gt;How do you know whether the architecture you chose was the right one?&lt;/p&gt;

&lt;p&gt;Not right in the sense of working — most systems work, eventually, after enough effort. Right in the sense of optimal. Right in the sense that the complexity you introduced was warranted by the problem you were solving, and that a simpler approach would have cost more rather than less.&lt;/p&gt;

&lt;p&gt;The honest answer, in most cases, is that you cannot know. Because the alternative was never built.&lt;/p&gt;

&lt;p&gt;This is not a gap in the data. It is the mechanism of the problem. Most systems are built only once. There is no second system built with different assumptions, run for five years, and compared on total cost of ownership, ease of change, and operational stability. The counterfactual does not exist. Therefore the cost of the wrong choice — if it was the wrong choice — is permanently invisible.&lt;/p&gt;

&lt;p&gt;None of this is to say that distributed systems cannot work. Many organisations have made them function, sometimes at considerable scale — usually through exceptional engineering discipline, strong platform investment, and genuine operational maturity. The question is different: how much of the total effort, over years, went into managing the consequences of the distribution itself, rather than advancing the domain? And would a simpler boundary choice have delivered more value with less sustained overhead? The counterfactual remains hard to prove, which is precisely why we need sharper prospective indicators.&lt;/p&gt;

&lt;p&gt;And here is what makes the problem genuinely difficult: the entire industry tends to converge on the same patterns at the same time. When every team uses a similar stack, incurs similar coordination overhead, and grows to a similar size — those costs stop being visible as costs. They become the definition of what software costs. Normal and wasteful become indistinguishable.&lt;/p&gt;

&lt;p&gt;So the question sharpens. If we cannot compare architectures retrospectively, is there anything we can measure prospectively — before five years have passed — that gives us a leading indicator of whether we are building something appropriately simple, or something unnecessarily complex?&lt;/p&gt;

&lt;p&gt;There is. And it comes from an unlikely place.&lt;/p&gt;




&lt;h2&gt;
  
  
  The warehouse and the system boundary
&lt;/h2&gt;

&lt;p&gt;Consider an order fulfilment operation. An order arrives. A picker walks to the rack holding the product, picks it, and places it on the assembly line. Routine.&lt;/p&gt;

&lt;p&gt;Now consider what happens when that order is cancelled.&lt;/p&gt;

&lt;p&gt;If the picker has not yet left the rack, cancellation is a system operation. One record updated. The state change is contained. The cost is negligible and the outcome is certain.&lt;/p&gt;

&lt;p&gt;If the picker is already walking the floor — part in hand, mid-transit — the picture changes entirely. The picker must be located and reached. The instruction must be communicated and confirmed. The picker turns around, returns the part, re-shelves it in the correct position, and logs the return. The assembly line must be told the part is not coming and adjust accordingly. Each of those steps can fail. Each failure requires its own recovery. If the picker has already placed the part on the line, someone else must retrieve it, the line has already reacted to its arrival, and the cleanup compounds further.&lt;/p&gt;

&lt;p&gt;The correction costs more than the original action. Not marginally more — multiplicatively more. More people, more coordination, more opportunity for secondary failure, and a system left in a state requiring verification before it can be trusted again.&lt;/p&gt;

&lt;p&gt;This is the principle that makes architectural cost measurable before a system is built:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;As long as domain actions happen within a single system boundary, the cost of failure is a rollback. The moment actions propagate outside that boundary, the cost of failure becomes coordination.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is not a preference. It is a structural property of distributed systems, and it applies regardless of how well the coordination is engineered. You can manage the cost with better tooling. You cannot eliminate it. It is inherent to the boundary crossing.&lt;/p&gt;

&lt;p&gt;The warehouse makes this visible in a way that software obscures. In the warehouse, you can see the picker walking. You can see the empty rack. You can see the stalled line. The cost of the part in transit is physically apparent. In software, the equivalent states — the uncommitted saga step, the unacknowledged event, the stalled compensating transaction — are invisible unless you built dedicated instrumentation to see them. The cost is identical. The visibility is not. That invisibility is precisely why the cost became acceptable.&lt;/p&gt;

&lt;p&gt;The well-run warehouse minimises the time parts spend in transit, because parts in transit are the expensive state. The leading indicator of a well-designed system is the same: how much of the domain work happens within a single rollback boundary, and how much crosses outside it?&lt;/p&gt;

&lt;p&gt;Rollbackability — the degree to which a failed action can be fully undone by the system without external coordination — is a concrete, prospective benchmark for simplicity. If you are designing a system and the failure path requires coordinating compensation across multiple services, you have already committed to a significant and permanent cost. The question is whether the benefit justified it.&lt;/p&gt;

&lt;p&gt;In most cases, that question was never asked.&lt;/p&gt;




&lt;h2&gt;
  
  
  A concrete example: order creation
&lt;/h2&gt;

&lt;p&gt;Take a canonical domain flow: an order is created, inventory is reserved, an invoice is generated, a shipment is planned. Four concepts. One business action. It either succeeds completely or it does not happen.&lt;/p&gt;

&lt;p&gt;In a monolith with a well-modelled domain, this is the entirety of the orchestration:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Transactional&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;OrderConfirmation&lt;/span&gt; &lt;span class="nf"&gt;createOrder&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OrderRequest&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;Order&lt;/span&gt; &lt;span class="n"&gt;order&lt;/span&gt;       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Order&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="nc"&gt;Inventory&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;reserve&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="nc"&gt;Invoice&lt;/span&gt; &lt;span class="n"&gt;invoice&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Invoice&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="nc"&gt;Shipment&lt;/span&gt; &lt;span class="n"&gt;shipment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Shipment&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;OrderConfirmation&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;order&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;invoice&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;shipment&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The database transaction is the system boundary. If anything fails, nothing happened. The domain concepts — Order, Inventory, Invoice, Shipment — do the work. The technology serves them. Rollbackability is total. The failure path costs nothing beyond the failed attempt itself.&lt;/p&gt;

&lt;p&gt;This example is deliberately straightforward — but the principle holds as domain complexity increases. In fact, the more complex the domain, the more important it becomes that the infrastructure does not add noise. A complex financial workflow with regulatory holds is hard enough to reason about correctly without the additional burden of distributed coordination, partial failure states, and eventual consistency layered on top of it.&lt;/p&gt;

&lt;p&gt;Now split those four concepts across four services. The business requirement has not changed by a single word. What changes is everything else.&lt;/p&gt;

&lt;h3&gt;
  
  
  The infrastructure required before writing a line of business logic
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;A message broker.&lt;/strong&gt; Services cannot call each other synchronously if you want any resilience. Kafka or RabbitMQ: a three-node production cluster, topic design, schema registry, retention policies, consumer group monitoring, and a local development environment every developer must run and maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Saga infrastructure.&lt;/strong&gt; There is no transaction. Coordination must be made durable — if the orchestrator crashes mid-flow, it must resume from the correct step. This means a saga framework (Axon, Temporal, AWS Step Functions — each a substantial system with its own operational model and learning curve) or a hand-rolled saga state table with step tracking and a crash recovery process. Either way, there is now a fifth service whose entire existence is accidental complexity. It owns no domain concept. It exists solely because the transaction boundary was removed.&lt;/p&gt;
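
&lt;p&gt;To make the shape of that fifth service concrete, here is a deliberately minimal sketch of a hand-rolled orchestrator's core loop. The step and compensation names are illustrative, not taken from any framework, and a production version would persist its progress to a saga state table rather than hold it in a field:&lt;/p&gt;

```java
// Hypothetical sketch of a hand-rolled saga orchestrator's core loop.
// Step and compensation names are illustrative; a real version would
// write completedSteps durably so a crashed orchestrator can resume.
public class OrderSaga {
    static final String[] STEPS =
        {"createOrder", "reserveInventory", "createInvoice", "planShipment"};
    static final String[] COMPENSATIONS =
        {"cancelOrder", "releaseInventory", "voidInvoice", "cancelShipment"};

    int completedSteps = 0; // persisted to a saga state table after every step

    // Run the flow; if step failAt fails, unwind completed steps in reverse.
    public String run(int failAt) {
        StringBuilder log = new StringBuilder();
        for (int i = 0; i != STEPS.length; i++) {
            if (i == failAt) {
                for (int j = i - 1; j >= 0; j--) {
                    // Each compensation is itself a remote call that can
                    // fail, and must itself be retried and made idempotent.
                    log.append("compensate:").append(COMPENSATIONS[j]).append(" ");
                }
                return log.toString().trim();
            }
            completedSteps = i + 1; // durably recorded before the next step
            log.append("do:").append(STEPS[i]).append(" ");
        }
        return log.toString().trim();
    }
}
```

&lt;p&gt;Note what the unwind loop implies: every compensating call is another boundary crossing with its own failure modes, which is why the recovery machinery keeps growing once the transaction is gone.&lt;/p&gt;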

&lt;p&gt;&lt;strong&gt;Distributed tracing.&lt;/strong&gt; Four services produce four independent log streams with no shared identity unless you build one. Jaeger or Zipkin for the trace infrastructure. Every service propagates a correlation ID in HTTP headers, event envelopes, and log output. A log aggregation stack on top, because reconstructing an incident across four separate log streams without tooling is not a debugging workflow — it is an archaeology project.&lt;/p&gt;
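
&lt;p&gt;The propagation rule itself is small; the cost is that every service must apply it correctly on every hop, forever. A minimal illustration of the rule, with plain strings standing in for HTTP headers and event envelopes:&lt;/p&gt;

```java
// Illustrative sketch of correlation-ID propagation. Every hop must
// carry the inbound ID forward or the trace breaks at that service.
// Plain strings stand in here for HTTP headers and event envelopes.
import java.util.UUID;

public class Correlation {
    // Reuse an inbound ID if present; otherwise this service is the
    // entry point of the flow and must mint one.
    public static String ensureId(String inboundHeader) {
        if (inboundHeader == null || inboundHeader.isEmpty()) {
            return UUID.randomUUID().toString();
        }
        return inboundHeader;
    }

    // Every log line and outbound message carries the same ID, so the
    // aggregation stack can reassemble one business action later.
    public static String logLine(String correlationId, String message) {
        return "[cid=" + correlationId + "] " + message;
    }
}
```

&lt;p&gt;One service that forgets to call the equivalent of ensureId is enough to orphan every downstream log line from the trace.&lt;/p&gt;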

&lt;p&gt;&lt;strong&gt;Idempotency handling — in every service.&lt;/strong&gt; Message brokers guarantee at-least-once delivery. The same event will arrive twice. Every consumer must handle this without creating two invoices or two shipments. An idempotency key strategy per event type. A deduplication store — typically a processed-events table — checked on every inbound message. This is not a framework you install. It is code you write, in every service, correctly, and maintain forever.&lt;/p&gt;
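
&lt;p&gt;The core of that per-service code is a guard like the following sketch, which uses an in-memory set purely as a stand-in for the processed-events table a real service would keep in its own database:&lt;/p&gt;

```java
// Hypothetical sketch of the deduplication guard every consumer needs
// under at-least-once delivery. The HashSet is a stand-in: a real
// service checks a processed-events table in its own database, inside
// the same transaction as the domain work.
import java.util.HashSet;

public class IdempotentConsumer {
    private final HashSet processedEventIds = new HashSet();
    private int invoicesCreated = 0;

    // Returns true only when the event is handled for the first time.
    public boolean handle(String eventId) {
        if (!processedEventIds.add(eventId)) {
            return false; // duplicate delivery: acknowledge and do nothing
        }
        invoicesCreated++; // the actual domain work happens exactly once
        return true;
    }

    public int invoicesCreated() { return invoicesCreated; }
}
```

&lt;p&gt;The guard is trivial in isolation. The cost is that it must exist, be correct, and stay correct in every consumer of every event type.&lt;/p&gt;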

&lt;p&gt;&lt;strong&gt;Compensating transactions — per failure path.&lt;/strong&gt; The rollback equivalent. Designed, coded, tested, and maintained per service per failure scenario. For four services the paths are: inventory fails — cancel order; invoice fails — release inventory, cancel order; shipping fails — void invoice, release inventory, cancel order. Each compensation is a domain operation that must exist, be reachable, be idempotent, and be tested both in isolation and in combination. The failure paths grow as O(n²) with the number of services.&lt;/p&gt;
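
&lt;p&gt;The quadratic claim is easy to verify by counting. If step i of n sequential steps can fail after steps 1 through i-1 have committed, the distinct compensating actions to design, code, and test number 1 + 2 + ... + (n-1), which is n(n-1)/2:&lt;/p&gt;

```java
// Back-of-envelope check of the quadratic growth claim: a failure at
// step i requires compensating the i-1 steps that already committed,
// so an n-step flow needs 1 + 2 + ... + (n-1) = n(n-1)/2 distinct
// compensating actions.
public class CompensationPaths {
    public static int compensationsFor(int services) {
        return services * (services - 1) / 2;
    }
}
```

&lt;p&gt;For the four services above that is six compensations; at eight services it is twenty-eight, before any of them has been tested in combination.&lt;/p&gt;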

&lt;p&gt;&lt;strong&gt;API contracts and versioning.&lt;/strong&gt; In a monolith, a method signature change is a compiler error caught before deployment. Across services it is a potential production incident. OpenAPI specifications or event schemas in the schema registry. A versioning strategy for deploying new service versions while old ones are still running. Consumer-driven contract tests — an entirely new test layer that did not exist before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Per-service operational overhead — multiplied by four.&lt;/strong&gt; Each service needs its own CI/CD pipeline, its own database (shared databases between services defeat the architectural purpose), its own health checks, its own deployment configuration, its own secret management, and its own database migration strategy.&lt;/p&gt;

&lt;p&gt;None of this is business logic. All of it requires expertise to operate correctly. In practice it means a platform or infrastructure team to own the broker and deployment infrastructure, application developers who understand distributed systems failure modes rather than just domain logic, and an ongoing operational load that scales with the number of services — not with the complexity of the domain.&lt;/p&gt;




&lt;h2&gt;
  
  
  The cost, made visible
&lt;/h2&gt;

&lt;p&gt;The following table makes the prospective cost explicit — before the first line of business logic is written, and before five years have passed.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concern&lt;/th&gt;
&lt;th&gt;Monolith&lt;/th&gt;
&lt;th&gt;Microservices&lt;/th&gt;
&lt;th&gt;What the split actually costs&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Atomicity and failure&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rollback on failure&lt;/td&gt;
&lt;td&gt;Database transaction. One word.&lt;/td&gt;
&lt;td&gt;Saga pattern. Hundreds of lines.&lt;/td&gt;
&lt;td&gt;Design, code, and test a compensating action per service per failure path. O(n²) paths for n services.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Partial failure state&lt;/td&gt;
&lt;td&gt;Impossible. Transaction is atomic.&lt;/td&gt;
&lt;td&gt;Permanent possibility. Must be designed around.&lt;/td&gt;
&lt;td&gt;Order exists, invoice does not. Every consumer of your data now reasons about completeness. Forever.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Consistency&lt;/td&gt;
&lt;td&gt;Immediate. Guaranteed.&lt;/td&gt;
&lt;td&gt;Eventual. A property you live with.&lt;/td&gt;
&lt;td&gt;Not solvable with better tooling. A structural consequence of the boundary choice.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Infrastructure before business logic&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Message broker&lt;/td&gt;
&lt;td&gt;None.&lt;/td&gt;
&lt;td&gt;Kafka or RabbitMQ. 3-node cluster.&lt;/td&gt;
&lt;td&gt;Topic design, schema registry, retention policy, consumer group monitoring, local dev setup.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Saga / orchestration&lt;/td&gt;
&lt;td&gt;None.&lt;/td&gt;
&lt;td&gt;Axon / Temporal / hand-rolled plus a fifth service.&lt;/td&gt;
&lt;td&gt;Durable saga state, crash recovery, step tracking. An entire service that owns zero domain concepts.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Distributed tracing&lt;/td&gt;
&lt;td&gt;One stack trace.&lt;/td&gt;
&lt;td&gt;Jaeger / Zipkin plus correlation IDs everywhere.&lt;/td&gt;
&lt;td&gt;Every service propagates trace IDs in headers, event envelopes, and log output. Log aggregation stack on top.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Idempotency&lt;/td&gt;
&lt;td&gt;N/A. No message redelivery; a failed transaction rolls back and the caller retries cleanly.&lt;/td&gt;
&lt;td&gt;Required in every service. Always.&lt;/td&gt;
&lt;td&gt;Deduplication store per service. Idempotency key strategy per event. Written, maintained, tested forever.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API contracts&lt;/td&gt;
&lt;td&gt;Compiler. Free.&lt;/td&gt;
&lt;td&gt;OpenAPI / schema registry plus versioning strategy.&lt;/td&gt;
&lt;td&gt;Consumer-driven contract tests. A breaking change is a production incident. Another test layer that did not exist.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Per-service operational overhead&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CI/CD pipelines&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4+&lt;/td&gt;
&lt;td&gt;Independent versioning, deployment windows, rollback strategies. Coordination overhead on every release.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Databases&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4+&lt;/td&gt;
&lt;td&gt;Independent migration strategies per service. Schema changes coordinated across deployment boundaries.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Local dev environment&lt;/td&gt;
&lt;td&gt;One process.&lt;/td&gt;
&lt;td&gt;4+ services plus broker plus docker-compose.&lt;/td&gt;
&lt;td&gt;Onboarding measured in days, not hours. Partial environments produce integration bugs that only appear in the full stack.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Debuggability and sustainability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debug a production failure&lt;/td&gt;
&lt;td&gt;One stack trace. One log stream.&lt;/td&gt;
&lt;td&gt;Reconstruct a timeline across 4+ log streams.&lt;/td&gt;
&lt;td&gt;Clock skew between services. Correlation IDs that were not propagated. Broker lag that shifted event order.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Bug surface&lt;/td&gt;
&lt;td&gt;Domain complexity only.&lt;/td&gt;
&lt;td&gt;Domain multiplied by accidental complexity.&lt;/td&gt;
&lt;td&gt;Each async handoff is a new class of timing bug. Compensating paths run rarely, are tested inadequately, and fail in production.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Codebase legibility&lt;/td&gt;
&lt;td&gt;Domain is the code.&lt;/td&gt;
&lt;td&gt;Domain distributed across event schemas and API contracts.&lt;/td&gt;
&lt;td&gt;"What does order creation actually do?" has no single answer. The behaviour is implicit in subscriptions across four codebases.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Maintenance cost over time&lt;/td&gt;
&lt;td&gt;Proportional to domain complexity.&lt;/td&gt;
&lt;td&gt;Domain plus accidental complexity.&lt;/td&gt;
&lt;td&gt;Accidental complexity does not reduce over time. Services accumulate. Contracts fossilise. Framework versions break. Teams leave.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Scaling&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Unit of scale&lt;/td&gt;
&lt;td&gt;The atomic action. Run more instances.&lt;/td&gt;
&lt;td&gt;Individual steps — which are not the bottleneck.&lt;/td&gt;
&lt;td&gt;Invoice creation and shipment planning are simple writes. They are not traffic hotspots. The decomposition solves a problem that does not exist.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Infrastructure to scale&lt;/td&gt;
&lt;td&gt;Load balancer plus N identical instances.&lt;/td&gt;
&lt;td&gt;Everything above, multiplied.&lt;/td&gt;
&lt;td&gt;All the saga, broker, and tracing infrastructure exists solely to reconstruct what the database transaction provided for free.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The scaling argument that is rarely examined closely
&lt;/h2&gt;

&lt;p&gt;The case for microservices typically rests on scalability. You can scale the parts that need scaling independently, rather than scaling everything together.&lt;/p&gt;

&lt;p&gt;This sounds rational until you ask what actually needs scaling.&lt;/p&gt;

&lt;p&gt;In an order creation flow, the bottleneck is almost never the invoice logic or the shipment record creation. These are simple writes that happen once per order. The thing that needs scaling is the number of concurrent orders being created — the atomic action as a whole.&lt;/p&gt;

&lt;p&gt;Scaling the atomic action requires a load balancer and N identical instances of one deployed artefact. Each instance connects to one database. The database handles concurrent transactions reliably, as it has for decades. The infrastructure cost is a fraction of the distributed alternative. The operational complexity is a fraction. The failure surface is a fraction.&lt;/p&gt;

&lt;p&gt;A well-modelled core domain is not large. This is not an aspiration — it is what remains when accidental complexity is removed. The essential logic of order-to-shipment fits comfortably in one process, understood by one team. What makes codebases large is not the domain. It is frameworks imposing their structure on domain code, duplication caused by unclear boundaries, accidental complexity accreting around poor models, and boilerplate generated by architectural patterns that do not fit the problem.&lt;/p&gt;

&lt;p&gt;Strip those out and the core is small, fast to deploy, cheap to run, and trivially scalable as a unit.&lt;/p&gt;

&lt;p&gt;The industry asked "how do we scale the parts?" before asking whether the parts needed to be separate. It then built an entire ecosystem of frameworks, patterns, and operational infrastructure to answer the first question — all solving a decomposition problem that, in most cases, did not need to exist.&lt;/p&gt;




&lt;h2&gt;
  
  
  When distribution is the right answer — and when the arguments do not hold
&lt;/h2&gt;

&lt;p&gt;Distribution has genuine use cases. They are narrower than the industry's adoption rate suggests, and several of the most commonly cited justifications do not survive close examination.&lt;/p&gt;

&lt;h3&gt;
  
  
  Physical and regulatory constraints
&lt;/h3&gt;

&lt;p&gt;The standard argument: if data must live in a specific jurisdiction for regulatory reasons, you need a distributed architecture.&lt;/p&gt;

&lt;p&gt;The better answer: replicate the full domain logic into that regulatory cell. The atomic action stays atomic. The cell — with its own deployment, its own database, its own complete stack — is the unit of distribution. What you do not do is split the domain action across a jurisdictional boundary, routing parts of it between regions. That creates the coordination cost of distribution without the isolation that justified it. The constraint is geographic. The solution is geographic deployment of the whole, not decomposition of the parts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Independent scaling profiles
&lt;/h3&gt;

&lt;p&gt;The standard argument: if one component needs more scale than others, separating it avoids scaling everything unnecessarily.&lt;/p&gt;

&lt;p&gt;The better answer: the cost of splitting a single component out of an otherwise coherent domain action is large, fixed, and permanent — as the table above makes clear. The question is not only "does this component need more scale?" but "does the benefit of isolating its scale exceed the full coordination cost of the split?" In most cases it does not, because the component that appears to need independent scaling is rarely the actual bottleneck under measurement, and because scaling the whole is cheaper than the industry assumes. If there is no compelling reason not to scale everything, scale everything. Simplicity requires a reason to abandon it, not a reason to adopt it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Organisational boundaries
&lt;/h3&gt;

&lt;p&gt;The standard argument: Conway's Law — systems tend to mirror the communication structures of the organisations that build them. If teams are separated, align the architecture accordingly.&lt;/p&gt;

&lt;p&gt;Conway's Law is a useful observation in retrospect. It describes what tends to happen when architecture is not deliberately managed. It is not a prescription, and it should never be used as one. Using it as a justification for a service boundary is encoding organisational structure permanently into the system — and paying the technical cost of that boundary in every sprint, by every developer, for the lifetime of the product.&lt;/p&gt;

&lt;p&gt;The cost of an artificially introduced service boundary compounds over years. The cost of reorganising a team is paid once. The engineering should define the ideal architecture with as few compromises as possible. The organisation should be arranged to serve that architecture, not the other way around. This pays dividends — perhaps not in year one, but reliably by year five, and every year thereafter. Teams that succeed with microservices often do so despite the architecture, through heroic platform investment and operational discipline. The patterns can be made to work. The deeper question is whether they were the right starting point for the domain in front of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Genuinely independent domain concepts
&lt;/h3&gt;

&lt;p&gt;This is the one case where distribution has a legitimate technical argument — and even here, the bar should be high.&lt;/p&gt;

&lt;p&gt;Domain concepts are genuinely independent when they have no transactional relationship with each other. Not merely different in name or ownership, but different in the sense that one completing or failing has no bearing on the integrity of the other. A recommendation engine and a payment processor are genuinely independent. An order and its invoice are not.&lt;/p&gt;

&lt;p&gt;The strongest version of this argument comes from systems with a fundamentally asymmetric workload — a platform where reads vastly outnumber writes, where the read path has no transactional requirement, and where the scale difference between the two is large and proven. A social platform where the overwhelming majority of requests are reads with no transactional requirements is a system where isolating the read path separates two genuinely different kinds of work with different resource profiles and different failure tolerances.&lt;/p&gt;

&lt;p&gt;But this is a workload argument supported by measurement, not an architectural principle applied by default. It applies to a small fraction of the systems that have adopted microservices, and it should be reached by evidence, not anticipated in advance.&lt;/p&gt;




&lt;h2&gt;
  
  
  Three tests before splitting a boundary
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The rollback test.&lt;/strong&gt; If this action fails halfway through, what does recovery cost? If the answer is a database rollback, the action belongs inside a single boundary. If the answer is a coordinated sequence of compensating calls across multiple services, each of which can itself fail, ask whether that coordination cost was consciously accepted — or simply inherited from a pattern that was never examined.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The scaling test.&lt;/strong&gt; Which specific step in this action is the measured bottleneck under current or near-term load? Not the theoretical bottleneck. The step that is demonstrably the constraint today, under real conditions. If the answer is none of them individually, the action does not need decomposition. It needs more instances of the whole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The standup test.&lt;/strong&gt; In the daily standup, what language does the team use? If the items are about services, pipelines, brokers, schemas, and migrations — the team is working on accidental complexity. If the items are about domain concepts — what an order means, who owns a responsibility, what a rule actually requires — the team is working on the right problems. You do not need a cost model to apply this test. You need one conversation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measuring it in a system you already have
&lt;/h2&gt;

&lt;p&gt;If these tests apply prospectively, they also apply to systems already in production. A short audit reveals more than any architecture review.&lt;/p&gt;

&lt;p&gt;Count the sagas. How many business capabilities require a saga or orchestrator to complete? Each one is a boundary crossing that converted a rollback into a coordination problem. The number tells you how much of the domain is currently in transit.&lt;/p&gt;

&lt;p&gt;Measure the standup ratio. Over two weeks, track how many standup items are about infrastructure, services, pipelines, and schemas versus domain concepts, rules, and business questions. The ratio is a direct reading of how much of the team's daily energy is absorbed by accidental complexity.&lt;/p&gt;

&lt;p&gt;Trace a failure end to end. Pick a recent production incident. Count the number of log streams, services, and correlation IDs required to reconstruct what happened. That reconstruction cost — in time, in tooling, in expertise — is paid on every incident. It is the maintenance tax of the boundary choices made at design time.&lt;/p&gt;

&lt;p&gt;Apply the migration heuristic. A well-modelled monolith can be split later, when measurement proves a specific boundary is warranted. A distributed system can rarely be reassembled cheaply once the boundaries have fossilised into contracts, event schemas, and separate team ownership. Optionality has value. The simpler starting point preserves it. The complex starting point spends it immediately, in exchange for flexibility that may never be needed.&lt;/p&gt;




&lt;h2&gt;
  
  
  First principles
&lt;/h2&gt;

&lt;p&gt;There is nothing novel in the argument this article makes. It is an application of principles that engineering has held for as long as engineering has existed.&lt;/p&gt;

&lt;p&gt;Minimise the moving parts. Every component that can fail will eventually fail. Every interface between components is a surface for misunderstanding, for version drift, for timing errors that only appear under conditions nobody anticipated. The system with fewer moving parts is not the primitive system — it is the disciplined one.&lt;/p&gt;

&lt;p&gt;Solve the problem in front of you. The system that is over-engineered for scale it has not reached, for distribution it does not need, for independence that its domain does not have — that system is not prepared for the future. It is burdened by it. It is paying, today and every day, for problems it may never have.&lt;/p&gt;

&lt;p&gt;Prefer reversibility. The decision that can be undone when it proves wrong is worth more than the decision that cannot, regardless of how confident you are at the time. A monolith that can be split later, when the evidence demands it, is a better starting point than a distributed system that cannot be reassembled after the evidence proves the split was premature.&lt;/p&gt;

&lt;p&gt;Measure before you commit. The incomparability problem — the fact that the alternative architecture was never built, so its cost can never be directly compared — cannot be fully solved. But its worst effects can be mitigated by demanding evidence before committing to complexity: evidence of the scaling requirement, evidence of the domain independence, evidence that the coordination cost is worth the benefit it buys.&lt;/p&gt;

&lt;p&gt;The software industry has a habit of adopting solutions before fully understanding the problems they were designed to solve, and then normalising the cost of those solutions until the cost becomes invisible. The distributed systems patterns that dominate today were developed by organisations with genuine physical distribution requirements, at a scale that only a small fraction of systems ever reach. They solved real problems. They are also expensive, complex, and failure-prone in ways that compound over time and rarely appear on the original architectural diagram.&lt;/p&gt;

&lt;p&gt;The question to ask, before any architectural decision, is not "how do others solve this?" It is "what does this problem actually require?" Start from first principles. Follow the cost. Build the simplest thing that genuinely solves the problem in front of you. Treat every boundary crossing — every point where a database rollback becomes a distributed coordination problem — as a commitment with a known, permanent price tag.&lt;/p&gt;

&lt;p&gt;Because it will cost exactly that. Invisibly, continuously, and for as long as the system runs.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>microservices</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>AI can build anything except an understanding of what you are building</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Mon, 04 May 2026 08:28:51 +0000</pubDate>
      <link>https://dev.to/leonpennings/ai-can-build-anything-except-an-understanding-of-what-you-are-building-30le</link>
      <guid>https://dev.to/leonpennings/ai-can-build-anything-except-an-understanding-of-what-you-are-building-30le</guid>
      <description>&lt;p&gt;There is a distinction in software development that the industry has spent twenty years pretending doesn't exist. It is the distinction between building software and understanding what you are building. The first is implementation. The second is engineering. They are not the same thing, they do not require the same skills, and conflating them is the single most expensive mistake a development organisation can make.&lt;/p&gt;

&lt;p&gt;The mistake is now being turbocharged by AI. But to understand why, you first need to understand what was already broken.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part One: The 85% Nobody Talks About
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Two Kinds of Work
&lt;/h3&gt;

&lt;p&gt;Ask most developers how long it takes to build a feature and they will give you an implementation estimate. How long to write the code, wire up the endpoints, get the tests green. That estimate — the part where fingers meet keyboard — accounts for roughly 10 to 15 percent of what building good software actually requires.&lt;/p&gt;

&lt;p&gt;The other 85 to 90 percent is structuring. Understanding what the system is. Identifying where things belong, not just where they are needed. Naming the concepts correctly. Finding the natural boundaries in the domain. Modelling the business so that the code expresses it rather than merely approximating it.&lt;/p&gt;

&lt;p&gt;This is the work that determines whether a system is maintainable in year five, extensible in year seven, or quietly replaced shortly after.&lt;/p&gt;

&lt;p&gt;Most systems are being replaced by year seven. The 85% was skipped.&lt;/p&gt;

&lt;h3&gt;
  
  
  Three Approaches, One Honest Assessment
&lt;/h3&gt;

&lt;p&gt;There are essentially three ways to approach building a system, and only one of them qualifies as engineering.&lt;/p&gt;

&lt;p&gt;The first is upfront design. You model the domain completely before writing code. The risk is rigidity — the model is fixed before the code has had a chance to reveal its gaps. Reality has a way of not fitting the diagram.&lt;/p&gt;

&lt;p&gt;The second is evolutionary modelling. You begin with a hypothesis about the domain and use code as a feedback instrument. The model and the implementation refine each other continuously. An hour into implementation the starting model may have changed dramatically — a new concept discovered, a responsibility reassigned, a boundary redrawn. That is not failure. That is the process working. The model remains the authority throughout, but it is a living authority — responsive and correctable, never frozen.&lt;/p&gt;

&lt;p&gt;The third approach is template filling. You select a framework. You receive a user story, which functions as a work order. You find the place in the template where this kind of story goes. You implement it there. You close the story.&lt;/p&gt;

&lt;p&gt;There is no model in this process. There is no conceptual centre. The framework is the authority, and the code documents what the framework was configured to do. Frameworks turned engineers into assembly line workers, and the Singleton Paradox — the impossibility of comparing the system built with approach A against the system built with approach B, because the second was never built — hid the cost. This is not a different kind of design. It is the absence of design, wearing design's clothes.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Model as Construction Tool, Discovery Tool, and Filter
&lt;/h3&gt;

&lt;p&gt;The perception is that domain modelling is slow — that it delays visible output while the team thinks instead of ships. The reality is the opposite.&lt;/p&gt;

&lt;p&gt;A domain modelling session is twenty to thirty minutes at a whiteboard, followed by code that shapes the actual business interactions. This is not a prototype or a spike. It is production code — the domain coming into existence, business logic finding its natural form. By the end of the first day there is working code that expresses what the business does. The template developer, meanwhile, is configuring YAML, wiring up dependency injection, setting up repositories. The motion looks productive. Not a line of it describes the business.&lt;/p&gt;

&lt;p&gt;This is the construction side of what a domain model does. But it has two further functions that are equally important.&lt;/p&gt;

&lt;p&gt;It is a discovery tool. When implementation is hard — when a concept resists being placed, when a responsibility has no natural home — that difficulty is information. The model is telling you something is missing, or something is wrong. A trial-and-error developer experiences this friction as a local problem to solve locally. A modelling developer experiences it as the domain asking to be understood more precisely. The response is not a workaround. It is a model refinement.&lt;/p&gt;

&lt;p&gt;It is also a filter. If a behaviour cannot be fitted naturally into the model — if no object has a clear reason to own it, if it contradicts what the model already captures — that resistance is a signal. Either the model needs a new concept, or the behaviour itself does not belong in the system. The model's inability to absorb something cleanly is not a failure of the model. It is the model doing its job, filtering out accidental complexity dressed as a requirement. If you cannot fit behaviour into the model, you probably do not need it.&lt;/p&gt;

&lt;p&gt;A domain model is simultaneously the thing you build with, the instrument that tells you what you are missing, and the filter that tells you what does not belong. The industry has largely stopped building them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Essential Complexity vs Accidental Complexity in Code
&lt;/h3&gt;

&lt;p&gt;The essential/accidental distinction from Fred Brooks is not just an architectural principle. It applies at the level of every object, every responsibility, every line of code — and getting it right at that level is what separates systems that age well from systems that don't.&lt;/p&gt;

&lt;p&gt;Consider a practical example. When building a system that communicates with external services, the essential complexity is what those communications are — what a request contains, what posting requires, what the business needs to express. The accidental complexity is how those communications happen — the transport protocol, the connection handling, the session management, the specific library in use this year.&lt;/p&gt;

&lt;p&gt;Model the responsibilities first. A client object owns the mechanics of communication. A request abstraction defines what communication content looks like. A posting variant adds what posting specifically requires. These are modelled as business responsibilities, technology agnostic. The how — whether the underlying transport is HTTP, MQ, or a database — sits entirely behind those responsibilities, invisible to everything that depends on them.&lt;/p&gt;

&lt;p&gt;The consequence is significant. The technology can change completely — from web service to message queue to direct database write — without touching a single line of the business logic that constructs and uses those requests. The essential complexity is stable. The accidental complexity is genuinely replaceable.&lt;/p&gt;

&lt;p&gt;This is not an interface trick. It is what happens when you model responsibility first and let technology serve the model, rather than letting technology shape what responsibilities are possible. The difference only becomes visible when the technology needs to change — which it always does, eventually. At that point, a system where accidental complexity was kept genuinely separate from essential complexity absorbs the change quietly. A system where the framework grew roots into the business logic requires the business logic to change when the framework changes. The technology that was supposed to serve the domain ends up constraining it instead.&lt;/p&gt;
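&lt;p&gt;One way to sketch that layering in Java, using invented names (Request, PostingRequest, Client, HttpClient, MqClient) and stand-in transports, looks like this. It is a sketch of the responsibility split, not a real integration: the point is only that business code depends on the responsibilities, never on the transport.&lt;/p&gt;

```java
// A minimal sketch of the responsibility-first layering described above.
// All names are illustrative, and the transports are stand-ins, not real I/O.

interface Request {                       // essential: what a communication contains
    String body();
}

class PostingRequest implements Request { // essential: what posting specifically requires
    private final String account;
    private final int amount;
    PostingRequest(String account, int amount) {
        this.account = account;
        this.amount = amount;
    }
    public String body() {
        return account + ":" + amount;
    }
}

interface Client {                        // owns the mechanics of communication
    String send(Request request);
}

class HttpClient implements Client {      // accidental: this year's transport choice
    public String send(Request request) {
        return "HTTP 200 for " + request.body();   // stand-in for real transport code
    }
}

class MqClient implements Client {        // the transport can change completely...
    public String send(Request request) {
        return "MQ ack for " + request.body();     // ...without touching any Request code
    }
}
```

&lt;p&gt;Swapping HttpClient for MqClient changes nothing in the code that constructs and uses requests, which is exactly the stability the essential/accidental split is meant to buy.&lt;/p&gt;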

&lt;h3&gt;
  
  
  The User Story as Work Order
&lt;/h3&gt;

&lt;p&gt;Something specific happened to the user story as agile methodology was industrialised. It began as an invitation — a prompt to have a conversation with a domain expert, to understand a piece of the business well enough to model it. It became a specification. Then a work order. Then a checkbox.&lt;/p&gt;

&lt;p&gt;In its current form the user story arrives at the developer already closed. The conversation with the domain expert happened upstream, in refinement, in planning, in the product owner's head. The developer receives a summary and works from that. The question the developer asks is not "what is this telling me about the domain" but "where in the template does this go."&lt;/p&gt;

&lt;p&gt;The diagnosis is visible in what developers say when asked where the hard part of a system is. A template developer describes framework complexity — which abstraction to use, which pattern applies, how to configure the integration. A modelling developer describes domain complexity — what the business is actually doing here, what concept is missing, what existing object is being asked to carry weight it was not designed for.&lt;/p&gt;

&lt;p&gt;These are not the same question. They do not produce the same system. And over seven years, the difference between the systems they produce is not marginal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where Things Belong vs Where They Are Needed
&lt;/h3&gt;

&lt;p&gt;The most consequential difference between template filling and domain modelling is not visible in the first sprint. It becomes visible in maintenance, and it compounds with every passing year.&lt;/p&gt;

&lt;p&gt;A template developer fixes problems where they occur. A table misbehaves on page B, so page B gets adjusted. The fix works. The story is closed. What is not visible is what has just happened structurally: page B now owns part of the table's behaviour. The table behaves one way on page A and another way on page B, and both pages carry part of the responsibility for what the table does. The next developer to touch either page must understand both. Maintenance has doubled, invisibly, for that one component.&lt;/p&gt;

&lt;p&gt;A modelling developer asks a different question: what owns this behaviour? The answer is the table itself. The table owns its own presentation. The page owns its usage of the table. A fix to the table propagates everywhere the table is used, because behaviour lives in the component, not in the pages that consume it.&lt;/p&gt;
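&lt;p&gt;A minimal sketch of that ownership split, with hypothetical Table and Page classes and a stand-in presentation rule:&lt;/p&gt;

```java
// A contrived sketch of component-owned behaviour. Table and Page are
// hypothetical names; the presentation rule is a stand-in.

class Table {
    // The table owns its own presentation. A fix here propagates to every
    // page that uses the table, because the behaviour has exactly one home.
    String renderCell(String value) {
        if (value == null) {
            return "-";                   // the rule lives in the component
        }
        return value.trim();
    }
}

class Page {
    private final Table table;
    Page(Table table) {
        this.table = table;
    }
    // The page owns its usage of the table, not the table's behaviour.
    // No page carries a private copy of the presentation rule.
    String show(String value) {
        return "[" + table.renderCell(value) + "]";
    }
}
```

&lt;p&gt;A change to renderCell takes effect on every page at once; no page has to be found, understood, and patched separately.&lt;/p&gt;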

&lt;p&gt;This is not an aesthetic preference. It is the mechanical difference between maintenance costs that stay flat and maintenance costs that compound.&lt;/p&gt;

&lt;p&gt;Multiply this pattern across a codebase over five years and you have the prototype in production — a system held together with toothpicks, paperclips, and glue, where every workaround is load-bearing and every change requires understanding not what the system is, but what it has become.&lt;/p&gt;

&lt;p&gt;The difference between fixing the problem where it occurs and fixing it where it belongs is the difference between prototype code and production code. At scale, it is the difference between a system that costs the same to maintain in year seven as it did in year one, and a system that is already being rewritten.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Contradiction Problem
&lt;/h3&gt;

&lt;p&gt;There is a specific consequence of building without a domain model that becomes critical at scale. It is underappreciated, and AI makes it significantly worse.&lt;/p&gt;

&lt;p&gt;A domain model is not just a design preference. It is a contradiction-detection mechanism.&lt;/p&gt;

&lt;p&gt;When business logic has a conceptual centre — a well-named domain object that owns its own behaviour — contradictory rules become visible. If two requirements make incompatible demands on the same object, you encounter the conflict when you try to model it. The structure surfaces the problem before it reaches production.&lt;/p&gt;

&lt;p&gt;When business logic is scattered — across service methods, event handlers, configuration files — contradictions are invisible until they collide in production. Two requirements can contradict each other completely and coexist undetected for months, because there is no common reference point that would make the conflict visible. The system implements both rules, resolves the conflict arbitrarily at runtime, and produces behaviour that nobody designed and nobody can explain.&lt;/p&gt;
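&lt;p&gt;A toy illustration of the difference, using an invented DiscountPolicy object: because the rule has a single owner, an incompatible second demand collides at modelling time rather than coexisting silently until runtime.&lt;/p&gt;

```java
// Invented names and rules, for illustration only. When a rule has a single
// owner, a second, incompatible demand on that owner collides visibly.

class DiscountPolicy {
    private final int capPercent;
    DiscountPolicy(int capPercent) {
        this.capPercent = capPercent;
    }
    // Requirement A: discounts never exceed the cap.
    // Requirement B ("loyal customers always get 25 percent") cannot be
    // added here without visibly contradicting requirement A whenever the
    // cap is lower -- the single owner forces the conflict into the open.
    int apply(int requestedPercent) {
        if (requestedPercent > capPercent) {
            throw new IllegalStateException("discount contradicts the cap");
        }
        return requestedPercent;
    }
}
```

&lt;p&gt;Scatter the same two rules across a service method and an event handler and both can ship; the conflict is then resolved arbitrarily by whichever code path runs last.&lt;/p&gt;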

&lt;p&gt;CQRS, microservices, and event-driven architecture were proposed, in part, as responses to the complexity that accumulates without a domain model. The tragedy is that they add architectural elaboration without supplying the missing conceptual centre. They do not make contradictions visible. They distribute logic across more moving parts, which makes contradictions harder to see, not easier. The problem is obscured by the solution.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part Two: AI Became the Framework
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Same Pattern, Faster
&lt;/h3&gt;

&lt;p&gt;Which brings us to the present moment, and to the claim that AI is transforming software development.&lt;/p&gt;

&lt;p&gt;It is. But not in the way most of the conversation assumes.&lt;/p&gt;

&lt;p&gt;There are two ways to think about AI-assisted development, and they map precisely onto the distinction between design and template filling established in part one.&lt;/p&gt;

&lt;p&gt;AI is revolutionary in the sense that you conceive what you want, express it, and something builds it. The implementation barrier has been dramatically lowered. Code that would have taken days takes minutes. This is real and significant.&lt;/p&gt;

&lt;p&gt;But AI-assisted development is also pure template filling. You are not modelling. You are instructing. The output is code that documents what the prompt said, with AI as the framework. The assembly is faster, the templates are more flexible, the results are more immediately impressive. The absence of a modelling process is identical.&lt;/p&gt;

&lt;p&gt;And it inherits both failure modes simultaneously.&lt;/p&gt;

&lt;p&gt;From upfront design, it inherits rigidity at the point of prompting. The model — such as it is — is fixed in the prompt. The code cannot talk back, because you are not in dialogue with it. You are receiving output. The feedback loop that makes evolutionary modelling work — where implementation friction becomes structural insight — is broken. The AI absorbs the friction. You never feel it. You never learn from it.&lt;/p&gt;

&lt;p&gt;From template filling, it inherits the absence of a conceptual centre. The logic lives in the prompts, scattered and unreconciled, exactly as it lived in the fat services and event handlers before it. Except now it is even less visible, because a service class at least had a name and a location in a codebase. A prompt has neither.&lt;/p&gt;

&lt;p&gt;The framework abstracted the developer from the infrastructure. AI abstracts the developer from the code. Each layer of abstraction makes "it works" faster to achieve and the absence of a domain model harder to see.&lt;/p&gt;

&lt;p&gt;What the industry is currently calling "AI produces spaghetti" is not a new problem. It is framework templating amplified. The spaghetti was already there. AI makes it faster to produce, more voluminous, and more convincing — because it arrives in clean syntax with passing tests. The structural absence underneath looks better than ever.&lt;/p&gt;

&lt;p&gt;AI did not replace the framework. AI became the framework. And it inherited the same problem the framework always had — it can build anything except an understanding of what you are building.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Maintenance Proposition Does Not Hold
&lt;/h3&gt;

&lt;p&gt;The proposition being made for AI-assisted maintenance is that rewrites are now cheap, so structural problems do not accumulate the same way. This deserves examination.&lt;/p&gt;

&lt;p&gt;A rewrite can reproduce the syntax of a system faster than ever before. What it cannot do is verify that the rewrite is correct in the only sense that matters for a business system — that it accurately represents what the business actually does. Correctness here is not syntactic. It is semantic. It requires a reference against which to check the implementation.&lt;/p&gt;

&lt;p&gt;The reference is the domain model. And the domain model is exactly what was never built.&lt;/p&gt;

&lt;p&gt;So the rewrite, however fast, produces new code that implements the same contradictions, the same scattered logic, the same implicit assumptions. It is not a fix. It is a reprint. The toothpicks are replaced with newer toothpicks. The paperclips are shinier. The structure is identical.&lt;/p&gt;

&lt;p&gt;Consider the contradiction problem at scale. Two prompts with conflicting business logic — you will probably spot it. Twenty — possibly. Eighty — almost certainly not. There is no structure that makes the contradiction visible. A rewrite from those eighty prompts does not resolve the contradiction. It reproduces it in fresh syntax. And in another cycle, the same conversation about rewriting will begin again, for the same undiagnosed reasons.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Disappears and What Doesn't
&lt;/h3&gt;

&lt;p&gt;Frameworks will likely disappear, and probably sooner than the industry expects. Hibernate exists because writing database session management by hand is tedious and error-prone for humans. AI has no such limitation. It can write the queries, manage the sessions, handle the mapping — contextually, specifically, without a generic abstraction layer designed for every possible use case. The framework was a productivity tool for human limitations. As those limitations are removed, the justification for the framework dissolves. This is not a loss. Frameworks were always accidental complexity — complexity introduced by tools rather than by the problem itself.&lt;/p&gt;

&lt;p&gt;But the domain model does not disappear with the framework. It becomes more critical. Because the framework, for all its costs, at least imposed some structure. Generic, clumsy, domain-agnostic structure — but structure nonetheless. Without it, and without a domain model, the only thing standing between a system and total architectural entropy is the conceptual model in the developer's head.&lt;/p&gt;

&lt;p&gt;Or its absence.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Skill That Cannot Be Prompted
&lt;/h3&gt;

&lt;p&gt;The ability to model a domain — to hold a structural representation in your head, refine it through implementation, and express it in code that means something beyond its own execution — does not appear to be a skill that AI can supply or that prompting can replicate.&lt;/p&gt;

&lt;p&gt;It appears to correlate with a specific kind of spatial reasoning: the ability to see a three-dimensional object from its two-dimensional components, to hold structure in the mind and manipulate it without losing the whole. Developers who have this skill behave differently when they encounter implementation friction. Where a template developer sees a local problem to solve locally — a fix applied where the problem occurs rather than where it belongs — a modelling developer sees structural information. The friction is the domain asking to be understood more precisely. The response is not a workaround. It is a model refinement.&lt;/p&gt;

&lt;p&gt;You cannot prompt your way to that response. The prompt eliminates the friction. And the friction was the signal.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Only Honest Measure
&lt;/h2&gt;

&lt;p&gt;There is a simple diagnostic for whether a system was built or merely assembled. Apply it after seven years.&lt;/p&gt;

&lt;p&gt;Is maintenance getting cheaper or more expensive? A well-modelled system gets cheaper — the model matures, the team internalises it, changes become faster as understanding deepens. A template-filled system gets more expensive, as accidental complexity compounds and each change must navigate the accumulated residue of earlier decisions made without a model.&lt;/p&gt;

&lt;p&gt;Are new requirements getting faster or slower to absorb? A well-modelled domain accelerates — each addition deepens understanding and reveals where the next extension naturally fits. A system without a conceptual centre slows — each requirement negotiates with the existing tangle rather than extending a coherent structure.&lt;/p&gt;

&lt;p&gt;Has the rewrite conversation started?&lt;/p&gt;

&lt;p&gt;The rewrite is not a sign of business ambition or technical progress. It is the bill arriving for the 85% that was skipped. And it will reproduce the conditions that made it necessary, because the organisation never learned what actually went wrong. The diagnosis will be "technical debt" or "legacy architecture." Rarely will it be accurate: no domain model was ever built, and without one, the rewrite begins the same accumulation from sprint one.&lt;/p&gt;

&lt;p&gt;AI makes none of this cheaper in the long run. It makes the first two years cheaper and the subsequent five more expensive, because the prototype is produced faster and looks more convincing, and the discovery that it is a prototype comes later and costs more.&lt;/p&gt;

&lt;p&gt;The 85% cannot be prompted. It cannot be templated. It cannot be abstracted away by a sufficiently powerful framework, however intelligent that framework becomes.&lt;/p&gt;

&lt;p&gt;It requires understanding what you are building.&lt;/p&gt;

&lt;p&gt;That has always been the hard part. It remains the hard part. And the industry's increasing sophistication at avoiding it is not progress.&lt;/p&gt;

&lt;p&gt;It is a more expensive way of arriving at the same rewrite conversation, on roughly the same schedule, having learned roughly the same nothing.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>java</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to Test Whether Your Software Solution Actually Fits The Problem</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 28 Apr 2026 06:04:11 +0000</pubDate>
      <link>https://dev.to/leonpennings/how-to-test-whether-your-software-solution-actually-fits-the-problem-c85</link>
      <guid>https://dev.to/leonpennings/how-to-test-whether-your-software-solution-actually-fits-the-problem-c85</guid>
      <description>&lt;p&gt;Every application is built once.&lt;/p&gt;

&lt;p&gt;There is no second version of the same system, built with different architectural assumptions, run in parallel for a decade, and then compared on maintenance cost, team size, and requirement absorption speed. The alternative is never built. The counterfactual never exists. This is the Singleton Paradox applied to software: because each system is unique, there is no external reference point against which to judge whether it is a good solution to its problem — or merely the only solution anyone bothered to build.&lt;/p&gt;

&lt;p&gt;This matters more than it might appear. It means that the quality of an architectural decision can never be measured by comparison. You cannot park the well-modeled system next to the poorly modeled one and read off the difference. The poorly modeled system is the only one that exists. So when it becomes expensive to maintain, slow to change, and eventually impossible to extend, those outcomes get attributed to the problem — the domain was complex, the requirements changed, the business grew — rather than to the solution. The solution is never put on trial, because there is nothing to try it against.&lt;/p&gt;

&lt;p&gt;The Singleton Paradox does not just make good architecture hard to prove. It makes bad architecture hard to see. The absence of contrast is not neutral. It actively shapes what gets treated as normal. Rising maintenance costs are normal. Growing teams are normal. Slowing feature velocity is normal. Rewrites every seven to ten years are normal. None of this is normal in the sense of being inevitable. All of it is normal in the sense of being what happens when accidental complexity (Fred Brooks' term for the complexity introduced by tools and decisions rather than by the problem itself) compounds over time, and when there is no alternative visible to suggest it could be otherwise.&lt;/p&gt;

&lt;p&gt;This creates a specific and solvable problem. If external comparison is unavailable, the only honest measure of whether a system is a good fit for its problem is internal. Not how it compares to another system that was never built, but how it behaves against time. Does it get easier or harder to operate? Does it get cheaper or more expensive to change? Does it remain stable as the domain evolves, or does it accumulate fragility with each passing year?&lt;/p&gt;

&lt;p&gt;Those questions have answers. And the answers, taken together, constitute the only reliable verdict on whether the solution fit the problem.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Ten-Year Cost Test
&lt;/h2&gt;

&lt;p&gt;That internal measure can be made concrete. The Ten-Year Cost Test is a diagnostic any organisation can apply to its own systems — not a comparison against an alternative that was never built, but a set of questions about whether the current architecture is winning or losing against time. The threshold of ten years is not arbitrary. A system that cannot survive a decade without a rewrite has not been maintained; it has been replaced. And replacement, however it gets framed, is the system announcing that it was not a good fit for the problem it was built to solve.&lt;/p&gt;

&lt;p&gt;The test is simple. After ten years in production, a well-designed system should satisfy all of the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintenance cost is the same or lower than year one.&lt;/strong&gt; As the domain model matures and the team's understanding deepens, maintenance should become cheaper, not more expensive. The team knows where everything lives. The rules are explicit and localised. A change that took two days in year one should take two hours in year ten, because the model has been refined and the team has internalised it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New requirements are absorbed faster as the system matures.&lt;/strong&gt; A well-modeled domain does not merely keep pace with new understanding — it accelerates. Each addition deepens the team's knowledge of the model and reveals where the next extension naturally fits. When the business learns something new — a new product type, a new regulatory constraint, a new class of customer — the model should be able to absorb it with decreasing effort over time, not constant effort. If absorption speed is flat, the domain model is adequate but not right. If it slows, the model is failing. A well-modeled system gets easier to extend the longer it has been understood.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The team size required to maintain it has not grown significantly.&lt;/strong&gt; This is perhaps the most honest measure of architectural health. A system that requires more people every year to maintain the same functionality is a system where accidental complexity is compounding. Each new developer adds coordination overhead. Each new layer of abstraction requires more people to understand it. A well-modeled system with low accidental complexity should be maintainable by a small, stable team indefinitely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The application is as stable or more stable than it was initially.&lt;/strong&gt; Stability should increase over time as the model matures and edge cases are understood and handled. If the system becomes less stable over time — more incidents, more unexpected interactions, more fragile integrations — accidental complexity is winning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The cost of running it has not grown faster than the business it serves.&lt;/strong&gt; Infrastructure costs, operational overhead, and support burden should scale with business growth, not with architectural entropy. A system that costs significantly more to run in year ten than it did in year one, while serving the same number of users, has a structural problem.&lt;/p&gt;

&lt;p&gt;Apply this test honestly to any system you have worked on for more than five years. The results are rarely comfortable.&lt;/p&gt;
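&lt;p&gt;For concreteness, the test can be read as an all-of checklist. The field names below are a paraphrase of the five criteria, nothing more:&lt;/p&gt;

```java
// A toy encoding of the Ten-Year Cost Test. Field names paraphrase the five
// criteria above; the only rule is that every criterion must hold.

class TenYearCostTest {
    boolean maintenanceCostFlatOrFalling;
    boolean requirementAbsorptionAccelerating;
    boolean teamSizeStable;
    boolean stabilityFlatOrImproving;
    boolean runCostTracksBusinessGrowth;

    // A well-designed system should satisfy every criterion, not a majority.
    boolean passes() {
        boolean[] criteria = {
            maintenanceCostFlatOrFalling,
            requirementAbsorptionAccelerating,
            teamSizeStable,
            stabilityFlatOrImproving,
            runCostTracksBusinessGrowth
        };
        for (boolean met : criteria) {
            if (!met) {
                return false;
            }
        }
        return true;
    }
}
```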




&lt;h2&gt;
  
  
  What the Industry Data Actually Shows
&lt;/h2&gt;

&lt;p&gt;Before examining how the average project scores on this test, an important caveat is necessary. Rigorous longitudinal data comparing domain-first versus framework-first approaches over ten-year periods essentially does not exist in published form. The industry does not measure what it should measure. Deployment frequency, recovery time, and project delivery success rates are tracked. Total cost of ownership relative to architectural approach over a decade is not.&lt;/p&gt;

&lt;p&gt;This absence is itself the Singleton Paradox operating at industry scale. Nobody ran the controlled experiment. Nobody built both versions of the same system and compared them over ten years. So the precise cost differential between approaches is genuinely unknown in the scientific sense — even though the directional evidence is consistent and substantial.&lt;/p&gt;

&lt;p&gt;What does exist:&lt;/p&gt;

&lt;p&gt;CISQ estimated in 2022 that poor software quality costs US organisations approximately $2.41 trillion annually, with a significant portion attributable to accumulated technical debt. The direction of travel is clear even if the precise attribution to architectural choices is not.&lt;/p&gt;

&lt;p&gt;The Standish Group CHAOS Report has tracked project success rates for decades. Despite continuous evolution of methodology — agile, DevOps, cloud-native — the underlying success rates have not dramatically improved. This implies the problem is structural rather than methodological. Better processes applied to the wrong architecture produce better-managed failure, not success.&lt;/p&gt;

&lt;p&gt;The DORA research — Google's annual State of DevOps reports, now covering over 39,000 professionals — shows a persistently bimodal distribution. The 2024 report found that elite performing teams have change failure rates around 5% and recover from incidents in under an hour. Low performing teams have significantly higher failure rates and recovery times measured in days or weeks. Only 19% of organisations reached elite performance. The low performance cluster, meanwhile, grew from 17% to 25% of respondents between 2023 and 2024. The distribution is not a bell curve. It is two distinct populations. Architecture and approach appear to be the differentiating variable, not team size, budget, or industry.&lt;/p&gt;

&lt;p&gt;Amazon Prime Video published a case study in 2023 describing a 90% infrastructure cost reduction after consolidating a distributed microservices monitoring service into a single process — a result specific to that service, not a platform-wide architectural overhaul, but instructive precisely because the team at Amazon chose to be candid about it. Segment, a data platform company, published a similar account. These are self-selected — organisations that consolidated and saved money are more likely to publish than those that saw no benefit — but they are directionally consistent with the argument being made here.&lt;/p&gt;

&lt;p&gt;A McKinsey and University of Oxford study of more than 5,400 IT projects — conducted in 2012 and still the most comprehensive published dataset of its kind — found that large IT transformation projects run on average 45% over budget, 7% over time, and deliver 56% less value than predicted. That is first delivery. The trajectory over the subsequent decade is harder to find in rigorous published form — which is itself telling.&lt;/p&gt;




&lt;h2&gt;
  
  
  Scoring the Average Project
&lt;/h2&gt;

&lt;p&gt;With that context, here is an honest assessment of how the average project scores on each dimension of the Ten-Year Cost Test. These are not precise figures — the data does not support precision — but they represent the consistent direction of the evidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintenance cost rises significantly on the average project.&lt;/strong&gt; Industry estimates consistently place maintenance at 60–80% of total software lifecycle cost, and that proportion grows over time rather than shrinking. On framework-first systems, the annual upgrade cycle alone — broken dependencies, reworked configuration, revalidated integrations — consumes engineering capacity that produces zero business value. In the worst cases, maintenance costs grow 800% or more over a decade, eventually triggering a rewrite. In the best cases — domain-first systems with low accidental complexity — maintenance costs stay flat or fall as the model matures.&lt;/p&gt;
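&lt;p&gt;The arithmetic behind that worst case is ordinary compounding. As a purely illustrative sketch (the growth rate and starting figure below are invented, not drawn from any dataset cited here), a steady annual rise in maintenance cost of around 23% is already enough to produce an eightfold increase over a decade:&lt;/p&gt;

```java
// Illustrative compounding arithmetic only. The 23% rate and the
// 100,000 starting budget are invented for the sketch, not sourced data.
public class CompoundingMaintenance {

    // cost after n years of steady annual growth
    static double costAfterYears(double initialCost, double annualGrowthRate, int years) {
        return initialCost * Math.pow(1.0 + annualGrowthRate, years);
    }

    public static void main(String[] args) {
        double yearOne = 100_000.0; // hypothetical year-one maintenance budget
        double yearTen = costAfterYears(yearOne, 0.23, 10);
        // roughly 23% annual growth compounds to about 8x in ten years
        System.out.printf("year-ten cost: %.0f%n", yearTen);
    }
}
```

&lt;p&gt;No single year looks catastrophic in isolation; the verdict only becomes visible when the decade is viewed as a whole.&lt;/p&gt;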

&lt;p&gt;&lt;strong&gt;Requirement absorption speed slows materially on the average project.&lt;/strong&gt; In a well-modelled system, new requirements should become faster to implement over time — not slower — as the team's understanding deepens and the model reveals where each extension naturally fits. On the average project, the opposite happens. What starts as a two-week feature becomes a two-month project by year five, as each new requirement must navigate accumulated accidental complexity. In distributed systems, a single business rule change triggers API contract renegotiation, versioning decisions, cross-team coordination, and staged deployments. In the worst cases, the system effectively stops absorbing new requirements — every change becomes a major project and the business routes around the software rather than through it. In the best cases, requirement absorption accelerates as the model matures. Flat speed is a warning sign. Slowing speed is a verdict.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team size grows on the average project.&lt;/strong&gt; Industry observation consistently shows teams of two to three times the original size by year ten, maintaining the same functional scope. In the worst cases — full microservices architectures with dedicated platform, SRE, and DevOps functions — the team exists primarily to manage its own infrastructure rather than to serve the business. In the best cases, the team stays small and stable. Three developers. Five hundred domain objects. Fifteen years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stability declines on the average project.&lt;/strong&gt; DORA data shows that low-performing teams — the majority — have change failure rates approaching fifty percent and recovery times measured in weeks. Production increasingly becomes the final validation environment because the integrated system only meets real conditions there. In the worst cases, the organisation develops a chronic incident culture where production instability is treated as a fact of life rather than an architectural signal. In the best cases, stability improves over time as the model matures and edge cases are properly handled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running costs grow faster than business value on the average project.&lt;/strong&gt; The shift to cloud computing made infrastructure costs more visible but did not reduce them. Microservices architectures run fifty to two hundred containers where a monolith needs three to five, with corresponding cost differentials. In the worst cases, infrastructure cost grows an order of magnitude while business capability grows modestly. In the best cases, running costs remain proportional to business growth throughout the system's life.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The rewrite conversation starts on the average project around year seven.&lt;/strong&gt; In the worst cases, the conversation starts at year three or four — the system has already become unmaintainable before it is fully understood. In the best cases, the conversation never happens. The system absorbs new requirements, accommodates new technology at its boundaries, and continues to serve the business indefinitely.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Inverse Is Also True
&lt;/h2&gt;

&lt;p&gt;The data does not merely show that the average project fails the Ten-Year Cost Test. It shows that failure is the expected outcome — so expected that the industry has stopped treating it as failure.&lt;/p&gt;

&lt;p&gt;Rising maintenance costs are attributed to business complexity rather than architectural choices. Growing teams are treated as evidence of business success rather than architectural inefficiency. Slowing requirements are explained by changing priorities rather than accumulated accidental complexity. Declining stability is managed with better monitoring rather than addressed at its source. The rewrite conversation is framed as modernisation rather than recognised as the bill arriving for choices made before the domain was understood.&lt;/p&gt;

&lt;p&gt;This normalisation is the most dangerous consequence of the Singleton Paradox operating at industry scale. When everyone is paying the same inflated price, the inflated price becomes the reference point. The cost of accidental complexity is not visible as a cost. It is visible as &lt;em&gt;the cost of software&lt;/em&gt; — the natural, inevitable, irreducible price of building systems.&lt;/p&gt;

&lt;p&gt;It is not natural. It is not inevitable. It is not irreducible.&lt;/p&gt;

&lt;p&gt;It is the compound interest on a specific set of choices, made consistently, across the industry, before domains are understood. Choices that look like engineering because everyone makes them. Choices that the Singleton Paradox ensures will never be clearly falsified, because the alternative is never built.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Rewrite as the Final Verdict
&lt;/h2&gt;

&lt;p&gt;There is one more signal worth examining. It requires no data, no research, no longitudinal study. It is available in almost every organisation that has been running software for more than a decade.&lt;/p&gt;

&lt;p&gt;The rewrite conversation.&lt;/p&gt;

&lt;p&gt;When someone in your organisation argues that the current system cannot support where the business is going — that it needs to be modernised, migrated, rebuilt on a new platform — that system has already announced its verdict on the Ten-Year Cost Test. The rewrite is not a sign of business ambition. It is the bill arriving.&lt;/p&gt;

&lt;p&gt;The tragedy of the rewrite is not its cost, though the cost is substantial — typically measured in millions and years. The tragedy is what happens after. The new system almost always makes the same choices. The same framework is selected before the domain is understood. The same patterns are applied before the business concepts are named. The same accidental complexity is introduced in the first sprint and compounds through the same lifecycle.&lt;/p&gt;

&lt;p&gt;Because the Singleton Paradox means the organisation never learned from the previous system what actually went wrong. The previous system ran in production. The pipeline was green. The architecture was recognised. The failure was economic and temporal — too slow, too expensive, too fragile to change — not functional. And economic, temporal failure is invisible until it isn't. By the time the rewrite conversation starts, the diagnosis is usually "technical debt" or "legacy architecture" or "we outgrew it." Rarely is the diagnosis accurate: accidental complexity was introduced before the domain was understood, and it compounded for seven years.&lt;/p&gt;

&lt;p&gt;So the rewrite reproduces the conditions that made the rewrite necessary. And in another seven to ten years, the conversation starts again.&lt;/p&gt;

&lt;p&gt;A well-modelled system does not generate the rewrite conversation. Not because it is perfect, or because requirements don't change, or because technology doesn't evolve. But because the essential complexity — the domain model — is separable from the accidental concerns around it. Frameworks can be replaced without touching the domain. Infrastructure can evolve without restructuring the business logic. The system adapts because its core is stable, and its core is stable because it correctly reflects the domain rather than the technology choices of the year it was built.&lt;/p&gt;

&lt;p&gt;The Ten-Year Cost Test can be applied to any system. And the rewrite conversation, or its absence, is the most honest result that test can produce.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Uncomfortable Conclusion
&lt;/h2&gt;

&lt;p&gt;The Singleton Paradox means the direct proof will always be unavailable. You cannot park the well-architected system next to the poorly-architected one and read off the difference, because only one of them was ever built. You cannot compare the fifteen-year maintenance cost of a domain-first system against a framework-first system because the framework-first system is the only one that exists.&lt;/p&gt;

&lt;p&gt;What you can do is apply the Ten-Year Cost Test to what you have. Ask honestly whether maintenance is getting cheaper or more expensive. Whether new requirements are getting faster or slower to absorb. Whether the team is staying small or growing to manage complexity. Whether the system is getting more stable or less. Whether running costs are proportional to business growth or running ahead of it.&lt;/p&gt;

&lt;p&gt;And ask whether the rewrite conversation has started.&lt;/p&gt;

&lt;p&gt;The industry data — imprecise as it is, incomplete as it necessarily must be — points consistently in one direction. The average project fails all five dimensions of the test. Maintenance rises. Requirements slow. Teams grow. Stability declines. Costs outpace business value. The rewrite conversation starts around year seven and reproduces the conditions that made it necessary.&lt;/p&gt;

&lt;p&gt;This has happened so consistently, for so long, that it has been normalised into invisibility. The inflated cost has become the reference point. The compounding expense of accidental complexity has become indistinguishable from the natural cost of building software — because no one in the room has ever seen it otherwise.&lt;/p&gt;

&lt;p&gt;The proof that it can be otherwise exists — in systems maintained by small teams in complex domains, absorbing new requirements cleanly, costing the same to run as they did a decade ago. Those systems exist. They simply never get compared to the alternative, because the alternative was never built.&lt;/p&gt;

&lt;p&gt;The absence of that proof in your organisation is not evidence that it is impossible.&lt;/p&gt;

&lt;p&gt;It is evidence of the Singleton Paradox.&lt;/p&gt;

&lt;p&gt;And the Singleton Paradox is not a law of nature.&lt;/p&gt;

&lt;p&gt;It is a consequence of choices. But not random choices — choices made under a specific kind of pressure that has nothing to do with fit. Spring Boot is chosen because the last project used Spring Boot. CQRS is chosen because the architect gave a conference talk on CQRS. Event-driven architecture is chosen because it is what sophisticated teams are supposed to use. These are not engineering decisions. They are career decisions dressed as engineering decisions. No one got fired for choosing the framework everyone else is using. The choice is defensible precisely because it is popular — and because the Singleton Paradox ensures it will never be tested against the alternative, it remains defensible indefinitely, regardless of what it actually costs.&lt;/p&gt;

&lt;p&gt;This is the root cause the industry rarely names. Not incompetence. Not malice. The systematic selection of solutions on the basis of social safety rather than demonstrated fit — in an environment where demonstrated fit is structurally impossible to measure.&lt;/p&gt;

&lt;p&gt;Choices that can be made differently.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>java</category>
      <category>softwareengineering</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>The Underestimated Power of Encapsulation in Software Engineering</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Mon, 27 Apr 2026 08:46:44 +0000</pubDate>
      <link>https://dev.to/leonpennings/the-underestimated-power-of-encapsulation-in-software-engineering-ff2</link>
      <guid>https://dev.to/leonpennings/the-underestimated-power-of-encapsulation-in-software-engineering-ff2</guid>
      <description>&lt;p&gt;Most Java developers today can explain encapsulation. They will tell you it means making fields private and adding getters and setters. They can recite SOLID principles on demand. They know the vocabulary.&lt;/p&gt;

&lt;p&gt;What most of them have never experienced is what genuine object-oriented design actually feels like in practice — and that is the real problem.&lt;/p&gt;

&lt;p&gt;Object-oriented principles did not disappear because of technology hype or the pace of change. They were never properly learned. A generation of developers was trained on frameworks, not on design. They learned Spring before they understood objects. They learned dependency injection before they understood responsibility. They learned how to make things work before they understood how to structure things well.&lt;/p&gt;

&lt;p&gt;The result is an industry where object-oriented vocabulary is used to justify procedural habits. The Interface Segregation Principle — which is fundamentally about keeping responsibilities separate and coherent — gets applied as a rule for how to slice Spring interfaces. Encapsulation becomes a checkbox: private fields, public getters, done. The deeper meaning, and the profound practical value behind it, is lost entirely.&lt;/p&gt;

&lt;p&gt;What dominates instead is procedural programming in disguise. Fat service classes orchestrate anemic data bags. Logic is scattered across layers. Objects exist to hold data, not to own behavior. The goal is implementation — make it work, ship it — not design. Not structure. Not a system that remains small, simple, robust, and maintainable as it grows.&lt;/p&gt;

&lt;p&gt;This article is about what encapsulation actually means, what it actually does, and why practicing it properly changes both the software you build and the way you think about building it.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Encapsulation Really Means
&lt;/h2&gt;

&lt;p&gt;Encapsulation means that the "how" stays completely inside the object. Clients see only the "what" — the responsibilities the object fulfills. Nothing about implementation, nothing about mechanism, nothing about technology ever surfaces in the public interface.&lt;/p&gt;

&lt;p&gt;Private fields are the minimum. The real discipline is in the public surface of the object. If a method exposes internal data, leaks a storage detail, or forces the caller to know anything about how the object works internally, encapsulation has already failed — regardless of whether the fields are private.&lt;/p&gt;

&lt;p&gt;This extends to the constructor. A constructor that accepts implementation details — a storage mechanism, an external resource, a configurable strategy — is already exposing the "how." The object must own its implementation completely, from the moment it comes into existence.&lt;/p&gt;

&lt;p&gt;A helpful guiding principle is &lt;strong&gt;"Tell, Don't Ask"&lt;/strong&gt;: tell the object what to do. Do not ask it for its data so that you can make decisions with it elsewhere. When you find yourself pulling data out of an object to decide what to do next, that decision almost certainly belongs inside the object itself.&lt;/p&gt;
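&lt;p&gt;A minimal sketch of the principle, using a hypothetical &lt;code&gt;Account&lt;/code&gt; that appears nowhere else in this article: an ask-style caller would read the balance and decide externally whether a withdrawal is allowed; a tell-style caller simply instructs the object, and the rule lives inside it:&lt;/p&gt;

```java
// "Tell, Don't Ask" sketch. The Account class is a hypothetical
// illustration, not part of the Document example later in this article.
public class TellDontAsk {

    static class InsufficientFundsException extends RuntimeException {
        InsufficientFundsException(String message) {
            super(message);
        }
    }

    static class Account {
        private long balanceInCents;

        Account(long openingBalanceInCents) {
            this.balanceInCents = openingBalanceInCents;
        }

        // Tell-style: whether a withdrawal is allowed is decided here,
        // inside the object, never at the call site.
        void withdraw(long amountInCents) {
            if (amountInCents > balanceInCents) {
                throw new InsufficientFundsException("withdrawal exceeds balance");
            }
            balanceInCents -= amountInCents;
        }

        // A "what"-level question the object answers in its own terms,
        // without handing out its internal state.
        boolean canCover(long amountInCents) {
            return balanceInCents >= amountInCents;
        }
    }

    public static void main(String[] args) {
        Account account = new Account(10_000);
        account.withdraw(2_500);                     // tell it what to do
        System.out.println(account.canCover(7_500)); // prints true
    }
}
```

&lt;p&gt;The caller never sees the balance field or the overdraft rule. It tells the account what to do and, at most, asks a question the object answers in its own terms.&lt;/p&gt;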




&lt;h2&gt;
  
  
  The Cognitive Shift: From Technology to Responsibilities
&lt;/h2&gt;

&lt;p&gt;Encapsulation is more than a coding rule. Practiced properly, it becomes a thinking tool that changes how you model systems from the ground up.&lt;/p&gt;

&lt;p&gt;When you commit to hiding the "how," you are forced to think clearly about the "what." Technical questions — how do I store this, which framework handles this, which layer does this belong to — become the wrong questions. They are about implementation, and implementation is not your concern at this level. The right questions are: what is this object responsible for? What should it be able to do? Which other objects would it naturally talk to?&lt;/p&gt;

&lt;p&gt;In a typical Spring application this shift never happens. Developers think in layers — controller, service, repository — and the central question is always "where does this code go?" That question produces a filing system for procedural code. It does not produce a domain model. The objects that emerge from it are empty by design, because the template has already decided that behavior lives in services, not in objects.&lt;/p&gt;

&lt;p&gt;Asking "whose responsibility is this?" produces something entirely different: a coherent network of objects that each own their behavior completely, and that together tell the story of the domain.&lt;/p&gt;




&lt;h2&gt;
  
  
  Example: A Well-Encapsulated Document
&lt;/h2&gt;

&lt;p&gt;Consider a compliance-heavy application where documents — PDFs, scanned forms, certificates — play a central role. They get created, stored, retrieved, and checked for compliance throughout the system.&lt;/p&gt;

&lt;p&gt;The typical Spring-influenced approach treats &lt;code&gt;Document&lt;/code&gt; as a data bag:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Document&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;filePath&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;    &lt;span class="c1"&gt;// leaks storage details&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;mimeType&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;     &lt;span class="c1"&gt;// exposes raw data&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getFilePath&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;filePath&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;setFilePath&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;filePath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="o"&gt;[]&lt;/span&gt; &lt;span class="nf"&gt;getContent&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;// ... more getters and setters&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not an object. It is a struct with ceremony. Every implementation detail is visible and reachable. Logic that belongs to the Document — storage, compliance checking, content retrieval — lives somewhere else, in a service class, spread across layers, written procedurally. Changing the storage mechanism means hunting through the entire codebase because the entire codebase is coupled to the implementation.&lt;/p&gt;

&lt;p&gt;Now consider a Document that actually owns its responsibilities:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Document&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;final&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;mimeType&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;Document&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;mimeType&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;InputStream&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;randomUUID&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;mimeType&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mimeType&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="c1"&gt;// becoming a Document includes taking care of its own storage&lt;/span&gt;
        &lt;span class="c1"&gt;// the how is nobody else's business&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;writeToStream&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OutputStream&lt;/span&gt; &lt;span class="n"&gt;outputStream&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// retrieves and writes content — fully internal&lt;/span&gt;
        &lt;span class="c1"&gt;// the caller gets their bytes, nothing more&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="nf"&gt;isCompliant&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// compliance logic lives here, where it belongs&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getName&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getMimeType&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;mimeType&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Document figures out its own storage as part of coming into existence. It knows how to give its content back via &lt;code&gt;writeToStream&lt;/code&gt;. It knows whether it is compliant. No file path is exposed. No byte array leaks out. No storage mechanism is visible to anything outside.&lt;/p&gt;

&lt;p&gt;Usage across the system stays clean and expressive:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nc"&gt;Document&lt;/span&gt; &lt;span class="n"&gt;invoice&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Document&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"invoice.pdf"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"application/pdf"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;contentStream&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;transaction&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;attach&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;invoice&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;invoice&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;writeToStream&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;responseStream&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;Transaction&lt;/code&gt; knows it can attach a &lt;code&gt;Document&lt;/code&gt;. It does not know — and has no reason to know — how the document stores itself, where it lives, or how it retrieves its content. The Document figures it out. That is the point.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters at Scale
&lt;/h2&gt;

&lt;p&gt;The benefits of this discipline are not always obvious on a small codebase. They become impossible to ignore as the system grows — and they show up most clearly when things need to change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The core logic tells the story of the domain.&lt;/strong&gt; When objects are modeled around responsibilities rather than technical concerns, reading the code means reading the domain. A &lt;code&gt;Transaction&lt;/code&gt; attaches a &lt;code&gt;Document&lt;/code&gt;. A &lt;code&gt;Document&lt;/code&gt; knows whether it is compliant. The objects speak in business terms because they were designed in business terms. There is no framework noise, no layer indirection, no infrastructure vocabulary polluting the domain model. A new developer — or a returning one after six months — can understand what the system does by reading the objects, not by reverse-engineering a tangle of service classes and annotations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Framework upgrades become a bounded problem.&lt;/strong&gt; The dominant template in Java development today is well known: logic goes into services, data gets carried by DTOs, persistence is managed by repositories, and domain objects exist mainly to map to database tables. This pattern is taught as architecture. It is actually a prescription for hollowing out the domain. The objects end up empty. The behavior ends up scattered across service classes that have no natural boundary, no clear responsibility, and no reason to stay coherent as the system grows.&lt;/p&gt;

&lt;p&gt;The consequence is that the framework and the domain become inseparable — not because of annotations on classes, but because the logic itself has been relocated into framework-managed components. Services are Spring beans. Transaction boundaries are framework concerns. The business reasoning is hosted inside the framework rather than sitting independently of it. When the framework changes, the logic has to move with it, because the logic lives inside it.&lt;/p&gt;

&lt;p&gt;When domain objects genuinely own their responsibilities, this changes entirely. The core domain is a network of objects talking to each other in business terms, with no knowledge of the framework hosting them. The framework sits at the edges — handling HTTP, managing sessions, coordinating persistence — but it does not host the logic. Upgrading it, replacing it, or restructuring it becomes a bounded problem. The domain does not change because it was never coupled to the framework in the first place.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The five to seven year rebuild cycle is not inevitable.&lt;/strong&gt; Most software organisations accept the full rewrite as a fact of life. After a few years, the codebase has become so entangled with its own technology choices that evolution is no longer possible — the only way forward is to start again. This cycle is expensive, disruptive, and demoralising. It is also, in large part, a consequence of building systems where business logic is hosted inside framework components rather than inside the domain itself.&lt;/p&gt;

&lt;p&gt;When the core logic is a network of objects talking to each other in terms of responsibilities, it does not age the same way. The business rules, the domain relationships, the behavioural contracts between objects — these survive. Technology changes around them. The core endures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural evolution becomes manageable.&lt;/strong&gt; Moving from a monolith to a distributed architecture, extracting a bounded context, splitting a service — these are genuinely difficult problems when business logic is woven through framework plumbing. When domain objects carry no framework baggage and communicate purely through their responsibilities, the same logic can move between architectural boundaries without fundamental redesign. The objects do not care whether they run in one process or ten. Their responsibilities do not change. Their interfaces do not change. The architecture is a deployment concern, not a domain concern.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The less the core depends on frameworks, the longer it survives.&lt;/strong&gt; This is the underlying premise. Frameworks evolve, get replaced, fall out of favour, and eventually die. Business logic, when it is well modelled, does not have the same lifecycle. Keeping them genuinely separate — not just in theory, but in practice, through strict encapsulation — means the thing that actually matters, the domain model, accumulates value over time rather than accumulating debt.&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Pitfalls
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The data bag.&lt;/strong&gt; A class whose primary purpose is to hold data with getters and setters is not an object in any meaningful sense. It is a data structure. Logic that should belong to it lives elsewhere, and that scattered logic is the source of most maintenance pain in large Java codebases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The leaking constructor.&lt;/strong&gt; A constructor that accepts implementation details — storage strategies, injected resources, configurable mechanisms — is already exposing the "how." This is dependency injection, and despite its near-universal adoption in Java development, it is a direct violation of encapsulation. The object should own its implementation fully. If it needs to talk to an external resource, it does so internally. That is not a variable, not a configuration point, not something the outside world participates in. It is simply what the object does. The widespread embrace of DI as a default pattern reflects an aversion to singletons and a desire for testability — both legitimate concerns — but it solves them at the cost of encapsulation, and that cost is rarely acknowledged.&lt;/p&gt;
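&lt;p&gt;What fully owning the implementation can look like, as a deliberately small, hypothetical sketch (&lt;code&gt;CustomerDirectory&lt;/code&gt; and its internal storage choice are illustrative, and the testability trade-off noted above still needs its own answer):&lt;/p&gt;

```java
import java.util.Properties;

// The object owns its "how": the storage mechanism is created inside
// and never appears in the constructor. Swapping it for a file or a
// remote call would be invisible to every client.
class CustomerDirectory {
    private final Properties emailsByName = new Properties();

    void register(String name, String email) {
        emailsByName.setProperty(name, email);
    }

    String emailOf(String name) {
        String email = emailsByName.getProperty(name);
        if (email == null) throw new IllegalArgumentException("unknown customer: " + name);
        return email;
    }
}

class Demo {
    public static void main(String[] args) {
        // Clients construct the object with domain data only; no
        // storage strategy, no injected resource, no configuration point.
        CustomerDirectory directory = new CustomerDirectory();
        directory.register("Ada", "ada@example.org");
        System.out.println(directory.emailOf("Ada")); // prints: ada@example.org
    }
}
```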

&lt;p&gt;&lt;strong&gt;Procedural code in disguise.&lt;/strong&gt; A service class that takes data out of one object, makes decisions about it, and puts results into another object is a procedural function with a class wrapper. The behavior belongs in the objects themselves. The service class is a symptom of objects that do not own their responsibilities.&lt;/p&gt;
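&lt;p&gt;The contrast in miniature (the &lt;code&gt;Account&lt;/code&gt; example is hypothetical, chosen only to show where the decision lives):&lt;/p&gt;

```java
// Procedural shape: a service pulls data out and decides on the
// object's behalf, e.g. "if (order.getTotal() > account.getBalance()) reject();"
// Object shape: the account owns the decision about its own funds.
class Account {
    private long balanceCents;

    Account(long balanceCents) { this.balanceCents = balanceCents; }

    // "Whose job is this?" Deciding whether funds suffice is the
    // account's job, so the rule lives here, not in a service class.
    boolean pay(long amountCents) {
        if (amountCents > balanceCents) return false;
        balanceCents -= amountCents;
        return true;
    }
}

class Checkout {
    public static void main(String[] args) {
        Account account = new Account(1000);   // 10.00 in cents
        System.out.println(account.pay(750));  // prints: true
        System.out.println(account.pay(750));  // prints: false (only 2.50 left)
    }
}
```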

&lt;p&gt;&lt;strong&gt;SOLID as a technical checklist.&lt;/strong&gt; When principles like Interface Segregation or Single Responsibility are applied to framework configuration and layer boundaries rather than to object design, they produce architectural cargo cult — the appearance of structure without the substance. These principles are about responsibilities and design, not about how to wire up a Spring context.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Note on Pragmatism
&lt;/h2&gt;

&lt;p&gt;No codebase exists in a vacuum. Frameworks, ORMs, and serialization libraries are part of real-world development, and they sometimes need to know things about your domain objects. This is accidental complexity — the overhead introduced by the tools and environment you work in, as opposed to the essential complexity of the domain itself.&lt;/p&gt;

&lt;p&gt;The key distinction is whether the accidental complexity adapts to the essential, or corrupts it.&lt;/p&gt;

&lt;p&gt;JPA annotations on a domain object are a good example of acceptable accidental complexity. They decorate the object — they tell the framework how to map it — but they do not change what the object does, how it reasons, or how it protects its own state. The domain logic is untouched. If you removed JPA tomorrow, the object would still make complete sense. The essential complexity is intact. Accidental complexity that adapts to essential complexity without reshaping it is always acceptable — and recognising that distinction is itself a design skill.&lt;/p&gt;
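&lt;p&gt;A sketch of what that looks like in practice. The &lt;code&gt;Membership&lt;/code&gt; class is hypothetical, and the JPA annotations are shown only in comments so the example compiles without a persistence dependency; the point is that removing them changes nothing about the object:&lt;/p&gt;

```java
// A domain object a framework could decorate without reshaping it.
// (In real code, @Entity and @Id would sit on this class and field;
// they are omitted here so the sketch compiles standalone.)
class Membership {
    private String memberName;   // the identifier field
    private int booksOnLoan;

    // JPA wants a no-arg constructor; keeping it protected lets the
    // framework hydrate the object without inviting clients to misuse it.
    protected Membership() { }

    Membership(String memberName) {
        if (memberName == null || memberName.isBlank())
            throw new IllegalArgumentException("member needs a name");
        this.memberName = memberName;
    }

    // The object's reasoning is untouched by persistence concerns:
    // remove the mapping tomorrow and this still makes complete sense.
    boolean mayBorrowAnother(int limit) {
        return limit > booksOnLoan;
    }

    void recordLoan() { booksOnLoan++; }
}

class CheckMembership {
    public static void main(String[] args) {
        Membership member = new Membership("Ada");
        member.recordLoan();
        System.out.println(member.mayBorrowAnother(2)); // prints: true
        member.recordLoan();
        System.out.println(member.mayBorrowAnother(2)); // prints: false
    }
}
```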

&lt;p&gt;The line is crossed when the framework starts dictating structure. A no-argument constructor that leaves the object in an invalid state. A setter that exists purely because the ORM needs to hydrate a field. A transaction boundary that forces business logic to be organised around framework sessions rather than domain responsibilities. At that point the accidental complexity is no longer adapting to the essential — it is reshaping it. The tool is now designing the domain, and the domain is losing its integrity.&lt;/p&gt;

&lt;p&gt;The test is simple: if the accidental complexity were removed, would the core object still be coherent, valid, and complete on its own terms? If yes, the compromise is acceptable. If no, the framework has gone too far and the design needs to push back.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion: Encapsulation as a Force Multiplier
&lt;/h2&gt;

&lt;p&gt;True encapsulation is strict. The object alone owns and hides everything about how it works. Clients see only responsibilities. The "how" is nobody else's business.&lt;/p&gt;

&lt;p&gt;Practiced properly, it changes more than the code. It changes how you think about systems. You stop modeling data flow and start modeling behavior. You stop thinking in layers and start thinking in responsibilities. You stop asking "where does this code go?" and start asking "whose job is this?" The software becomes a network of objects that each know their job and do it — completely, independently, and without leaking their secrets.&lt;/p&gt;

&lt;p&gt;That network survives in a way that layered, framework-dependent systems do not. It survives framework upgrades because the framework was never inside it. It survives architectural shifts because the objects carry no architectural assumptions. It survives time because it is organised around the domain — around what the software actually is — rather than around the technology that happens to be running it today.&lt;/p&gt;

&lt;p&gt;Most Java developers today have never worked in a codebase built this way. That is not an accusation — it is a consequence of an industry that taught frameworks before it taught design. But it means that for many, genuinely object-oriented development would feel like a different discipline entirely.&lt;/p&gt;

&lt;p&gt;It is. And it is worth learning.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>springboot</category>
      <category>encapsulation</category>
      <category>java</category>
    </item>
    <item>
      <title>Rich domain modelling: a library story</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Sun, 19 Apr 2026 12:23:04 +0000</pubDate>
      <link>https://dev.to/leonpennings/rich-domain-modelling-a-library-story-1ane</link>
      <guid>https://dev.to/leonpennings/rich-domain-modelling-a-library-story-1ane</guid>
      <description>&lt;p&gt;Most software doesn't have a domain model. It has a database schema, a set of service classes that orchestrate calls to it, and a collection of user stories that have been implemented one by one, each leaving a small deposit of logic somewhere convenient. This works, until it doesn't — until a framework needs replacing, a regulation changes, or someone asks a question the system was never quite designed to answer, and the answer turns out to be scattered across fourteen service methods and three database joins.&lt;/p&gt;

&lt;p&gt;This article is about a different approach, illustrated through a deliberately simple example: a library system. The example is old-fashioned on purpose. The familiarity lets you focus on the reasoning, not the subject matter.&lt;/p&gt;

&lt;p&gt;The core argument is this: a rich domain model is not something you design once at the start of a project and then implement. It is something you grow, continuously, as your understanding of the business deepens. Every requirement, every refinement session, every new user story is not just a work order — it is new information about the domain. The question to ask at each step is not "how do we implement this?" but "does this change what we understand the domain to be?"&lt;/p&gt;

&lt;p&gt;If the answer is yes, the model changes. Not in a future story. Not as tech debt. Now. The implementation timeline is not sacred. The correctness of the domain is. The cost of a misaligned domain compounds over time — it gets into every new feature, every workaround, every "we can't easily change that" conversation. A missed sprint to correct the model is almost always cheaper than six months of working around a wrong abstraction.&lt;/p&gt;

&lt;p&gt;The other side of this is: you only model what you understand. If something is unclear, that is not a reason to guess at an abstraction — it is a reason to ask more. Refinement sessions exist precisely for this. The domain expert knows things the model doesn't yet reflect. The job is to close that gap, incrementally, with each new piece of understanding.&lt;/p&gt;

&lt;p&gt;That is what this article shows. Not a perfect model arrived at in one go, but a model that starts where the knowledge starts, and adapts as the knowledge grows.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;User story 1: "We want to lend out books"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first conversation with the domain expert goes predictably. The library wants to lend books. They want to know where each book is — on which shelf, or on loan to whom, from when until when.&lt;/p&gt;

&lt;p&gt;From this, the initial domain objects emerge: &lt;code&gt;Book&lt;/code&gt;, &lt;code&gt;Lender&lt;/code&gt;, and somewhere, the loan dates. And this last point — &lt;em&gt;where&lt;/em&gt; do the loan dates live? — is the first real decision.&lt;/p&gt;

&lt;p&gt;The path of least resistance puts them in &lt;code&gt;Book&lt;/code&gt;. The book knows where it is; if it's on loan, it knows to whom and for how long. It seems natural. But pause here, because this is the decision that will constrain everything that follows.&lt;/p&gt;

&lt;p&gt;Ask a simple domain question: is knowing when it was borrowed, and by whom, part of what a book &lt;em&gt;is&lt;/em&gt;? A book is a title, an author, a physical object. The loan is an event — an agreement between the library and a person, at a point in time, concerning that book. Two different things. Putting loan dates in &lt;code&gt;Book&lt;/code&gt; is the same category of error as storing someone's employment history in their passport: adjacent subjects stitched together because it was convenient.&lt;/p&gt;

&lt;p&gt;There is also a practical problem that makes the conceptual one concrete: a book can be borrowed many times, by different people, at different points in time. A single set of loan fields cannot represent that history without overwriting it. The model isn't just conceptually imprecise — it is structurally incapable of answering basic questions the business will eventually ask.&lt;/p&gt;

&lt;p&gt;The first model, with its warning signs visible:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgktvmbpk06524mzt6ql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgktvmbpk06524mzt6ql.png" alt=" " width="667" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Recognising the problem, a &lt;code&gt;Loan&lt;/code&gt; entity is introduced. It points to a book and a lender, and carries its own data: start date, end date, and a return date for when the item actually comes back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk66rjysr8bgcfixvyq8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk66rjysr8bgcfixvyq8c.png" alt=" " width="686" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Book&lt;/code&gt; is clean. Each entity is responsible for what it actually is.&lt;/p&gt;
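&lt;p&gt;The split can be sketched in a few lines of Java (field and method names here are illustrative, not taken from the diagrams):&lt;/p&gt;

```java
import java.time.LocalDate;

// Book is only what a book is: a title and an author.
class Book {
    final String title;
    final String author;

    Book(String title, String author) {
        this.title = title;
        this.author = author;
    }
}

// The loan event carries its own data and answers its own questions.
class Loan {
    final Book book;
    final String lender;
    final LocalDate start;
    final LocalDate due;
    LocalDate returned;   // stays null while the item is out

    Loan(Book book, String lender, LocalDate start, LocalDate due) {
        this.book = book;
        this.lender = lender;
        this.start = start;
        this.due = due;
    }

    boolean isOverdue(LocalDate today) {
        if (returned != null) return false;   // already back, cannot be overdue
        return today.isAfter(due);
    }
}

class Library {
    public static void main(String[] args) {
        Book book = new Book("Dune", "Frank Herbert");
        Loan loan = new Loan(book, "Ada", LocalDate.of(2026, 4, 1), LocalDate.of(2026, 4, 22));
        System.out.println(loan.isOverdue(LocalDate.of(2026, 5, 1))); // prints: true
    }
}
```

&lt;p&gt;Nothing about a loan leaks into &lt;code&gt;Book&lt;/code&gt;, and a book can accumulate any number of &lt;code&gt;Loan&lt;/code&gt; records without overwriting its history.&lt;/p&gt;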




&lt;p&gt;&lt;strong&gt;Emergent behaviour: what the model now gives you for free&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is something worth making explicit, because it tends to get overlooked.&lt;/p&gt;

&lt;p&gt;When the domain is modelled correctly, it doesn't just solve the problem at hand — it makes available capabilities that nobody wrote a story for.&lt;/p&gt;

&lt;p&gt;With &lt;code&gt;Loan&lt;/code&gt; as a first-class entity, the model now contains the answers to questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How many times has this book been borrowed in the last year?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is it borrowed back-to-back — should we order a second copy?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which items are overdue right now?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which lender has the most active loans?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No one asked for any of this. And more importantly, no one needs to change the model to support it. These questions are answerable as a natural consequence of the right abstraction — zero additional structural cost. This is what correct domain modelling produces: not just a solution to the stated requirement, but a foundation that doesn't resist future questions.&lt;/p&gt;

&lt;p&gt;The opposite — loan dates buried in &lt;code&gt;Book&lt;/code&gt; — means that every one of those questions requires working around an accidental constraint. The data is there, technically, but it is in the wrong place conceptually, and that mismatch has a cost that accumulates with every new question the business wants to ask.&lt;/p&gt;

&lt;p&gt;A correct abstraction doesn't just solve the current problem. It shapes every solution that follows.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;User story 2: "We also want to lend out DVDs"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A new requirement arrives. The library wants to lend DVDs too.&lt;/p&gt;

&lt;p&gt;On most teams, this is treated as a work order. There is now a &lt;code&gt;DVD&lt;/code&gt; entity. Fields are defined — title, director, runtime. The ticket is closed.&lt;/p&gt;

&lt;p&gt;This is precisely the failure mode the introduction described: a user story implemented rather than understood. The arrival of this requirement is not an instruction to add &lt;code&gt;DVD&lt;/code&gt;. It is new information about the domain. And new information about the domain means it is time to re-examine the model.&lt;/p&gt;

&lt;p&gt;The question is not "how do we add DVD?" The question is: &lt;em&gt;was&lt;/em&gt; &lt;code&gt;Book&lt;/code&gt; &lt;em&gt;ever the right abstraction for this domain?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Think about what the lending system actually cares about. It doesn't care that a book has pages or that a DVD has a runtime. From the perspective of the lending domain, both are things that can be borrowed, returned, and tracked. If you add a &lt;code&gt;DVD&lt;/code&gt; entity you are not modelling the lending domain — you are modelling a classification detail that the domain does not act on. And the next story will bring magazines. Then tools. Then a request that breaks the pattern entirely, and by then there are four parallel entity types, duplicated service logic, and a reporting layer full of unions.&lt;/p&gt;

&lt;p&gt;The correct response to this user story is not implementation. It is evaluation. And the evaluation reveals that the concept the domain actually needs is not &lt;code&gt;Book&lt;/code&gt; — it is a lendable item. Something that can be borrowed, regardless of what it is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modelling the domain, not the world&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the point where a common objection appears: isn't &lt;code&gt;LendableItem&lt;/code&gt; with a generic attribute collection just an EAV pattern with a different name? Isn't it losing type safety? Isn't it too abstract?&lt;/p&gt;

&lt;p&gt;These are implementation concerns, not domain concerns. And that distinction matters enormously.&lt;/p&gt;

&lt;p&gt;A book and a DVD are genuinely different things in the real world. They have different physical forms, different metadata, different cultural contexts. But the domain model is not a model of the real world. It is a model of how the business operates. And in the lending domain, a book and a DVD are the same thing: an item that can be lent to a person for a period of time, tracked, and returned. The domain acts on that concept. It does not act on the distinction between pages and runtime.&lt;/p&gt;

&lt;p&gt;The risk in domain modelling is not abstraction. The risk is the &lt;em&gt;wrong&lt;/em&gt; abstraction — and the most common wrong abstraction is modelling the real world instead of the business domain. When that happens, the model fills up with concepts that feel correct because they match physical reality, but that the business never actually operates on as distinct things. &lt;code&gt;Book&lt;/code&gt; and &lt;code&gt;DVD&lt;/code&gt; as separate domain entities is that mistake. The library doesn't lend books and DVDs differently. It lends items.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;LendableItem&lt;/code&gt; is not generic for the sake of flexibility. It is precise — precisely what the domain requires.&lt;/p&gt;

&lt;p&gt;This is not overengineering. Starting with &lt;code&gt;Book&lt;/code&gt; was correct — at the time, only books existed, and naming the concept after the only known instance of it is entirely reasonable. Good domain modelling does not demand abstraction before there is evidence for it. But when the evidence arrives, the model must respond.&lt;/p&gt;

&lt;p&gt;The revised model:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1sfom6h8ovffenf71uv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1sfom6h8ovffenf71uv.png" alt=" " width="700" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Book&lt;/code&gt; becomes &lt;code&gt;LendableItem&lt;/code&gt;. The type — book, DVD, magazine, whatever comes next — is an &lt;code&gt;ItemType&lt;/code&gt; instance defined in data, not in code. Each &lt;code&gt;ItemType&lt;/code&gt; carries the attribute definitions relevant to it: a book has ISBN and author; a DVD has runtime and director. The &lt;code&gt;LendableItem&lt;/code&gt; holds the attribute values as a key-value collection shaped by the &lt;code&gt;ItemType&lt;/code&gt; — not arbitrary data, but controlled variation. A new lendable type can be defined through the UI, without a software release. The domain absorbs the variation without being touched.&lt;/p&gt;

&lt;p&gt;Notice what also appears here: &lt;code&gt;LendPolicy&lt;/code&gt;. Lending rules — how long something can be borrowed, whether it can be renewed — are not properties of items. They are policies, and policies have their own identity. A 7-day loan period might apply to all DVDs, a 21-day period to most books, and a specific rare edition might carry its own exception — all configurable, without code changes. By modelling &lt;code&gt;LendPolicy&lt;/code&gt; as an entity that &lt;em&gt;points to&lt;/em&gt; items rather than belonging to them, the granularity becomes a business decision. The domain reflects it correctly.&lt;/p&gt;
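&lt;p&gt;One possible shape for this, as a sketch; the class names follow the diagram, but the attribute storage and the policy wiring are assumptions about the implementation, not prescribed by it:&lt;/p&gt;

```java
import java.util.Arrays;
import java.util.Properties;

// ItemType is data, not code: a new lendable kind needs no release.
class ItemType {
    final String name;
    final String[] attributeNames;

    ItemType(String name, String... attributeNames) {
        this.name = name;
        this.attributeNames = attributeNames;
    }
}

// A policy has its own identity and points at what it governs.
class LendPolicy {
    final int loanDays;

    LendPolicy(int loanDays) { this.loanDays = loanDays; }
}

class LendableItem {
    final ItemType type;
    final Properties attributes = new Properties();
    LendPolicy policy;

    LendableItem(ItemType type) { this.type = type; }

    // Controlled variation: only attributes the type defines are accepted.
    void setAttribute(String name, String value) {
        if (!Arrays.asList(type.attributeNames).contains(name))
            throw new IllegalArgumentException(type.name + " has no attribute " + name);
        attributes.setProperty(name, value);
    }
}

class Catalogue {
    public static void main(String[] args) {
        ItemType dvd = new ItemType("DVD", "director", "runtime");
        LendableItem item = new LendableItem(dvd);
        item.policy = new LendPolicy(7);   // a 7-day policy for DVDs
        item.setAttribute("director", "Denis Villeneuve");
        System.out.println(item.policy.loanDays); // prints: 7
    }
}
```

&lt;p&gt;Adding magazines or tools means creating another &lt;code&gt;ItemType&lt;/code&gt; instance in data; &lt;code&gt;LendableItem&lt;/code&gt; itself never changes.&lt;/p&gt;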




&lt;p&gt;&lt;strong&gt;What this example is really about&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Three things are worth naming directly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The domain is not a one-off.&lt;/strong&gt; The biggest misconception about domain modelling is that it happens at the start of a project, produces a diagram, and is then finished. In practice, a domain model is only as good as the understanding that produced it. Understanding grows — through refinement sessions, through new requirements, through conversations with domain experts who reveal nuance the model doesn't yet capture. Every one of those moments is an opportunity to improve the model. Treating them as implementation tickets instead is how misalignment accumulates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correctness compounds.&lt;/strong&gt; A wrong abstraction doesn't just cause one problem. It causes every problem that grows on top of it. When the framework needs replacing five years from now, the core business logic should be the stable thing — the part that doesn't change because it correctly reflects the domain. If the logic has leaked into service methods, database queries, and framework-specific glue, the framework and the logic are inseparable. A rich domain model is what makes the core of the application resilient to the things around it changing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User stories are input, not instructions.&lt;/strong&gt; "We want to lend DVDs" is not a specification. It is a piece of information about the business. The correct response is to understand what it reveals about the domain, and let that understanding reshape the model if necessary. On teams where user stories are treated purely as work orders, &lt;code&gt;DVD&lt;/code&gt; gets added, the ticket is closed, and the model silently drifts further from reality. On teams where user stories are treated as domain conversations, the arrival of &lt;code&gt;DVD&lt;/code&gt; prompts the question that leads to &lt;code&gt;LendableItem&lt;/code&gt; — and the system becomes more correct, not just more complete.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;A note on SOLID&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This article has used two principles from SOLID without naming them. It is worth naming them now — not to add jargon, but because these principles are widely known and almost as widely misunderstood, and the library example shows exactly what they were designed for.&lt;/p&gt;

&lt;p&gt;SOLID is a tool for domain modelling. Applied to technical layers — controllers, services, repositories, packages — it is the wrong tool for the job. Not because it produces nothing useful there, but because it is answering questions that belong to a different space. Asking whether your &lt;code&gt;BookService&lt;/code&gt; violates the Single Responsibility Principle is like applying flight-route optimisation to a city street map. You will get answers. They will be coherent. They will just not be answers to the right question. The right question is always about the domain.&lt;/p&gt;

&lt;p&gt;When SOLID is applied only at the technical layer, the domain model is typically left untouched — a set of anemic objects with no real behaviour — while all the interesting decisions accumulate in a service class that nobody can coherently describe the responsibility of. The system is, in a narrow sense, well-structured. It models nothing.&lt;/p&gt;

&lt;p&gt;The uncomfortable truth this produces is worth stating plainly: you can apply SOLID perfectly and still end up with a system that does not model the business. The principles do not tell you what to model. They evaluate whether what you have modelled makes sense. If what you have modelled is technical structure rather than domain concepts, SOLID will faithfully validate that structure — and the domain will remain a mess.&lt;/p&gt;

&lt;p&gt;Applied to the domain, the principles are genuinely illuminating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Responsibility Principle&lt;/strong&gt; is what drove the &lt;code&gt;Book&lt;/code&gt; → &lt;code&gt;Book&lt;/code&gt; + &lt;code&gt;Loan&lt;/code&gt; split. The question it asks is not "does this class do too many technical things?" It asks: does this concept carry responsibility that belongs to a different concept? A book is not responsible for knowing when it was borrowed. That is the responsibility of the loan event. One domain question, one correct answer, one new entity. Applied at the domain level, SRP produces clean, stable concepts with clear boundaries. Applied only at the technical level, it tends to produce &lt;code&gt;BookHelper&lt;/code&gt;, &lt;code&gt;BookManager&lt;/code&gt;, and &lt;code&gt;BookUtil&lt;/code&gt; — classes that exist to split code rather than to model anything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open/Closed Principle&lt;/strong&gt; is what drove the &lt;code&gt;Book&lt;/code&gt; + &lt;code&gt;DVD&lt;/code&gt; → &lt;code&gt;LendableItem&lt;/code&gt; + &lt;code&gt;ItemType&lt;/code&gt; move. The principle says a model should be open for extension but closed for modification. In domain terms: when new kinds of things appear, the model should absorb them without requiring existing concepts to change. A &lt;code&gt;DVD&lt;/code&gt; entity requires a code change and a deployment every time a new item type is introduced. &lt;code&gt;LendableItem&lt;/code&gt; with &lt;code&gt;ItemType&lt;/code&gt; instances defined in data requires neither — the model is extended through configuration. The domain is open for new item types and closed against needing to touch &lt;code&gt;LendableItem&lt;/code&gt; to accommodate them.&lt;/p&gt;

&lt;p&gt;The remaining principles have domain equivalents too. But the point here is not to survey all five — it is to show that SOLID belongs in the domain conversation. Bringing it into the technical conversation is not a sequencing problem — it is a category problem. The principles ask domain questions. Technical layers are not a domain. The questions do not apply. It is like applying makeup to a horse: it can be done, but the result serves no purpose.&lt;/p&gt;




&lt;p&gt;The model should always reflect the best current understanding of the domain. When that understanding changes, the model changes with it. Not later. Now.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>ddd</category>
      <category>java</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Software Engineering Is Living The Golden Hammer Antipattern — And Everyone Loves It</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 14 Apr 2026 05:25:25 +0000</pubDate>
      <link>https://dev.to/leonpennings/software-engineering-is-living-the-golden-hammer-antipattern-and-everyone-loves-it-3e83</link>
      <guid>https://dev.to/leonpennings/software-engineering-is-living-the-golden-hammer-antipattern-and-everyone-loves-it-3e83</guid>
      <description>&lt;p&gt;Why the industry simultaneously agrees with Brooks and ignores him — and why it's structured to stay that way&lt;/p&gt;

&lt;h2&gt;
  
  
  The Paradox Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Ask any experienced software engineer about essential versus accidental complexity. They will nod. Ask them about Brooks' central argument in No Silver Bullet — that the hard part of software is the conceptual work of understanding the problem, not the mechanical work of expressing it in code. They will nod again.&lt;/p&gt;

&lt;p&gt;Then watch what happens when the next project starts.&lt;/p&gt;

&lt;p&gt;Someone opens Spring Initializr. Someone proposes microservices. Someone puts Kubernetes in the architecture diagram before a single domain concept has been named. The technology stack is decided in the first week. The business domain is still being understood in month six.&lt;/p&gt;

&lt;p&gt;Nobody in that room forgot Brooks. The choice was never really about Brooks.&lt;/p&gt;

&lt;p&gt;That is the paradox this essay is about. Not that the industry is ignorant of the problem — but that it is structured to reproduce it perfectly, indefinitely, at enormous and invisible cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Brooks Actually Said
&lt;/h2&gt;

&lt;p&gt;In 1975, Frederick Brooks published The Mythical Man-Month, based on his experience managing the development of OS/360 at IBM. The project was late, over budget, and initially didn't work particularly well. Brooks spent the rest of his career trying to understand why.&lt;/p&gt;

&lt;p&gt;The insight most people remember is the coordination problem. Adding people to a late software project makes it later. Nine women cannot make a baby in one month. Communication overhead scales quadratically. You cannot parallelise work that is fundamentally interdependent. Everyone knows this. It shows up in every post-mortem, every engineering blog, every conference talk about why the rewrite took three years instead of six months.&lt;/p&gt;

&lt;p&gt;What people remember less clearly is the deeper argument Brooks made in his 1986 essay No Silver Bullet, later added to the anniversary edition of the book.&lt;/p&gt;

&lt;p&gt;Brooks drew a distinction between two kinds of complexity in software. Essential complexity is inherent to the problem itself — the rules, the relationships, the invariants, the genuine difficulty of the business domain being modelled. Accidental complexity is everything else — the tools, the frameworks, the infrastructure, the deployment machinery, the coordination overhead introduced by the way we choose to build systems.&lt;/p&gt;

&lt;p&gt;His claim was precise and devastating: there is no silver bullet because the hard part of software is essential complexity, and no tool or methodology can compress it. You cannot automate your way out of needing to understand the problem. You cannot framework your way past the conceptual work.&lt;/p&gt;

&lt;p&gt;Then he said something that was either ignored or misunderstood: the industry's persistent belief that the next tool, the next methodology, the next architectural pattern will finally solve the problem of software difficulty is itself the symptom of failing to make this distinction.&lt;/p&gt;

&lt;p&gt;That was 1986. Since then the industry has produced structured programming, object orientation, UML, SOA, agile, microservices, event-driven architecture, CQRS, cloud-native development, and AI-assisted coding.&lt;/p&gt;

&lt;p&gt;Each one arrived as a silver bullet. Each one was greeted with the same enthusiasm. Each one was applied before the domain was understood.&lt;/p&gt;

&lt;p&gt;Brooks' own framework predicted every step of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Golden Hammer The Industry Forgot To Question
&lt;/h2&gt;

&lt;p&gt;There is a well-known antipattern in software called the golden hammer. It describes the tendency to over-apply a familiar tool regardless of whether it fits the problem. It is named after Maslow's observation that if all you have is a hammer, everything looks like a nail.&lt;/p&gt;

&lt;p&gt;The modern software industry does not have one golden hammer. It has a coordinated set of them — and they are chosen as a bundle, before the problem is understood, in almost every project that starts today.&lt;/p&gt;

&lt;p&gt;The bundle looks like this: a popular framework for the application layer, microservices for decomposition, an event-driven or REST-based communication model, a cloud platform for deployment, and Kubernetes for orchestration. The specific tools vary by organisation and year. The pattern does not vary.&lt;/p&gt;

&lt;p&gt;What makes this particular golden hammer different from the textbook antipattern is a crucial property: it is unfalsifiable.&lt;/p&gt;

&lt;p&gt;A normal golden hammer eventually gets retired. Something demonstrates it was the wrong tool — the screw still won't turn, the nail bent, the joint failed. There is a moment of visible failure that creates pressure to reconsider.&lt;/p&gt;

&lt;p&gt;The modern software stack has no such moment. If the system runs in production, the stack gets the credit. If the system struggles — if changes are expensive, if the team grows endlessly, if understanding the codebase requires months of archaeology — the blame goes to requirements changing, team turnover, business complexity, or simply the nature of software. The stack is never in the dock.&lt;/p&gt;

&lt;p&gt;This is not an accident. It is a structural property of how software success is defined. A system running in production passes the only test anyone applies. There is no test for whether it could have been built at a fraction of the cost with a fraction of the complexity. Nobody built that version. Nobody ever does.&lt;/p&gt;

&lt;p&gt;The golden hammer persists not because people are lazy or ignorant — but because the thing that should replace it is invisible to every organisational instrument the industry has built.&lt;/p&gt;

&lt;h2&gt;
  
  
  Agile Was the Correction. Then It Was Captured.
&lt;/h2&gt;

&lt;p&gt;In 2001, the Agile Manifesto proposed something that was, underneath its somewhat vague language, a precise epistemological claim.&lt;/p&gt;

&lt;p&gt;Software development is fundamentally a process of learning. You do not fully understand the domain at the start. You build a version of your understanding, expose it to reality — specifically to the domain experts who live in that business every day — and you refine it. Each iteration is not primarily a delivery mechanism. It is a question: did we understand the domain correctly?&lt;/p&gt;

&lt;p&gt;The working software at the end of a sprint is not the point. It is the test. The test of whether your conceptual model of the business — your understanding of what the domain actually is, what rules govern it, what concepts belong together — corresponds to reality. Domain experts are not approving features. They are stress-testing your model.&lt;/p&gt;

&lt;p&gt;That is what Agile was. A mechanism for continuously refining essential understanding through structured contact with reality.&lt;/p&gt;

&lt;p&gt;That is not what Agile became.&lt;/p&gt;

&lt;p&gt;What Agile became was a process for efficiently transcribing user stories into framework components. Two-week sprints. Velocity points. Definition of done. Backlog refinement. The ceremonies survived. The epistemology was quietly discarded.&lt;/p&gt;

&lt;p&gt;And then CI/CD completed the transformation.&lt;/p&gt;

&lt;p&gt;Continuous integration and continuous deployment are genuinely valuable practices for managing the operational complexity of releasing software. But they introduced a subtle and devastating redefinition of what "production ready" means.&lt;/p&gt;

&lt;p&gt;Before, production readiness was at least nominally connected to domain correctness — does this system correctly implement the business? After, production readiness means the pipeline is green. Tests pass. Build succeeds. Deploy proceeds.&lt;/p&gt;

&lt;p&gt;These are not the same question. A passing test suite validates that the code does what the code was written to do. It says nothing about whether the code was written to do the right thing. Whether the domain concepts are correctly identified. Whether the invariants are correctly enforced. Whether the model reflects the business reality or merely the user story that described one interaction with it.&lt;/p&gt;

&lt;p&gt;You can have one hundred percent test coverage and zero domain correctness. The pipeline will be green. The system will go to production. The retrospective will be positive.&lt;/p&gt;
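&lt;p&gt;The gap is easy to make concrete. In this contrived sketch (the order, the discount rule, and every name are invented for illustration), the test asserts what the code does rather than what the business rule demands, so the suite stays green while the domain is wrong:&lt;/p&gt;

```java
// Contrived sketch: a test written against the code's behavior rather than
// the business rule. The assumed rule is "orders over 100.00 get a 10%
// discount". The code discounts every order.
class Order {
    final long totalCents;

    Order(long totalCents) { this.totalCents = totalCents; }

    // Bug: the "over 100.00" condition from the business rule is missing.
    long discountedCents() { return totalCents * 90 / 100; }
}

public class GreenPipeline {
    public static void main(String[] args) {
        Order small = new Order(5_000);   // 50.00, should NOT be discounted
        Order large = new Order(20_000);  // 200.00, should be discounted

        // Both checks assert what the code does, so both pass.
        // Coverage is total; the first expectation is simply wrong.
        check(small.discountedCents() == 4_500);
        check(large.discountedCents() == 18_000);
        System.out.println("pipeline green");
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("red pipeline");
    }
}
```

&lt;p&gt;Both assertions pass, coverage is complete, and the 50.00 order is still discounted when the business says it should not be.&lt;/p&gt;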

&lt;p&gt;The feedback loop Agile promised — between domain experts and the conceptual model being built — was replaced by a feedback loop between the code and its own tests. We optimised the loop while removing the thing it was supposed to validate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Sociological Lock-In
&lt;/h2&gt;

&lt;p&gt;So far this looks like an intellectual failure. Engineers and organisations that know better making choices they shouldn't. A problem of discipline or culture that better education might eventually correct.&lt;/p&gt;

&lt;p&gt;It is not. It is structural. And the structure actively selects against correction.&lt;/p&gt;

&lt;p&gt;Consider how a software project begins. Before a single domain conversation happens, several things must occur. The project must be staffed. That requires a job posting. A job posting requires a technology stack. The project must be estimated. Estimation requires a known architecture. The kickoff deck must be prepared. The kickoff deck needs something in the architecture diagram.&lt;/p&gt;

&lt;p&gt;All of these organisational necessities demand a technology decision at the precise moment when the only intellectually honest answer is: we don't know yet. We haven't understood the domain.&lt;/p&gt;

&lt;p&gt;That answer is organisationally impossible to give. So the stack gets chosen. Not out of ignorance. Not out of laziness. Out of genuine organisational necessity. The machinery of project initiation requires it.&lt;/p&gt;

&lt;p&gt;And once the stack is chosen, it shapes everything that follows. The hiring criteria. The team composition. The onboarding process. The architecture decisions. The decomposition strategy. The system that emerges is not primarily a model of the business domain. It is primarily an expression of the technology choices made before the domain was understood.&lt;/p&gt;

&lt;p&gt;This is not the worst part.&lt;/p&gt;

&lt;p&gt;The worst part is what happens at the hiring stage.&lt;/p&gt;

&lt;p&gt;Conceptual thinking — the ability to reason about what a business concept actually is, what it should own, what it should never be responsible for, where the real boundaries lie — is extremely difficult to assess in an interview. It requires time, domain context, and a level of conversation that most hiring processes cannot accommodate. It does not show up cleanly on a CV.&lt;/p&gt;

&lt;p&gt;Tool fluency shows up immediately. Spring Boot, Kubernetes, Kafka, event-driven architecture — these are expressible, searchable, assessable. You can screen for them in thirty seconds. You can test them in a one-hour technical interview. You can verify them with a take-home assignment.&lt;/p&gt;

&lt;p&gt;So organisations hire for tool fluency. Not because they don't value conceptual thinking. Because tool fluency is what their hiring process can see.&lt;/p&gt;

&lt;p&gt;The consequence is a team that reaches for the familiar tools. The team ships systems using those tools. Those systems run in production. The hiring criteria get validated. The loop closes.&lt;/p&gt;

&lt;p&gt;Engineers who push back on premature technology decisions get filtered out at the CV screen, outvoted in the kickoff meeting, or labelled as impractical idealists who don't understand how real projects work. The selection pressure is quiet, consistent, and almost entirely invisible.&lt;/p&gt;

&lt;p&gt;When everyone hired thinks the same way, the golden hammer stops looking like a hammer. It looks like engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost Nobody Can See
&lt;/h2&gt;

&lt;p&gt;Here is the claim that cannot be proven and cannot be dismissed.&lt;/p&gt;

&lt;p&gt;A system built with a full modern distributed stack — framework, microservices, cloud infrastructure, orchestration — could in many cases have been built far more simply, maintained by a fraction of the team, and been more correct, more stable, and more responsive to business change.&lt;/p&gt;

&lt;p&gt;That statement cannot be verified. Because the simpler version was never built. Nobody built it. The team that chose the distributed architecture never built the alternative to compare against. The organisation that approved the budget never saw a competing proposal. The engineers who maintained the system never worked on a well-modelled equivalent.&lt;/p&gt;

&lt;p&gt;This is not a gap in the data. It is the mechanism of the problem.&lt;/p&gt;

&lt;p&gt;Brooks identified it precisely: most systems are built only once. There is no second system built with different assumptions, run for five years, and compared on total cost of ownership, ease of change, and conceptual correctness. The counterfactual does not exist. Therefore the cost of the wrong choice is permanently invisible.&lt;/p&gt;

&lt;p&gt;And here is what makes it truly unfalsifiable: the entire industry is paying the same inflated price. There is no reference point. When every team uses the same stack, incurs the same coordination overhead, grows to the same size, and struggles with the same maintenance costs — those costs stop being visible as costs. They become the definition of what software costs. Normal and wasteful become indistinguishable.&lt;/p&gt;

&lt;p&gt;But the difference is not just in cost. It is in what the work actually consists of every single day.&lt;/p&gt;

&lt;p&gt;In a team organised around accidental complexity, the daily work is about the technology. Configuring services. Connecting components. Managing framework upgrades. Fixing pipeline failures. Debugging integration issues. Updating dependencies. Understanding the codebase means knowing which service owns which endpoint and how the data flows between them. The business domain is somewhere in there, translated into controllers and DTOs and event schemas, but it is not what the day is about.&lt;/p&gt;

&lt;p&gt;In a team organised around essential complexity, the daily work is about the domain. Which concept owns this responsibility. What this rule actually means. What the domain expert said yesterday that changed how they understand the model. The implementation follows from that understanding — and because the model is clear, the implementation is the smaller part of the day, not the larger.&lt;/p&gt;

&lt;p&gt;The difference is visible — immediately and without any instrumentation — in the daily standup.&lt;/p&gt;

&lt;p&gt;In one team, the language is technical. Spring, Kafka, the pipeline, the service, the endpoint, the migration. Progress is reported in terms of tickets and story completion. The word "business" appears occasionally, usually in the phrase "business requirement."&lt;/p&gt;

&lt;p&gt;In the other team, the language is conceptual. The Order, the Invoice, the Payment, what a Shipment is responsible for, whether a Client and a User are really the same thing. Technology appears occasionally, usually briefly, because the implementation of a well-understood concept is rarely the hard part.&lt;/p&gt;

&lt;p&gt;You do not need metrics or cost analyses to know which team is working on the right problems. You need one standup.&lt;/p&gt;

&lt;p&gt;If every item on the standup is about accidental complexity — go back. Ask what the essential complexity actually demands. Then, and only then, choose the technology that serves it.&lt;/p&gt;

&lt;p&gt;If every garage in the world were built to the standard of a luxury hotel, nobody would know a garage could cost less. The price would simply be what it is. The inflated standard would be the only standard anyone had ever seen.&lt;/p&gt;

&lt;p&gt;That is where the software industry is today. Paying Burj Al Arab prices for a garage that needed to store a jar of paint. And maintaining a universal, genuine, unforced consensus that this is simply what garages cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Rules That Cost Nothing
&lt;/h2&gt;

&lt;p&gt;Most prescriptions for this problem are expensive. Hire differently. Retrain your engineers. Adopt a new methodology. Bring in consultants. Run workshops.&lt;/p&gt;

&lt;p&gt;These are not wrong. But they require budget, time, and organisational will that most teams do not have in the moment a project starts.&lt;/p&gt;

&lt;p&gt;There are two rules that cost nothing, require no external help, and can be applied starting tomorrow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do not choose technology upfront.
&lt;/h3&gt;

&lt;p&gt;Technology enters the project when the domain demands it, not when the kickoff deck needs an architecture diagram. The first weeks of a project produce domain understanding — what the business actually is, what concepts exist in it, what rules govern them. Technology choices follow from that understanding, added only when essential complexity makes them necessary, and only to the degree that it does.&lt;/p&gt;

&lt;p&gt;This feels impossible in most organisations. The job posting needs a stack. The estimate needs an architecture. The kickoff slide needs something in the boxes.&lt;/p&gt;

&lt;p&gt;Those are real constraints. They are also exactly the organisational machinery that inverts Brooks before the first line of code is written. Recognising that the machinery is the problem is the first step toward not letting it make the decision by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mandate that standups be about business concepts only. Never technology.
&lt;/h3&gt;

&lt;p&gt;This is the litmus test made into a practice. If someone says "I'm working on the Kafka consumer," the immediate question is: what business concept does that serve, and does that business concept actually require it? If the answer is unclear, the technology choice is premature. If the answer is clear, state the business concept first and let the technology be the footnote it should be.&lt;/p&gt;

&lt;p&gt;A standup where every item is about services, frameworks, pipelines, and endpoints is a standup where the team has been captured by accidental complexity. It will feel entirely normal. It will sound like engineering. The terminology will be confident and precise.&lt;/p&gt;

&lt;p&gt;But the business domain — the essential complexity that justifies the system's existence — will be invisible. And a team that cannot talk about the business in its daily standup is a team that is not working on the business. It is working on the technology that was supposed to serve it.&lt;/p&gt;

&lt;p&gt;These two rules do not solve the problem entirely. The sociological pressures remain. The hiring pipelines remain. The organisational machinery remains. But they create two moments — one at the start of a project, one every single day — where the inversion becomes visible. Where someone can point at the standup and say: we have not mentioned a business concept in three days. What are we actually building?&lt;/p&gt;

&lt;p&gt;That question, asked consistently, is more powerful than any methodology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;The most expensive software is the software everyone agrees is fine.&lt;/p&gt;

&lt;p&gt;It runs in production. The pipeline is green. The team is stable. The architecture is recognisable. The job postings write themselves. The onboarding takes three months instead of three days, but that is just how software works. The changes take longer than they should, but the domain is complex. The team keeps growing, but the system keeps growing too. The costs keep rising, but software is expensive.&lt;/p&gt;

&lt;p&gt;None of this is inevitable. All of it is a consequence of a single inversion: accidental complexity chosen before essential complexity is understood. A choice made not out of ignorance, but out of organisational necessity, sociological pressure, and the permanent invisibility of the alternative.&lt;/p&gt;

&lt;p&gt;Brooks saw it in 1986. Named it clearly. Watched the industry quote him extensively and change nothing.&lt;/p&gt;

&lt;p&gt;The golden hammer is not a mistake. It is the product. The template is not a shortcut. It is the destination. The assembly is not the means. It has become the craft.&lt;/p&gt;

&lt;p&gt;Two rules. No technology upfront. Standups about the business only.&lt;/p&gt;

&lt;p&gt;They will feel radical. They are just Brooks, applied.&lt;/p&gt;

&lt;p&gt;Everyone agrees with Brooks.&lt;/p&gt;

&lt;p&gt;Then the next project starts.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>architecture</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Fast Onboarding of Software Engineers: The Two Learning Modes</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Fri, 10 Apr 2026 11:43:42 +0000</pubDate>
      <link>https://dev.to/leonpennings/fast-onboarding-of-software-engineers-the-two-learning-modes-52ge</link>
      <guid>https://dev.to/leonpennings/fast-onboarding-of-software-engineers-the-two-learning-modes-52ge</guid>
      <description>&lt;p&gt;There is a persistent belief in software organizations that standardizing on a single framework — Spring Boot being the popular example — makes developers interchangeable across teams. If every system is built the same way, engineers can move between projects with minimal friction.&lt;/p&gt;

&lt;p&gt;It sounds efficient. It feels scalable. It is also largely wrong — and understanding why reveals something important about how developers actually learn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two Ways to Learn a Codebase
&lt;/h2&gt;

&lt;p&gt;There are fundamentally two modes through which a developer can come to understand a system.&lt;/p&gt;

&lt;p&gt;The first is &lt;strong&gt;comprehension-based learning&lt;/strong&gt;. The developer is walked through the core domain concepts — typically in a whiteboard session — and can then trace those exact concepts in the code. The system is legible. Understanding precedes execution.&lt;/p&gt;

&lt;p&gt;The second is &lt;strong&gt;execution-based learning&lt;/strong&gt;. The developer runs the system, breaks it, watches it, traces calls through layers. Understanding is assembled gradually from observed behavior. This is the default mode for procedural and layered architectures.&lt;/p&gt;

&lt;p&gt;The practical consequence of this difference is not marginal. Comprehension-based onboarding can bring a developer to effective contribution within &lt;strong&gt;hours to days&lt;/strong&gt;. Execution-based onboarding routinely takes &lt;strong&gt;weeks to months&lt;/strong&gt; before a developer can contribute without close supervision.&lt;/p&gt;

&lt;p&gt;That gap is not a matter of individual ability. It is a structural property of the codebase.&lt;/p&gt;

&lt;p&gt;Pair programming, shadowing, and extensive debugging sessions are not learning strategies in this context. They are compensations — workarounds for the absence of anything readable at the conceptual level. Organizations that rely on them have simply normalized the cost of an illegible system.&lt;/p&gt;

&lt;p&gt;Framework standardization does nothing to change this. Recognizing a controller is not the same as understanding why the endpoint exists, what constraints govern it, or what invariants must never be broken. That knowledge lives in the domain — and in most codebases, it lives nowhere at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Comprehension-Based Onboarding Actually Looks Like
&lt;/h2&gt;

&lt;p&gt;In a well-crafted domain model, onboarding follows a different rhythm entirely.&lt;/p&gt;

&lt;p&gt;A developer new to the system joins a whiteboard session — 30 to 60 minutes — where the core domain concepts are walked through. They then open the codebase and can trace those exact concepts in the code. Within an hour, the relationships between concepts — what governs what, what depends on what, what is allowed and what is forbidden — form a coherent picture. By the end of the first day, they can participate meaningfully in design discussions, and in many cases begin implementing new functionality.&lt;/p&gt;

&lt;p&gt;This is not aspirational. It is the direct consequence of a system that makes its intent legible.&lt;/p&gt;

&lt;p&gt;The critical insight is this: new functionality must be grounded in what a system &lt;em&gt;does&lt;/em&gt;, not in how it is written. A developer who understands the domain can reason about where new behavior belongs, what rules it must respect, and how it connects to existing concepts — without having traced a single execution path. That is what makes hours-to-days contribution possible. Without it, the developer has no foundation to build from, and execution-based exploration begins — with all the time cost that entails.&lt;/p&gt;

&lt;h2&gt;
  
  
  First Principles, Not Seniority
&lt;/h2&gt;

&lt;p&gt;This is not about experience level. A junior developer who thinks from first principles — who reasons about what a system &lt;em&gt;is&lt;/em&gt; and what it must never do before asking how it runs — will orient just as quickly as a senior. First-principles thinking is a mode, not a career stage. It is the ability to think and talk in concepts and responsibilities, to ask the right questions before reaching for the debugger.&lt;/p&gt;

&lt;p&gt;Execution-based systems actively disadvantage this kind of thinking. There is nothing to reason from. The only available strategy is empirical — run it, break it, observe. That favors pattern recognition over understanding, and accumulated exposure over insight. It rewards engineers who are good at navigating complexity rather than those who are good at resolving it.&lt;/p&gt;

&lt;p&gt;Over time this has consequences beyond onboarding. The system comes to be understood only by those who have been exposed to it long enough — and institutional knowledge becomes a function of tenure rather than clarity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Most Systems Make This Impossible
&lt;/h2&gt;

&lt;p&gt;The root causes of slow onboarding are almost never the framework. They are structural.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implicit domain knowledge.&lt;/strong&gt; Critical business rules are undocumented and embedded in conditionals, naming conventions, and historical decisions nobody questions anymore. New engineers are forced into archaeology before they can contribute. Every answer is buried somewhere in the execution history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fragmented business logic.&lt;/strong&gt; When behavior is spread across controllers, services, and repositories, there is no single place to understand what the system enforces. Every answer requires assembling fragments from multiple layers — which means execution-based exploration is the only path available, regardless of how familiar the framework feels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow-centric design.&lt;/strong&gt; Systems modeled around flows — requests, events, pipelines — force developers to reconstruct intent from execution paths. The &lt;em&gt;what&lt;/em&gt; is buried inside the &lt;em&gt;how&lt;/em&gt;. Reading the code tells you what happens; it does not tell you why, or what must never happen.&lt;/p&gt;

&lt;p&gt;These are not framework problems. A Spring Boot application can have a rich domain model. It rarely does, because framework-driven thinking optimizes for &lt;em&gt;how we build&lt;/em&gt; rather than &lt;em&gt;what we model&lt;/em&gt; — and that trade-off silently pushes onboarding from days into months.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Domain Model as a Table of Context
&lt;/h2&gt;

&lt;p&gt;A well-structured domain model acts as a compression mechanism for complexity. Core concepts are named clearly. Invariants are enforced in one place. Relationships are explicit.&lt;/p&gt;

&lt;p&gt;This gives the codebase something more useful than a table of contents. It provides a &lt;strong&gt;table of context&lt;/strong&gt;: each concept is not just listed but anchored in meaning and relationship. A new developer does not navigate files — they navigate intent. And navigating intent is something a first-principles thinker can do quickly, regardless of how many years they have been writing code.&lt;/p&gt;

&lt;p&gt;For this to work, the code must speak the language of the business. If stakeholders say &lt;em&gt;DocumentRequest&lt;/em&gt;, the code should not say &lt;em&gt;PayloadDTO&lt;/em&gt;. When the language of the domain is reflected faithfully in the implementation, onboarding becomes a translation exercise rather than a decoding one. Translation is fast. Decoding is slow.&lt;/p&gt;
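&lt;p&gt;A minimal sketch of that translation, reusing the article's hypothetical &lt;em&gt;DocumentRequest&lt;/em&gt;; the fields and the single rule shown here are invented for illustration:&lt;/p&gt;

```java
// Decoding required: the name tells a reader nothing a stakeholder said.
class PayloadDTO {
    String type;    // which document? unclear
    String status;  // governed by which rules? unclear
}

// Translation only: the stakeholder's word, with one rule made explicit.
// (DocumentRequest is the article's example; the rule is invented.)
class DocumentRequest {
    private final String documentType;
    private boolean approved = false;

    DocumentRequest(String documentType) {
        if (documentType == null || documentType.isEmpty()) {
            throw new IllegalArgumentException("a request names its document");
        }
        this.documentType = documentType;
    }

    void approve() { approved = true; }

    boolean isApproved() { return approved; }

    String documentType() { return documentType; }
}

public class UbiquitousLanguage {
    public static void main(String[] args) {
        DocumentRequest request = new DocumentRequest("invoice");
        request.approve();
        System.out.println(request.documentType());
    }
}
```

&lt;p&gt;A stakeholder reading the second class recognizes every word; reading the first, they recognize none.&lt;/p&gt;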

&lt;h2&gt;
  
  
  A Useful Side Effect: Simpler Code
&lt;/h2&gt;

&lt;p&gt;Systems with rich domain models tend to produce surprisingly simple implementations. This is not accidental.&lt;/p&gt;

&lt;p&gt;When complexity is resolved at the conceptual level — when the model clearly captures what is allowed, what is forbidden, and where behavior belongs — it does not accumulate elsewhere. There is less need for elaborate orchestration, framework configuration, or infrastructure glue.&lt;/p&gt;

&lt;p&gt;In contrast, systems that lack a strong domain model push complexity into the gaps: coordination logic spreads across components, edge cases get patched rather than modeled, and understanding the system requires tracing runtime behavior rather than reading domain logic. Infrastructure complexity grows not because it is necessary but because the domain complexity has nowhere else to go. This is precisely what makes execution-based onboarding so expensive — the system keeps revealing new layers of implicit complexity the longer you look.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role of Frameworks, Properly Understood
&lt;/h2&gt;

&lt;p&gt;Frameworks are not without value in onboarding. They reduce setup friction, provide familiar scaffolding, and standardize infrastructure concerns. A developer who knows Spring Boot will navigate a Spring Boot project faster than a complete stranger would.&lt;/p&gt;

&lt;p&gt;But this is surface-layer familiarity. It accelerates the first few hours. It does not touch the weeks that follow.&lt;/p&gt;

&lt;p&gt;Onboarding has two layers:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Surface layer&lt;/strong&gt; — framework, build tools, deployment setup, API conventions. Fast to learn, low in durable value. Framework standardization helps here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deep layer&lt;/strong&gt; — domain concepts, object responsibilities, business rules, architectural boundaries. This is where the weeks-to-months cost lives. Framework standardization does nothing here.&lt;/p&gt;

&lt;p&gt;Most organizations optimize the surface layer because it is visible and measurable. They neglect the deep layer, absorb the onboarding cost as a fact of life, and attribute slow ramp-up to individual developers rather than to the structure of their systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The question of onboarding speed is ultimately a question of legibility.&lt;/p&gt;

&lt;p&gt;A codebase with a well-crafted domain model is legible. A single whiteboard session on the core concepts is enough to orient a developer — because when they open the codebase, those exact concepts are right there, named and structured as explained. The session and the code reinforce each other. From that foundation, a first-principles thinker — junior or senior — can form a complete picture and begin making meaningful contributions within hours to days.&lt;/p&gt;

&lt;p&gt;A codebase without one is an execution environment. You learn it by running it, breaking it, and asking the person who wrote it. That process takes weeks. Often months. And it repeats itself every time a new engineer joins.&lt;/p&gt;

&lt;p&gt;If you want engineers to move between projects effectively, do not standardize the tools. The tools are not the barrier.&lt;/p&gt;

&lt;p&gt;Standardize the clarity of the domain. Make systems understandable rather than developers interchangeable.&lt;/p&gt;

&lt;p&gt;That is the real multiplier.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>programming</category>
      <category>architecture</category>
    </item>
    <item>
      <title>When Distribution Becomes a Substitute for Design — and Fails</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:41:51 +0000</pubDate>
      <link>https://dev.to/leonpennings/when-distribution-becomes-a-substitute-for-design-and-fails-5gec</link>
      <guid>https://dev.to/leonpennings/when-distribution-becomes-a-substitute-for-design-and-fails-5gec</guid>
      <description>&lt;p&gt;A lot of modern software architecture—microservices, event-driven systems, CQRS—is not born from deeply understanding the domain. It is what teams reach for when the existing application has become a mess: nobody really knows what’s happening where anymore, behavior is unpredictable, and making changes feels risky and expensive. Instead of asking “What does this concept actually mean and where does it truly belong?”, they ask “How do we split this?”&lt;/p&gt;

&lt;p&gt;That is where a lot of modern architecture begins.&lt;br&gt;&lt;br&gt;
Not in necessity.&lt;br&gt;&lt;br&gt;
Not in insight.&lt;br&gt;&lt;br&gt;
But in the growing discomfort of trying to manage software that was never modeled well in the first place.&lt;/p&gt;

&lt;p&gt;And because the resulting system still runs in production, the cost of that move often remains invisible for years.&lt;/p&gt;

&lt;p&gt;That is one of the most expensive traps in software.&lt;/p&gt;




&lt;h2&gt;
  
  
  Framework Fluency Is Not Software Design
&lt;/h2&gt;

&lt;p&gt;A lot of developers today are highly fluent in frameworks.&lt;br&gt;&lt;br&gt;
They know how to build controllers, services, repositories, DTOs, entities, integrations, and configuration.&lt;/p&gt;

&lt;p&gt;From the outside, that often looks like competence.&lt;/p&gt;

&lt;p&gt;But that kind of fluency can be deeply misleading.&lt;/p&gt;

&lt;p&gt;Because building software out of familiar framework-shaped parts is not the same thing as designing software well.&lt;/p&gt;

&lt;p&gt;The real questions are different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is the actual business concept here?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What belongs together?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What behavior is intrinsic to the domain?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is a real boundary, and what is just an implementation detail?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What rules should be explicit in the model rather than implied by orchestration?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Real domain modeling is not about applying a catalog of patterns. It is the disciplined, often uncomfortable work of discovering what belongs together, what behavior is intrinsic, and expressing those concepts as clearly and cohesively as possible—whether that lives in modules, functions, or simple objects. The goal is conceptual integrity, not architectural ceremony.&lt;/p&gt;

&lt;p&gt;Without those questions, software tends to take on a very predictable shape: fat service classes, anemic entities, persistence-first design, procedural workflows, business logic smeared across layers.&lt;/p&gt;
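&lt;p&gt;That shape can be shown in a few lines. This is a contrived sketch with invented names: the same rule, first smeared into a service around an anemic entity, then owned by the concept itself:&lt;/p&gt;

```java
// The rule in both halves is the same: only a packed shipment may dispatch.

// Assembled: an anemic record plus a service that owns its rule. Nothing
// stops other code from setting the status directly and skipping the check.
class ShipmentRecord {
    String status = "NEW";
}

class ShipmentService {
    void dispatch(ShipmentRecord record) {
        if (!"PACKED".equals(record.status)) {
            throw new IllegalStateException("not packed");
        }
        record.status = "DISPATCHED";
    }
}

// Designed: the concept owns its rule, so the invalid path does not exist.
class Shipment {
    private String status = "PACKED";

    void dispatch() {
        if (!"PACKED".equals(status)) {
            throw new IllegalStateException("only a packed shipment dispatches");
        }
        status = "DISPATCHED";
    }

    String status() { return status; }
}

public class AssembledVersusDesigned {
    public static void main(String[] args) {
        ShipmentRecord record = new ShipmentRecord();
        record.status = "DISPATCHED"; // legal, and the rule is silently bypassed

        Shipment shipment = new Shipment();
        shipment.dispatch();          // the only way to reach DISPATCHED
        System.out.println(shipment.status());
    }
}
```

&lt;p&gt;In the first half the rule is advisory; in the second it is structural.&lt;/p&gt;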

&lt;p&gt;The code works. The endpoints return data. The database persists state.&lt;/p&gt;

&lt;p&gt;But the system has not really been designed.&lt;br&gt;&lt;br&gt;
It has been assembled.&lt;/p&gt;

&lt;p&gt;And that difference matters far more than most teams realize.&lt;/p&gt;




&lt;h2&gt;
  
  
  Weak Models Create Cognitive Overload
&lt;/h2&gt;

&lt;p&gt;The cost of poor design does not usually show up immediately. At first, the system still feels manageable. A few controllers. A few services. A few repositories. Everything is still “clean.”&lt;/p&gt;

&lt;p&gt;But over time, something starts to happen. Business rules accumulate. Exceptions pile up. New requirements interact with old assumptions. Concepts that looked simple turn out to be related in ways the software never captured.&lt;/p&gt;

&lt;p&gt;And because there is no strong domain model holding those concepts together, the complexity has nowhere coherent to go. So it leaks—into service methods, orchestration flows, integration glue, persistence logic, special-case conditionals, “helper” abstractions, and coordination code.&lt;/p&gt;

&lt;p&gt;At that point, the team starts feeling something very real:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Nobody understands the whole thing anymore.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that is the crucial moment.&lt;/p&gt;

&lt;p&gt;Because once a system becomes cognitively overwhelming, the team has two options:&lt;/p&gt;

&lt;h3&gt;
  
  
  Option A
&lt;/h3&gt;

&lt;p&gt;Reduce the complexity by improving the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option B
&lt;/h3&gt;

&lt;p&gt;Reduce the &lt;em&gt;scope&lt;/em&gt; of the confusion by splitting it apart.&lt;/p&gt;

&lt;p&gt;A lot of teams choose Option B.&lt;/p&gt;




&lt;h2&gt;
  
  
  Distribution Becomes Compensation
&lt;/h2&gt;

&lt;p&gt;This is where architecture often stops being a design choice and starts becoming a coping mechanism.&lt;/p&gt;

&lt;p&gt;When the internal model is weak, teams still need some way to create order. And distribution gives them one.&lt;/p&gt;

&lt;p&gt;So they introduce microservices, event-driven architecture, CQRS, separate read models, ownership boundaries, queues, and asynchronous coordination.&lt;/p&gt;

&lt;p&gt;Distribution, CQRS, and event-driven architecture can have legitimate uses in rare cases of extreme scale or unavoidable organizational boundaries. But in the vast majority of systems, they are not introduced because the domain demands them. They are introduced because the internal model is too weak to provide clarity. What looks like sophisticated architecture is often just confusion hiding behind cleaner service boundaries.&lt;/p&gt;

&lt;p&gt;What they are really doing is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;They are trying to create externally, through distribution, the boundaries they failed to create internally, through design.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that can work. At least for a while.&lt;/p&gt;

&lt;p&gt;A smaller service &lt;em&gt;does&lt;/em&gt; feel easier to understand than a large monolith. A separate read model &lt;em&gt;does&lt;/em&gt; reduce some friction. A queue &lt;em&gt;does&lt;/em&gt; create some local decoupling.&lt;/p&gt;

&lt;p&gt;But none of that means the software has become conceptually better. It often just means the confusion has been sliced into smaller containers.&lt;/p&gt;




&lt;h2&gt;
  
  
  Local Clarity Comes at a Global Cost
&lt;/h2&gt;

&lt;p&gt;That trade is where the real damage happens.&lt;/p&gt;

&lt;p&gt;Because distribution absolutely can create local context. A team can say, “This service owns billing.” And that does help.&lt;/p&gt;

&lt;p&gt;But it is a much weaker form of clarity than a real domain model. A service boundary can tell you &lt;strong&gt;where code lives&lt;/strong&gt;. A good model can tell you what something &lt;em&gt;is&lt;/em&gt;, what it &lt;em&gt;means&lt;/em&gt;, what rules govern it, what its lifecycle is, and what relationships are essential.&lt;/p&gt;

&lt;p&gt;Those are very different levels of understanding.&lt;/p&gt;

&lt;p&gt;And when teams use distribution to manufacture context, they often gain short-term manageability at the cost of long-term agility. Because now the system starts paying the distribution tax: network failure, eventual consistency, contract drift, duplicated concepts, duplicated logic, coordination overhead, deployment complexity, operational burden, and fractured causality.&lt;/p&gt;

&lt;p&gt;And perhaps most importantly: &lt;strong&gt;lost refactorability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When the model is strong and cohesive, changing your mind usually means a local refactor—sometimes even a delightful collapse of concepts. When boundaries have been hardened into services, the same insight triggers contracts, versioning, migration scripts, and cross-team coordination. The cost of learning is no longer paid in thought, but in infrastructure and politics.&lt;/p&gt;

&lt;p&gt;And in software, changing your mind is not a failure. It is the job.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Cost Is Paid When the Business Learns Something New
&lt;/h2&gt;

&lt;p&gt;This is where badly structured software reveals itself. Not when it is first deployed. Not when the first endpoints work. Not when the dashboards are green. But when the business itself becomes better understood.&lt;/p&gt;

&lt;p&gt;Because that is what always happens. Sooner or later, the business learns: these two concepts are actually one thing, this workflow was modeled incorrectly, this rule has important exceptions, this distinction is more important than we thought, or this process should not exist at all.&lt;/p&gt;

&lt;p&gt;That is normal. That is what software is supposed to accommodate.&lt;/p&gt;

&lt;p&gt;A coherent domain model makes that kind of change survivable. A fragmented, distributed, weakly modeled system makes it expensive.&lt;/p&gt;

&lt;p&gt;Note that “coherent domain model” here does not mean the tactical patterns that became associated with DDD—entities, repositories, aggregates, and the rest. Those often added their own accidental complexity. Real modeling is simpler and deeper: it is the ongoing work of refining ubiquitous language and discovering natural conceptual boundaries so that new business insight can be absorbed with minimal violence to the existing code.&lt;/p&gt;

&lt;p&gt;In such a fragmented system, the insight has to travel through APIs, queues, read models, event contracts, deployment boundaries, ownership lines, duplicated rules, and partial consistency guarantees. What should have been a conceptual refactor becomes a cross-system negotiation.&lt;/p&gt;

&lt;p&gt;And that is where the bill arrives. Not because the domain was inherently impossible. But because the architecture froze yesterday’s misunderstandings into today’s structure.&lt;/p&gt;

&lt;p&gt;That is one of the worst things software can do.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This So Often Goes Unnoticed
&lt;/h2&gt;

&lt;p&gt;The most dangerous part is that this kind of architecture often looks successful. The system runs. Users use it. The company makes money. So the architecture gets treated as validated.&lt;/p&gt;

&lt;p&gt;But “it works” is one of the weakest standards in software. A system running in production proves only that it is viable enough to survive. It does &lt;strong&gt;not&lt;/strong&gt; prove that it is cheap to change, conceptually sound, structurally coherent, or good at absorbing new understanding.&lt;/p&gt;

&lt;p&gt;Most teams never get to experience how different software feels when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Concepts have a single, obvious home instead of being smeared across services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rules are explicit and enforceable rather than scattered in orchestration and glue code&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;New business understanding leads to a clean refactor instead of distributed coordination&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The system invites insight instead of resisting change&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without that contrast, the pain of weak modeling gets normalized as “just how complex software is.”&lt;/p&gt;

&lt;p&gt;Often, it is not. Often, it is just the cost of weak design hidden behind architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thought
&lt;/h2&gt;

&lt;p&gt;Much of today’s distributed architecture is not the result of domain insight. It is compensation for the conceptual clarity that was never built into the model. By reaching for separation instead of deeper understanding, teams gain local manageability at the expense of long-term coherence and cheap evolution.&lt;/p&gt;

&lt;p&gt;The problem is that the original lack of clarity doesn’t disappear — it just gets distributed. In the end, the same confusion that made the monolith unmaintainable will make the distributed system fail just as hard, only now it’s far more expensive and painful to fix.&lt;/p&gt;

&lt;p&gt;This is why so much “sophisticated” architecture is, in truth, just sophisticated coping.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>microservices</category>
      <category>cqrs</category>
    </item>
    <item>
      <title>Rich Domain Models: Start with What Is, Not What Happens</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Sat, 04 Apr 2026 10:18:12 +0000</pubDate>
      <link>https://dev.to/leonpennings/rich-domain-models-start-with-what-is-not-what-happens-4817</link>
      <guid>https://dev.to/leonpennings/rich-domain-models-start-with-what-is-not-what-happens-4817</guid>
      <description>&lt;p&gt;A lot of software is more difficult to build and maintain than it needs to be.&lt;/p&gt;

&lt;p&gt;Not because the business itself is inherently complex.&lt;/p&gt;

&lt;p&gt;Not because the requirements keep changing.&lt;/p&gt;

&lt;p&gt;But because the software is usually structured around the wrong things: workflows, events, commands, technical layers, frameworks, or current implementation details.&lt;/p&gt;

&lt;p&gt;When that happens, the business logic becomes scattered, hard to reason about, and expensive to evolve. The fix is not more patterns, more ceremonies, or more events. The fix is proper domain modelling.&lt;/p&gt;

&lt;p&gt;A rich domain model is built by first identifying the core concepts of the business and giving each one clear responsibilities and boundaries. Once that foundation is in place, everything else—events, workflows, persistence, integrations—becomes simpler and more stable.&lt;/p&gt;

&lt;p&gt;This is not a new technique or a branded method. It is basic systems engineering done in the right order.&lt;/p&gt;




&lt;h3&gt;
  
  
  The purpose of domain modelling
&lt;/h3&gt;

&lt;p&gt;Domain modelling is about discovering &lt;em&gt;what exists&lt;/em&gt; in the business, independent of how we happen to implement it today.&lt;/p&gt;

&lt;p&gt;It means answering a small set of fundamental questions for every important concept:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What &lt;em&gt;is&lt;/em&gt; this thing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is it responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What may it know?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What may it decide?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What belongs inside its boundary, and what does not?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These questions come before any talk of events, commands, database tables, API payloads, or user flows. If those questions have not been asked and answered, domain modelling has not actually started. At best, we are only mapping interactions.&lt;/p&gt;




&lt;h3&gt;
  
  
  Start with responsibilities, not representation
&lt;/h3&gt;

&lt;p&gt;The most common mistake is beginning with &lt;em&gt;representation&lt;/em&gt; instead of responsibility.&lt;/p&gt;

&lt;p&gt;Teams start listing fields, DTOs, JSON shapes, database columns, or REST endpoints. Those are not the model; they are merely one possible way to represent the model. When you start there, you almost always end up with passive data structures and procedural logic spread across services, handlers, and utility classes.&lt;/p&gt;

&lt;p&gt;A rich domain model begins the other way around. The first questions are never “What properties does this object have?” or “What does the request body look like?” They are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is this thing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What does it &lt;em&gt;do&lt;/em&gt;?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is it responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What should it &lt;em&gt;never&lt;/em&gt; be responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Structure and representation emerge naturally once responsibilities are clear.&lt;/p&gt;




&lt;h3&gt;
  
  
  A simple way to begin
&lt;/h3&gt;

&lt;p&gt;You do not need extensive workshops, coloured sticky notes, or elaborate frameworks.&lt;/p&gt;

&lt;p&gt;The most effective technique is almost embarrassingly simple: put people in a circle of chairs. Tell one person, “You are the Order. What are you? What do you know? What are you responsible for? What should you never do?” Then add the next concept—Client, Invoice, Payment—and let them talk to each other. Let them negotiate boundaries. When something feels wrong, revise the definitions or pull up a new chair for a missing concept.&lt;/p&gt;

&lt;p&gt;The medium does not matter—cards, people, puppets, or just conversation. What matters is that you can point at a concept and force it to declare its own identity and responsibilities. When two concepts constantly need to know each other’s internals, the boundaries are probably wrong. When no one knows who should decide something, the responsibility has not been assigned yet. When a concept only exists because a UI flow needed it, it may not be a real domain concept at all.&lt;/p&gt;

&lt;p&gt;This is domain discovery. It starts with “What are we &lt;em&gt;about&lt;/em&gt;?” and then “Who does what?”—not in the sense of users or actors, but in the sense of the actual participants in the business reality: Client, Order, Invoice, Payment, Subscription, Shipment, Notification.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why starting with events or workflows feels backwards
&lt;/h3&gt;

&lt;p&gt;Many popular modelling techniques (Event Storming being the most visible) begin with domain events, commands, actors, and processes. They are excellent at mapping &lt;em&gt;what happens&lt;/em&gt; and at surfacing integration points. But they are weak at discovering &lt;em&gt;what is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;They describe motion around the business rather than the business itself. A process map can tell you that a payment failed. It cannot tell you what a Payment &lt;em&gt;is&lt;/em&gt;, what responsibilities it owns, or whether Invoice resolution belongs to the Invoice, the Order, or a separate Payment concept. It cannot distinguish a Client (the legal/commercial entity) from a User (merely a web-access mechanism for that Client).&lt;/p&gt;

&lt;p&gt;Those are modelling questions, and they must come first. Events and workflows are valuable &lt;em&gt;after&lt;/em&gt; the core model exists; they should not be the starting point. Otherwise the domain becomes limited to today’s usage patterns instead of reflecting the stable underlying reality.&lt;/p&gt;




&lt;h3&gt;
  
  
  Aggregates and the danger of procedural models
&lt;/h3&gt;

&lt;p&gt;The concept of “Aggregate” is often presented as a necessary consistency boundary. In practice it frequently becomes a procedural container: a cluster of data that a command mutates and an event is emitted from. When responsibilities have not been properly assigned, those aggregates turn into little more than transaction scripts with a fancy name.&lt;/p&gt;

&lt;p&gt;In a rich model the question is simpler: does this concept have a coherent responsibility? If it does, it owns its invariants and decisions. If it does not, no artificial boundary will save it. Objects can collaborate, but they do not need to be artificially clustered just to satisfy technical consistency rules.&lt;/p&gt;




&lt;h3&gt;
  
  
  Rich domain models make the core simple
&lt;/h3&gt;

&lt;p&gt;A well-defined domain model does not add complexity; it removes accidental complexity.&lt;/p&gt;

&lt;p&gt;Consider a typical payment flow. An Order contains Items. An Invoice points to an Order. An Invoice can be resolved by a Payment. A Payment has a type (Online, BankTransfer, etc.). That type determines how execution actually happens.&lt;/p&gt;

&lt;p&gt;In a responsibility-driven model this is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Invoice knows it needs to be resolved.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Payment knows it must execute according to its type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The type itself (implemented as an enum with a strategy or small implementing classes) encapsulates the variability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding a new payment mechanism tomorrow is a local change inside the Payment concept. No new workshop, no new event storm, no ripple through services. The core model stays stable; only the variable part grows.&lt;/p&gt;
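&lt;p&gt;A minimal Java sketch of that shape, with all names (&lt;code&gt;Invoice&lt;/code&gt;, &lt;code&gt;Payment&lt;/code&gt;, &lt;code&gt;PaymentType&lt;/code&gt;) and behaviors invented for illustration:&lt;/p&gt;

```java
import java.math.BigDecimal;

// The variable part: each payment type encapsulates its own execution.
enum PaymentType {
    ONLINE {
        @Override
        String execute(BigDecimal amount) {
            return "charged " + amount + " via payment provider";
        }
    },
    BANK_TRANSFER {
        @Override
        String execute(BigDecimal amount) {
            return "awaiting bank transfer of " + amount;
        }
    };

    abstract String execute(BigDecimal amount);
}

// The Invoice knows it needs to be resolved; it owns that state.
class Invoice {
    private final BigDecimal amount;
    private boolean resolved;

    Invoice(BigDecimal amount) {
        this.amount = amount;
    }

    BigDecimal amount() {
        return amount;
    }

    boolean isResolved() {
        return resolved;
    }

    void resolve() {
        resolved = true;
    }
}

// The Payment knows it must execute according to its type.
class Payment {
    private final PaymentType type;

    Payment(PaymentType type) {
        this.type = type;
    }

    String execute(Invoice invoice) {
        String receipt = type.execute(invoice.amount());
        invoice.resolve();
        return receipt;
    }
}
```

&lt;p&gt;Introducing another payment mechanism means adding one enum constant with its own &lt;code&gt;execute&lt;/code&gt; body; nothing outside the Payment concept changes.&lt;/p&gt;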

&lt;p&gt;Complexity lives exactly where the variability is—not scattered across workflows, services, or “process managers.”&lt;/p&gt;




&lt;h3&gt;
  
  
  Keep the domain central; push technology to the border
&lt;/h3&gt;

&lt;p&gt;The real architectural decision is not whether a domain object may call a database or invoke an external service. The question is: does this action belong to the responsibility of this concept?&lt;/p&gt;

&lt;p&gt;If the answer is yes, the call can live inside the domain object. Technology is not the organising principle. The business meaning is.&lt;/p&gt;

&lt;p&gt;When you organise around technology layers instead (controllers, services, repositories, adapters), the business becomes invisible. Every change requires archaeological digging. When you organise around the domain, the business stays transparent and technology becomes replaceable.&lt;/p&gt;
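&lt;p&gt;One hypothetical way this can look in Java (the names &lt;code&gt;Shipment&lt;/code&gt; and &lt;code&gt;CarrierGateway&lt;/code&gt; are invented for the sketch): the domain concept owns the action, while the technology hides behind a thin interface at the border.&lt;/p&gt;

```java
// Border interface: the technology behind it (HTTP client, queue, file drop)
// is replaceable without touching the domain concept.
interface CarrierGateway {
    void book(String shipmentId);
}

// Booking belongs to the Shipment's responsibility, so the call lives inside it.
class Shipment {
    private final String id;
    private final CarrierGateway carrier;
    private boolean booked;

    Shipment(String id, CarrierGateway carrier) {
        this.id = id;
        this.carrier = carrier;
    }

    void book() {
        if (booked) {
            throw new IllegalStateException("shipment already booked");
        }
        carrier.book(id); // the action is part of this concept's responsibility
        booked = true;
    }

    boolean isBooked() {
        return booked;
    }
}
```

&lt;p&gt;The organising principle is the business meaning of booking a shipment; which carrier technology sits behind the gateway can change without the domain noticing.&lt;/p&gt;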




&lt;h3&gt;
  
  
  Outcomes — short term and long term
&lt;/h3&gt;

&lt;p&gt;A domain model built this way delivers measurable improvements from the very first delivery and compounds dramatically over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short term:&lt;/strong&gt; Time to first production is usually &lt;em&gt;shorter&lt;/em&gt;, not longer. With a rich domain model you know the destination clearly from the start, so you can take the direct route. It is the difference between driving from The Hague to Utrecht on the A12 motorway versus taking the long detour via Amsterdam and the Afsluitdijk. Both paths eventually get you there, but the workflow-first approach feels like continuously driving “somewhat in the right direction” while you figure things out on the fly. By modelling what the business is, you learn faster, decide faster, write less boilerplate, and avoid the lengthy refactoring cycles that come from discovering the business domain later in the project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long term:&lt;/strong&gt; The difference becomes stark — especially in non-CRUD domains such as complex ETL pipelines, logistics orchestration, risk engines, or any system with real business rules and variability.&lt;/p&gt;

&lt;p&gt;The “framework-first” or “workflow-first” approach can appear to work for a while. You can wire together services, handlers, and event processors and ship something functional. But as soon as the business evolves — new payment types, new regulatory rules, new integration partners, or changed data flows — the system turns into a web of scattered logic. Maintenance becomes slow, error-prone, and expensive. Changes ripple unpredictably because the business is no longer visible in one coherent place.&lt;/p&gt;

&lt;p&gt;In contrast, a rich domain model keeps the stable business reality in the centre. Change stays local. Payment providers, ETL transformations, or logistics carriers can be swapped without touching the core model. Fewer classes, fewer hand-offs, and far less rediscovery work are required. The result is software that is significantly cheaper to keep alive over its lifetime — often by a large margin.&lt;/p&gt;

&lt;p&gt;The economic benefit is real, but it is not the goal. It is the natural outcome of doing the engineering work correctly: modelling the domain first, responsibilities first, structure first.&lt;/p&gt;




&lt;h3&gt;
  
  
  On AI and domain modelling
&lt;/h3&gt;

&lt;p&gt;Modern AI tools are already excellent at helping with the &lt;em&gt;implementation&lt;/em&gt; phase. They can generate clean code snippets, suggest conventions, enforce patterns, and accelerate boilerplate work once the model is clear.&lt;/p&gt;

&lt;p&gt;But they have no meaningful role in the actual domain modelling itself.&lt;/p&gt;

&lt;p&gt;AI cannot sit in the circle of chairs. It cannot negotiate what a concept &lt;em&gt;is&lt;/em&gt;, what it should know, or what it should never be responsible for. It can mimic patterns it has seen in other codebases, but it lacks the lived understanding of business reality and the ability to discover stable invariants through dialogue.&lt;/p&gt;

&lt;p&gt;Writing the code remains the best mirror for your design. As soon as you start implementing, flaws in the model become visible immediately — that feedback loop is irreplaceable and deeply human. AI can polish and speed up the coding, but it should not be the one discovering or deciding the model. That work still belongs to the people who understand the domain.&lt;/p&gt;




&lt;h3&gt;
  
  
  Final thought
&lt;/h3&gt;

&lt;p&gt;Basic domain modelling is not complicated. It is simply insisting on answering the most fundamental questions first:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What is this thing?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is it responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What belongs inside it?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What should remain outside it?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When those questions are answered clearly, the business becomes visible in the code. Once the business is visible, the system becomes maintainable — from day one and for years to come.&lt;/p&gt;

&lt;p&gt;That is not a luxury. For any software expected to live longer than its current tech stack, it is the foundation.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>java</category>
      <category>richdomainmodels</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Your software development approach is too expensive and too brittle</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Thu, 02 Apr 2026 08:14:22 +0000</pubDate>
      <link>https://dev.to/leonpennings/your-software-development-approach-is-too-expensive-and-too-brittle-4fja</link>
      <guid>https://dev.to/leonpennings/your-software-development-approach-is-too-expensive-and-too-brittle-4fja</guid>
      <description>&lt;p&gt;Most software teams are not struggling because software is inherently chaotic.&lt;/p&gt;

&lt;p&gt;They are struggling because they are paying enormous amounts of money to keep the wrong machine barely usable.&lt;/p&gt;

&lt;p&gt;That sounds dramatic.&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;In fact, it is one of the most normal things in modern software development.&lt;/p&gt;

&lt;p&gt;A lot of systems are built in ways that are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;more expensive than they need to be,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more fragile than they need to be,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;harder to change than they need to be,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and harder to reason about than they need to be.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And yet they still get called “well architected.”&lt;/p&gt;

&lt;p&gt;Why?&lt;/p&gt;

&lt;p&gt;Because in software, there is usually no comparison case.&lt;/p&gt;

&lt;p&gt;No control group.&lt;/p&gt;

&lt;p&gt;No alternate implementation.&lt;/p&gt;

&lt;p&gt;No tractor parked next to the Ferrari.&lt;/p&gt;

&lt;p&gt;So if the thing eventually works, the architecture often gets promoted from merely functional to supposedly good.&lt;/p&gt;

&lt;p&gt;That is one of the deepest blind spots in software engineering.&lt;/p&gt;

&lt;p&gt;And it is how teams end up trying to plow fields with a Ferrari F40.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Ferrari and the tractor&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Imagine you need to plow a field.&lt;/p&gt;

&lt;p&gt;You can choose between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a Ferrari F40, or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a tractor.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This should not be a difficult decision.&lt;/p&gt;

&lt;p&gt;The tractor is not glamorous, but it is aligned to the work.&lt;/p&gt;

&lt;p&gt;It has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the right ground clearance,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right tires,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right torque profile,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right durability characteristics,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the right maintenance expectations,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and the right operational shape.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Ferrari has none of that.&lt;/p&gt;

&lt;p&gt;It is a remarkable machine.&lt;/p&gt;

&lt;p&gt;It is just the wrong one.&lt;/p&gt;

&lt;p&gt;And the mismatch does not merely show up once the work starts.&lt;/p&gt;

&lt;p&gt;It shows up immediately.&lt;/p&gt;

&lt;p&gt;Because before the Ferrari can even begin to perform badly in the field, someone first has to solve a completely absurd problem:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;How do we even make this thing usable for field work?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is where the real cost begins.&lt;/p&gt;

&lt;p&gt;Because now you need compensations.&lt;/p&gt;

&lt;p&gt;You need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;custom adaptations,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;support structures,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;protective workarounds,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;non-native operational handling,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;specialist maintenance,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and constant care to keep the machine functioning in an environment it was never shaped for.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the real problem with a mismatch.&lt;/p&gt;

&lt;p&gt;Not just that it performs badly.&lt;/p&gt;

&lt;p&gt;But that you now have to build an entire support ecosystem around the fact that it is wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;And even that is a cheap mismatch compared to software&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the physical world, the mismatch would at least be visible.&lt;/p&gt;

&lt;p&gt;A Ferrari F40 is obviously a terrible agricultural investment.&lt;/p&gt;

&lt;p&gt;Even with rough but realistic assumptions, the economics are absurd.&lt;/p&gt;

&lt;p&gt;The absurdity would show up on any balance sheet: a collector Ferrari F40 trades for millions, while a capable farm tractor costs a fraction of that, with maintenance profiles to match. Using the supercar for field work would not just perform poorly; it would demand absurd custom adaptations before it could even start.&lt;/p&gt;


&lt;p&gt;So yes: in the real world, using a Ferrari to plow a field would already be economically insane.&lt;/p&gt;

&lt;p&gt;But in software, the mismatch is often much worse.&lt;/p&gt;

&lt;p&gt;Because in software:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the cost is less visible,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the pain is spread over time,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the friction is normalized,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and the organization often has no simpler implementation to compare it to.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means software teams can spend years operating the equivalent of a Ferrari in a muddy field and still call it “engineering maturity.”&lt;/p&gt;

&lt;p&gt;That is the danger.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The uniqueness trap&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is one of the hardest structural problems in software development:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;most applications are built only once.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not once in terms of business purpose, perhaps.&lt;/p&gt;

&lt;p&gt;But once in terms of implementation.&lt;/p&gt;

&lt;p&gt;A team typically does not build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;one version with a cohesive domain model,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;another with CQRS and event choreography,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;another with five microservices,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and then compare cost, reliability, comprehensibility, and adaptability over five years.&lt;/p&gt;

&lt;p&gt;That almost never happens.&lt;/p&gt;

&lt;p&gt;So architecture is rarely judged comparatively.&lt;/p&gt;

&lt;p&gt;It is judged internally.&lt;/p&gt;

&lt;p&gt;And that means if a system eventually “works,” people often conclude that the architecture must have been reasonable.&lt;/p&gt;

&lt;p&gt;But that conclusion is deeply unreliable.&lt;/p&gt;

&lt;p&gt;Because there may have been a far cheaper, simpler, more robust, and more truthful way to build the same thing.&lt;/p&gt;

&lt;p&gt;No one knows.&lt;/p&gt;

&lt;p&gt;Because the tractor version was never built.&lt;/p&gt;

&lt;p&gt;That is the uniqueness trap.&lt;/p&gt;

&lt;p&gt;And it is one of the main reasons accidental complexity survives so easily in software.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Most software architecture is expensive support structure around a mismatch&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the Ferrari metaphor becomes useful.&lt;/p&gt;

&lt;p&gt;If someone insisted on plowing a field with an F40, they would not simply “start plowing.”&lt;/p&gt;

&lt;p&gt;They would first need to invent a whole support system around the mismatch.&lt;/p&gt;

&lt;p&gt;They would need to answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How do we prevent the chassis from bottoming out?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we maintain traction in mud?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we protect components from wear profiles they were never designed for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we attach the wrong machine to the wrong task?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How do we keep it alive under repeated misuse?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;they would need to build a compensating architecture around the fact that the machine is wrong.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is exactly what many software teams do.&lt;/p&gt;

&lt;p&gt;They choose an architectural shape before they understand the domain, and then spend years building support mechanisms around the mismatch.&lt;/p&gt;

&lt;p&gt;That support structure often looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CQRS,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EDA,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestration layers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;distributed workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;microservices,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;command buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;retries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensations,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;synchronization logic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;observability scaffolding,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;deployment choreography,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and framework conventions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And because all of this is technical work, it often feels sophisticated.&lt;/p&gt;

&lt;p&gt;But much of it exists only because the software was shaped incorrectly to begin with.&lt;/p&gt;

&lt;p&gt;That is the setup tax of accidental complexity.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Back to Brooks: essential versus accidental complexity&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Fred Brooks gave us the cleanest possible vocabulary for this problem decades ago.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Essential complexity&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Essential complexity is the irreducible complexity of the business domain itself.&lt;/p&gt;

&lt;p&gt;This is the complexity that actually belongs.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;pricing rules,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;eligibility constraints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;shipment state transitions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reconciliation logic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;metadata rules,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;legal behavior,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;catalog semantics,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scheduling constraints.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This complexity exists because reality is complex.&lt;/p&gt;

&lt;p&gt;You cannot remove it.&lt;/p&gt;

&lt;p&gt;You can only understand it, model it, and localize it properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Accidental complexity&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Accidental complexity is everything introduced by the solution that the problem itself did not require.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;framework conventions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;architectural ceremony,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;messaging choreography,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;unnecessary distribution,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;layered indirection,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;technical orchestration,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensating workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;integration-driven domain shape,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“enterprise” abstraction stacks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This complexity is not business truth.&lt;/p&gt;

&lt;p&gt;It is construction overhead.&lt;/p&gt;

&lt;p&gt;And much of modern software architecture is simply accidental complexity with better branding.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The first job of software design is not to choose an architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is to understand the domain.&lt;/p&gt;

&lt;p&gt;That should not be controversial.&lt;/p&gt;

&lt;p&gt;And yet much of modern software development behaves as if the opposite were true.&lt;/p&gt;

&lt;p&gt;Teams routinely begin with questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Should we use CQRS?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we use EDA?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we split this into microservices?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should this be event-driven?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we separate reads and writes?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should this be asynchronous?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Should we introduce orchestration?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not first questions.&lt;/p&gt;

&lt;p&gt;Those are late questions.&lt;/p&gt;

&lt;p&gt;The first question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What is the business, really?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Until that question is answered properly, every major architectural choice is at risk of being premature.&lt;/p&gt;

&lt;p&gt;And premature architecture is usually just accidental complexity entering the system early enough to become permanent.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The real problem is Pattern-Driven Design&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The issue is not that CQRS, EDA, or messaging can never appear in a system.&lt;/p&gt;

&lt;p&gt;The issue is that many teams no longer design from the domain outward.&lt;/p&gt;

&lt;p&gt;They design from patterns inward.&lt;/p&gt;

&lt;p&gt;That is how software ends up shaped by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;command handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestration layers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service templates,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and framework conventions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;before anyone has actually understood what the business is.&lt;/p&gt;

&lt;p&gt;That is not architecture.&lt;/p&gt;

&lt;p&gt;That is &lt;strong&gt;Pattern-Driven Design&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And Pattern-Driven Design is one of the fastest ways to bury essential complexity under accidental complexity.&lt;/p&gt;

&lt;p&gt;Because once the pattern becomes the starting point, the business no longer gets modeled on its own terms.&lt;/p&gt;

&lt;p&gt;It gets forced to fit the machinery.&lt;/p&gt;

&lt;p&gt;That is not simplification.&lt;/p&gt;

&lt;p&gt;That is distortion.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Always start with the domain model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If the goal is to avoid expensive, brittle, overcompensated systems, then the starting point is straightforward:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Always start with the domain model.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not because every system needs an elaborate object hierarchy.&lt;/p&gt;

&lt;p&gt;Not because “DDD” is fashionable.&lt;/p&gt;

&lt;p&gt;Not because object orientation is sacred.&lt;/p&gt;

&lt;p&gt;But because if you do not start there, something else will define the shape of the software instead.&lt;/p&gt;

&lt;p&gt;And that “something else” is usually accidental.&lt;/p&gt;

&lt;p&gt;If you do not begin with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;what the business concepts are,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;what they mean,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;what they are responsible for,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;what must always be true,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how they are allowed to change,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and how they interact,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the system will instead be shaped by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;endpoints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;persistence structure,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;framework constraints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service boundaries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;message flows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;handler conventions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or transport semantics.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And once that happens, the business is no longer being modeled.&lt;/p&gt;

&lt;p&gt;It is being adapted to the machinery.&lt;/p&gt;

&lt;p&gt;That is where software becomes expensive and brittle.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A user story is not a model&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is one of the most common and costly confusions in software teams.&lt;/p&gt;

&lt;p&gt;A user story is not a model.&lt;/p&gt;

&lt;p&gt;A ticket is not a model.&lt;/p&gt;

&lt;p&gt;A process diagram is not a model.&lt;/p&gt;

&lt;p&gt;A request from the business is not yet the business.&lt;/p&gt;

&lt;p&gt;These things describe surface behavior.&lt;/p&gt;

&lt;p&gt;They do not necessarily describe the actual structure or semantics of the domain.&lt;/p&gt;

&lt;p&gt;That means implementation should never start by merely wiring the request into the chosen architecture.&lt;/p&gt;

&lt;p&gt;It should start by asking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What actually exists here?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What is this concept responsible for?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which rules belong together?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which state transitions are valid?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which interactions are intrinsic?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which behaviors are essential and which are incidental?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the real work of software design.&lt;/p&gt;

&lt;p&gt;And the clearest place to do that work is the domain model.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A rich domain model is not overengineering&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where a lot of modern teams have become confused.&lt;/p&gt;

&lt;p&gt;There is a recurring assumption that a rich domain model is somehow “too much.”&lt;/p&gt;

&lt;p&gt;But in practice, what often happens is not that the logic disappears.&lt;/p&gt;

&lt;p&gt;It simply moves elsewhere.&lt;/p&gt;

&lt;p&gt;If the business logic is not in the model, it will end up in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;services,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestrators,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;subscribers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;validators,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pipelines,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;process managers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or framework glue.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not simplification.&lt;/p&gt;

&lt;p&gt;That is displacement.&lt;/p&gt;

&lt;p&gt;A rich domain model is not about making software “academic.”&lt;/p&gt;

&lt;p&gt;It is about ensuring that the unavoidable business complexity lives in a place where it is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;explicit,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;cohesive,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;inspectable,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and semantically meaningful.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;the model should contain the business.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not the framework.&lt;/p&gt;

&lt;p&gt;Not the message bus.&lt;/p&gt;

&lt;p&gt;Not the choreography.&lt;/p&gt;

&lt;p&gt;Not the deployment topology.&lt;/p&gt;

&lt;p&gt;The business.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;If the domain is simple, the model will be simple&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the usual objection appears:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“But not every system needs a rich domain model.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Correct.&lt;/p&gt;

&lt;p&gt;But that does not weaken the argument at all.&lt;/p&gt;

&lt;p&gt;Because the real point is not that every system needs a complex model.&lt;/p&gt;

&lt;p&gt;The point is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;every system should begin by discovering whether the domain is simple or complex.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And the correct place to do that is still the model.&lt;/p&gt;

&lt;p&gt;If the domain turns out to be simple, then good.&lt;/p&gt;

&lt;p&gt;The model will simply remain small and quiet.&lt;/p&gt;

&lt;p&gt;That is not failure.&lt;/p&gt;

&lt;p&gt;That is successful discovery of simplicity.&lt;/p&gt;

&lt;p&gt;But deciding not to start there is a mistake.&lt;/p&gt;

&lt;p&gt;Because then simplicity is not being discovered.&lt;/p&gt;

&lt;p&gt;It is being assumed.&lt;/p&gt;

&lt;p&gt;And assumed simplicity is one of the easiest ways accidental complexity gets invited in.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;CQRS and EDA are often compensations for unclear modeling&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here is the part many people will resist.&lt;/p&gt;

&lt;p&gt;That is fine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CQRS and EDA are very often workarounds for bad design or not knowing how to model.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That does not mean they can never appear.&lt;/p&gt;

&lt;p&gt;It means they should almost never appear as &lt;strong&gt;up-front architectural choices&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That distinction matters enormously.&lt;/p&gt;

&lt;p&gt;They can absolutely emerge later as observations in retrospect.&lt;/p&gt;

&lt;p&gt;But they should not be adopted as predefined frameworks before the domain has been understood.&lt;/p&gt;

&lt;p&gt;Because once that happens, the architecture is no longer responding to the domain.&lt;/p&gt;

&lt;p&gt;The domain is being forced into the architecture.&lt;/p&gt;

&lt;p&gt;That is backwards.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;CQRS is usually an observation, not a design starting point&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Properly understood, CQRS is not something you “do.”&lt;/p&gt;

&lt;p&gt;It is simply the recognition that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;the model used to change business state is not always the same model best suited for retrieving and navigating information.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is all.&lt;/p&gt;

&lt;p&gt;And sometimes that is perfectly valid.&lt;/p&gt;

&lt;p&gt;A search library like Lucene is a very good example.&lt;/p&gt;

&lt;p&gt;The write side may simply persist documents or structured domain state.&lt;/p&gt;

&lt;p&gt;The read side may support:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;indexing,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;tokenization,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ranking,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;full-text search,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;query optimization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not the same concern.&lt;/p&gt;

&lt;p&gt;That is a natural asymmetry.&lt;/p&gt;

&lt;p&gt;That is CQRS as an observation.&lt;/p&gt;
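&lt;p&gt;That asymmetry can be shown in miniature. The sketch below is hypothetical (the &lt;code&gt;ProductCatalog&lt;/code&gt; class and all its names are illustrative, not taken from any framework): the write side guards a business invariant, and the read side is a small, denormalized index derived from the same state, tuned purely for lookup.&lt;/p&gt;

```java
import java.util.Arrays;

public class ProductCatalog {

    // Write side: the domain model guards the invariant (no blank titles).
    static String[] titles = new String[0];

    // Read side: a denormalized, lower-cased index optimized for search,
    // always rebuilt from write-side state (never a source of truth itself).
    static String[] index = new String[0];

    public static void addProduct(String title) {
        if (title == null || title.trim().isEmpty()) {
            throw new IllegalArgumentException("title required");
        }
        String[] grown = Arrays.copyOf(titles, titles.length + 1);
        grown[titles.length] = title;
        titles = grown;
        reindex();
    }

    static void reindex() {
        index = new String[titles.length];
        for (int i = 0; i < titles.length; i++) {
            index[i] = titles[i].toLowerCase();
        }
    }

    public static boolean search(String term) {
        for (String entry : index) {
            if (entry.contains(term.toLowerCase())) {
                return true;
            }
        }
        return false;
    }
}
```

&lt;p&gt;The read index here is an observation about retrieval needs, derived from the model after the fact; it is not a second architecture imposed on day one.&lt;/p&gt;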

&lt;p&gt;But that is very different from deciding on day one that the architecture will have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;command handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;query handlers,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;buses,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;mediators,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;folders,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pipelines,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and all the associated ceremony.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not domain modeling.&lt;/p&gt;

&lt;p&gt;That is accidental complexity pretending to be rigor.&lt;/p&gt;

&lt;p&gt;Most CQRS implementations are just &lt;strong&gt;CRUD with bureaucracy&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;EDA is often the same mistake, but with more latency&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Event-driven architecture is often sold as if it were inherently sophisticated.&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;Very often, it is simply a sign that direct responsibility was not modeled clearly enough.&lt;/p&gt;

&lt;p&gt;There is a major difference between:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;recognizing a domain fact, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;externalizing causality into a distributed system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not the same thing.&lt;/p&gt;

&lt;p&gt;A domain event can be a useful modeling concept.&lt;/p&gt;

&lt;p&gt;But when every business consequence gets turned into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a message,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a subscriber,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a consumer,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a queue,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a retry policy,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a dead-letter topic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a compensating process,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then what often happened is not decoupling.&lt;/p&gt;

&lt;p&gt;What happened is that one coherent business act was split into multiple technical acts — and the system now needs operational rituals to pretend they are still one thing.&lt;/p&gt;

&lt;p&gt;That is not elegance.&lt;/p&gt;

&lt;p&gt;That is fragmentation.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;If an event is required for correctness, it belongs in the same transaction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where a lot of “event-driven” thinking falls apart.&lt;/p&gt;

&lt;p&gt;If an event represents something the business considers part of the same completed action, then it should not be externalized into eventual consistency theater.&lt;/p&gt;

&lt;p&gt;It should be processed within the same transactional consistency boundary.&lt;/p&gt;

&lt;p&gt;Often that means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;same model,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;same process,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;same database transaction,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;same JDBC transaction.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because if correctness depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;retries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;cleanup,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensating actions,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;dead-letter queues,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;reconciliation jobs,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or support scripts,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the architecture has usually split apart something the business still considers one coherent act.&lt;/p&gt;

&lt;p&gt;That is not decoupling.&lt;/p&gt;

&lt;p&gt;That is a modeling failure disguised as scalability.&lt;/p&gt;

&lt;p&gt;The simple rule is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;If the business says these things are one thing, the software should not split them into many things.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Only effects that are genuinely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;external,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;observational,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;optional,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;or secondary&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;should be allowed to escape the core transactional boundary asynchronously.&lt;/p&gt;

&lt;p&gt;Everything else belongs together.&lt;/p&gt;
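&lt;p&gt;A minimal sketch of that rule, with purely illustrative names (&lt;code&gt;DeliverShipment&lt;/code&gt; and its shipment and invoice statuses are assumptions, not from this article): delivering a shipment and releasing its invoice either both happen or neither does. This is the in-process analogue of putting both updates in one JDBC transaction instead of publishing an event and hoping a subscriber eventually catches up.&lt;/p&gt;

```java
public class DeliverShipment {

    // One coherent business act spans two pieces of state.
    static String shipmentStatus = "IN_TRANSIT";
    static String invoiceStatus = "HELD";

    // Both changes commit together or roll back together; there is no
    // intermediate state for retries or compensations to clean up.
    public static void deliver(boolean invoiceSystemHealthy) {
        String previousShipment = shipmentStatus;
        String previousInvoice = invoiceStatus;
        try {
            shipmentStatus = "DELIVERED";
            if (!invoiceSystemHealthy) {
                throw new IllegalStateException("invoice release failed");
            }
            invoiceStatus = "RELEASED";
        } catch (RuntimeException e) {
            // Rollback: the business never observes a half-completed act.
            shipmentStatus = previousShipment;
            invoiceStatus = previousInvoice;
        }
    }
}
```

&lt;p&gt;With real persistence the same shape holds: two &lt;code&gt;UPDATE&lt;/code&gt; statements on one connection with &lt;code&gt;autoCommit&lt;/code&gt; off, followed by a single &lt;code&gt;commit()&lt;/code&gt;, or a &lt;code&gt;rollback()&lt;/code&gt; on failure.&lt;/p&gt;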




&lt;h2&gt;
  
  
  &lt;strong&gt;Microservices are often bad design with Kubernetes&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;And yes, the same critique applies to microservices.&lt;/p&gt;

&lt;p&gt;Microservices are one of the most overprescribed and underjustified architectural choices in modern software.&lt;/p&gt;

&lt;p&gt;They are usually discussed in terms of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;scaling,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;team autonomy,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;resilience,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;independent deployment,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;ownership.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But that framing hides the actual cost.&lt;/p&gt;

&lt;p&gt;Because microservices are not just a deployment decision.&lt;/p&gt;

&lt;p&gt;They are a fragmentation decision.&lt;/p&gt;

&lt;p&gt;They force teams to commit to distributed boundaries early — often before anyone has proven those boundaries are semantically real.&lt;/p&gt;

&lt;p&gt;And once the split is made, the business has to pretend those boundaries are natural.&lt;/p&gt;

&lt;p&gt;That is how teams end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;cross-service workflows,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;distributed invariants,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;duplicated concepts,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;compensating logic,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service orchestration,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and “eventual consistency” as a lifestyle.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not architecture.&lt;/p&gt;

&lt;p&gt;That is often just what happens when one cohesive domain gets cut into pieces because “small services” sounded modern.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Logical cohesion comes before physical scale&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the usual counterargument appears:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Yes, but what about scale?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Fair question.&lt;/p&gt;

&lt;p&gt;But scale does not rescue bad boundaries.&lt;/p&gt;

&lt;p&gt;It amplifies them.&lt;/p&gt;

&lt;p&gt;If you cannot model a business capability coherently in one process, you are very unlikely to improve it by scattering it across twenty.&lt;/p&gt;

&lt;p&gt;That is because &lt;strong&gt;logical cohesion is a prerequisite for physical distribution&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A coherent system can sometimes be split later if reality genuinely demands it.&lt;/p&gt;

&lt;p&gt;An incoherent system does not become better by being distributed.&lt;/p&gt;

&lt;p&gt;It just becomes harder to debug, harder to reason about, and more expensive to keep alive.&lt;/p&gt;

&lt;p&gt;So yes, scale matters.&lt;/p&gt;

&lt;p&gt;But scale is not an excuse to abandon cohesion before you have even found it.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Small is not the goal. Cohesion is.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The phrase “microservice” already biases the conversation in the wrong direction.&lt;/p&gt;

&lt;p&gt;Because it encourages optimization for smallness.&lt;/p&gt;

&lt;p&gt;But smallness is not the goal.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Cohesion is the goal.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The real objective is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;semantically meaningful boundaries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;high internal density of behavior,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;low cross-boundary coordination.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is very different.&lt;/p&gt;

&lt;p&gt;If one business action routinely requires orchestration across multiple internal services, the split is probably wrong.&lt;/p&gt;

&lt;p&gt;That is one of the best architectural tests there is.&lt;/p&gt;

&lt;p&gt;Because if the business still experiences something as one coherent operation, but the software requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;service A,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;then service B,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;then service C,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;then retries and compensations if one fails,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then the architecture has not discovered a boundary.&lt;/p&gt;

&lt;p&gt;It has manufactured one.&lt;/p&gt;

&lt;p&gt;And now it has to manage the damage.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The real cost of framework-first architecture is not implementation. It is drag.&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where the economics become severe.&lt;/p&gt;

&lt;p&gt;Bad architecture is not expensive merely because it takes slightly longer to build.&lt;/p&gt;

&lt;p&gt;It is expensive because it creates organizational drag for years.&lt;/p&gt;

&lt;p&gt;That drag shows up everywhere.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Slower feature development&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Every change now has to move through machinery that was introduced before the business was properly understood.&lt;/p&gt;

&lt;p&gt;So even small changes require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;coordination,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;contract changes,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;handler updates,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;event flow changes,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;service touchpoints,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;deployment sequencing,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;orchestration review.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not domain complexity.&lt;/p&gt;

&lt;p&gt;That is architecture tax.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;More defects and harder recovery&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;When one coherent business action has been fragmented across:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;services,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;queues,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;projections,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;retries,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and compensations,&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;then failure handling becomes vastly more expensive.&lt;/p&gt;

&lt;p&gt;The question is no longer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Did the business rule execute correctly?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It becomes:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Which part of the distributed choreography failed, and what state is the system now in?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is a much more expensive problem to solve.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Permanent cognitive overhead&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is one of the biggest hidden costs in software.&lt;/p&gt;

&lt;p&gt;A misaligned architecture forces every engineer to carry extra mental load just to understand the system.&lt;/p&gt;

&lt;p&gt;Instead of reasoning directly about the business, they must first reason about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the framework,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the orchestration model,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the service topology,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the event timing,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the deployment shape,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the technical conventions.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means every change is more mentally expensive than it should be.&lt;/p&gt;

&lt;p&gt;And because salaries are the dominant cost in software, &lt;strong&gt;cognitive inefficiency is financial inefficiency&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The architecture becomes a second problem&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;At some point, the software is no longer difficult because the business is difficult.&lt;/p&gt;

&lt;p&gt;It is difficult because the architecture has become a second problem layered on top of the first.&lt;/p&gt;

&lt;p&gt;The system is now solving:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;the business domain, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the consequences of its own design choices.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is pure waste.&lt;/p&gt;

&lt;p&gt;And because most teams never built the tractor version, they often do not even realize how much of their effort is going into supporting the machine rather than solving the problem.&lt;/p&gt;

&lt;p&gt;That is the uniqueness trap again.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The most expensive architecture is not the one that fails immediately&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is the one that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;works just enough,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;survives just long enough,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and obscures its own cost just well enough&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;that nobody ever questions whether the machine was appropriate in the first place.&lt;/p&gt;

&lt;p&gt;That is what makes framework-first architecture so dangerous.&lt;/p&gt;

&lt;p&gt;It often does not fail loudly.&lt;/p&gt;

&lt;p&gt;It succeeds &lt;strong&gt;expensively&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And that is much worse.&lt;/p&gt;

&lt;p&gt;Because visible failure can trigger redesign.&lt;/p&gt;

&lt;p&gt;But expensive success gets institutionalized.&lt;/p&gt;

&lt;p&gt;It becomes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;“our platform,”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“our standard architecture,”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“our scalable foundation,”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;“our engineering maturity.”&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When in reality, it may just be a Ferrari that the organization has spent five years trying to teach to plow a field.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The first responsibility of software architecture is not scalability&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;It is not flexibility.&lt;br&gt;&lt;br&gt;
It is not “future-proofing.”&lt;br&gt;&lt;br&gt;
It is not pattern compliance.&lt;br&gt;&lt;br&gt;
It is not cloud nativeness.&lt;br&gt;&lt;br&gt;
It is not distributed elegance.&lt;/p&gt;

&lt;p&gt;It is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;to make the essential complexity of the business explicit, cohesive, and understandable.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is the job.&lt;/p&gt;

&lt;p&gt;Everything else comes later.&lt;/p&gt;

&lt;p&gt;And if the software cannot explain the business clearly through its model, then it is not well architected — no matter how many services, handlers, events, buses, frameworks, or diagrams surround it.&lt;/p&gt;

&lt;p&gt;Because at that point, the architecture is no longer serving the business.&lt;/p&gt;

&lt;p&gt;The business is serving the architecture.&lt;/p&gt;

&lt;p&gt;And that is why so much modern software is too expensive and too brittle.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;A much better default&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A better architectural instinct is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Do not ask what architecture you can build.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask what architecture the domain actually justifies.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And if the answer is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;smaller,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more cohesive,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;more local,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;less distributed,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;less framework-driven,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;and more explicit in its model&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;than current fashion prefers, that is not a sign of immaturity.&lt;/p&gt;

&lt;p&gt;It is often a sign that the problem is finally being understood.&lt;/p&gt;

&lt;p&gt;The next time a team is asked to “choose an architecture,” the first question should not be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Which framework?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which pattern?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which cloud primitive?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which service template?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It should be:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What is the business, and what is the cheapest, most coherent way to represent it truthfully?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because software does not become expensive and brittle by accident.&lt;/p&gt;

&lt;p&gt;It becomes expensive and brittle when teams choose machinery before they understand the work.&lt;/p&gt;

&lt;p&gt;And from that point on, they do not just have a domain to solve.&lt;/p&gt;

&lt;p&gt;They also have an architecture to survive.&lt;/p&gt;

&lt;p&gt;That is not engineering maturity.&lt;/p&gt;

&lt;p&gt;That is paying interest on a design mistake.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>cqrs</category>
      <category>eventdriven</category>
      <category>ddd</category>
    </item>
    <item>
      <title>When CI/CD Becomes the Goal: The Quiet Erosion of Engineering Ownership</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Mon, 30 Mar 2026 06:04:01 +0000</pubDate>
      <link>https://dev.to/leonpennings/when-cicd-becomes-the-goal-the-quiet-erosion-of-engineering-ownership-3006</link>
      <guid>https://dev.to/leonpennings/when-cicd-becomes-the-goal-the-quiet-erosion-of-engineering-ownership-3006</guid>
      <description>&lt;p&gt;Software delivery has become one of the most ritualized practices in modern development.&lt;/p&gt;

&lt;p&gt;Pipelines are longer.&lt;br&gt;&lt;br&gt;
Checks are stricter.&lt;br&gt;&lt;br&gt;
Deployments are more automated.&lt;br&gt;&lt;br&gt;
Dashboards are greener than ever.&lt;/p&gt;

&lt;p&gt;Yet in many teams, software has not become more engineered.&lt;/p&gt;

&lt;p&gt;It has become more processed.&lt;/p&gt;

&lt;p&gt;That distinction matters.&lt;/p&gt;

&lt;p&gt;CI/CD was never intended as an excuse to pile machinery on top of weak engineering. It started as a practical response to real problems. But somewhere along the way, much of the industry stopped using it to support strong engineering and began using it to compensate for its absence.&lt;/p&gt;

&lt;p&gt;That is where things quietly went wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  What CI Originally Solved
&lt;/h2&gt;

&lt;p&gt;The original idea behind Continuous Integration was straightforward.&lt;/p&gt;

&lt;p&gt;It was never primarily about pipelines, YAML, or branch policies. It was about forcing reality into the room early.&lt;/p&gt;

&lt;p&gt;Developers were expected to integrate frequently — often daily — into a shared codebase. The goal was simple: prevent teams from drifting into parallel worlds and discovering too late that their work didn’t fit together.&lt;/p&gt;

&lt;p&gt;That solved a real problem.&lt;/p&gt;

&lt;p&gt;Frequent integration forced teams to confront overlap, collisions, ambiguity, and unintended coupling while the cost of correction was still low. But CI did something subtler and arguably more important: it reinforced the team while development was still happening.&lt;/p&gt;

&lt;p&gt;Developers didn’t merely discover each other’s work after the fact. They had to continuously adapt to one another’s choices, assumptions, and interpretations of the system in the moment. That pressure was not a flaw. It was the point.&lt;/p&gt;

&lt;p&gt;This is how engineering sharpens itself — not by letting everyone disappear into isolated implementation tunnels and comparing answers at the end, but by shaping and correcting each other during the act of construction. Real engineering teams do not just divide work. They reinforce shared understanding.&lt;/p&gt;

&lt;p&gt;Original CI made integration a living team concern rather than a delayed administrative event.&lt;/p&gt;

&lt;p&gt;That was healthy engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Continuous Delivery Originally Solved
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery was aimed at a different concern than CI.&lt;/p&gt;

&lt;p&gt;Not integration itself, but the path from integrated code to running software.&lt;/p&gt;

&lt;p&gt;And to be fair, that was not a fake concern.&lt;/p&gt;

&lt;p&gt;But it also was not universally the disaster modern delivery culture sometimes pretends it was.&lt;/p&gt;

&lt;p&gt;In many Java systems, deployment was already fairly boring. An application server was stopped, a WAR or EAR was replaced, the instance was restarted, and the system was verified. That was not always elegant, but neither was it some fundamental engineering crisis.&lt;/p&gt;

&lt;p&gt;So the real value of CD was not that it magically solved an impossible deployment problem.&lt;/p&gt;

&lt;p&gt;Its promise was narrower and more practical: to make the release path more repeatable, more standardized, less person-dependent, and easier to execute consistently across teams and environments.&lt;/p&gt;

&lt;p&gt;That is a reasonable goal.&lt;/p&gt;

&lt;p&gt;And in some environments, it becomes more than reasonable — it becomes necessary.&lt;/p&gt;

&lt;p&gt;Once deployments span multiple machines, rolling restarts, clustered services, or orchestrated server fleets, manual deployment stops being merely inconvenient and starts becoming operationally impractical. At that point, automation is not theater. It is simply the sane way to move software safely and consistently.&lt;/p&gt;

&lt;p&gt;That is where CD has real value.&lt;/p&gt;

&lt;p&gt;But not all release friction was technical.&lt;/p&gt;

&lt;p&gt;In many organizations, a significant part of the “deployment problem” came from the surrounding structure itself: separate infrastructure departments, ticket-driven handoffs, release scheduling rituals, and operational processes that turned even simple deployments into expensive coordination exercises.&lt;/p&gt;

&lt;p&gt;That pain was real — but it is important to name it accurately.&lt;/p&gt;

&lt;p&gt;Often, the difficulty was not in replacing the software.&lt;/p&gt;

&lt;p&gt;It was in navigating the organization around it.&lt;/p&gt;

&lt;p&gt;Modern delivery automation did remove a great deal of that friction.&lt;/p&gt;

&lt;p&gt;But in many cases, the underlying pattern did not disappear. It simply moved.&lt;/p&gt;

&lt;p&gt;Where infrastructure teams once controlled servers and release windows, platform and pipeline teams now increasingly control the mechanics of delivery itself. The form changed. The separation often did not.&lt;/p&gt;

&lt;p&gt;And that matters more than it first appears.&lt;/p&gt;

&lt;p&gt;Because once the release path is defined by people who do not carry the semantic or business consequences of the software, the pipeline can quietly become a surrogate for ownership.&lt;/p&gt;

&lt;p&gt;That is where the trade-offs began.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where It Started to Go Wrong
&lt;/h2&gt;

&lt;p&gt;The issue is not that CI/CD solved fake problems. The issue is that much of the industry adopted the tooling and rituals while quietly abandoning the engineering assumptions that gave those practices their value.&lt;/p&gt;

&lt;p&gt;Once that happened, CI/CD stopped reinforcing good engineering and started compensating for weak engineering instead.&lt;/p&gt;

&lt;p&gt;A lot of what now passes for “CI” is no longer continuous integration.&lt;/p&gt;

&lt;p&gt;It is deferred reconciliation.&lt;/p&gt;

&lt;p&gt;Developers work in isolation on long-lived branches, treating the merge as the first serious moment of contact with the rest of the system. The pain that CI was designed to expose early is now allowed to accumulate until the branch is “ready.” The pipeline creates the illusion of discipline, but the underlying practice has shifted.&lt;/p&gt;

&lt;p&gt;The old model forced developers to adapt to each other continuously.&lt;/p&gt;

&lt;p&gt;The modern branch-heavy model lets them adapt only at the end.&lt;/p&gt;

&lt;p&gt;What makes this regression more serious is that it did not happen accidentally. In many teams, CI was gradually reshaped to serve a different goal: continuous deployment of independently developed changes.&lt;/p&gt;

&lt;p&gt;That sounds efficient, but it came with a structural trade-off.&lt;/p&gt;

&lt;p&gt;In order to deploy “each feature” continuously, work first had to become isolatable. That pushed development toward branch-based workflows, delayed integration, and feature-level thinking. The unit of progress stopped being the continuously evolving shared system and became the individually shippable change.&lt;/p&gt;

&lt;p&gt;And once that shift happened, CI changed with it.&lt;/p&gt;

&lt;p&gt;What used to be immediate feedback on a real check-in against the shared codebase became a staged validation process around isolated work. The branch is tested. The pull request is reviewed. The pipeline is green. But the fully integrated system — in motion, under changing conditions, with multiple real changes meeting each other — is often not meaningfully encountered until much later.&lt;/p&gt;

&lt;p&gt;That is not a small process adjustment.&lt;/p&gt;

&lt;p&gt;It is a relocation of feedback.&lt;/p&gt;

&lt;p&gt;And when feedback moves later, risk moves with it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Social Cost Nobody Mentions
&lt;/h2&gt;

&lt;p&gt;This changes far more than code flow.&lt;/p&gt;

&lt;p&gt;It changes the social structure of development itself.&lt;/p&gt;

&lt;p&gt;Instead of reinforcing each other during construction, developers increasingly become delayed reviewers, test-runners, or approval gates. The shared act of building gives way to a serialized process of isolated work followed by late validation.&lt;/p&gt;

&lt;p&gt;That may still produce working software, but it does not produce the same quality of team thinking.&lt;/p&gt;

&lt;p&gt;The old model created friction early, while people were still shaping the solution together. The newer model often postpones that friction until after mental commitment has set in. At that point, integration becomes negotiation rather than collaboration.&lt;/p&gt;

&lt;p&gt;That is a significant regression.&lt;/p&gt;

&lt;p&gt;A team stops behaving like a team.&lt;/p&gt;

&lt;p&gt;It starts behaving like a collection of individuals working in parallel and negotiating reality afterward.&lt;/p&gt;

&lt;p&gt;And once that happens, the pipeline begins to replace the team as the thing that “validates” software.&lt;/p&gt;

&lt;p&gt;That is a dangerous substitution.&lt;/p&gt;

&lt;p&gt;Because a team can challenge assumptions, surface ambiguity, and expose misunderstandings while the system is still being shaped.&lt;/p&gt;

&lt;p&gt;A pipeline cannot.&lt;/p&gt;

&lt;p&gt;It can only tell you whether a predefined process passed.&lt;/p&gt;

&lt;p&gt;It cannot tell you whether the software still makes sense.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Illusion of Delivery Maturity
&lt;/h2&gt;

&lt;p&gt;Continuous Delivery has suffered a parallel fate.&lt;/p&gt;

&lt;p&gt;In theory, CD makes deployments safe by making them repeatable. In practice, many teams achieve “safety” by surrounding brittle systems with ever-growing layers of process, abstraction, and automation. The application becomes harder to understand. The deployment model grows more complex. And the pipeline swells to absorb complexity that should never have existed in the software itself.&lt;/p&gt;

&lt;p&gt;Eventually, the release system becomes more elaborate than the software it delivers.&lt;/p&gt;

&lt;p&gt;This raises an uncomfortable question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are we automating a healthy system, or are we automating around an unhealthy one?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If deployment is difficult, brittle, or mysterious, there are usually only two explanations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The system genuinely operates in a complex environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The software was never designed with operability in mind.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first is sometimes unavoidable.&lt;/p&gt;

&lt;p&gt;The second is too often ignored.&lt;/p&gt;




&lt;h2&gt;
  
  
  Good Deployment Begins in Design
&lt;/h2&gt;

&lt;p&gt;Much deployment pain is treated as the inevitable cost of “modern systems.” In many business applications, that pain is not inevitable — it is designed in.&lt;/p&gt;

&lt;p&gt;A well-engineered application should be deployable because it was built to be deployable: operational state kept where it belongs, environment-specific behavior minimized, startup made deterministic, migrations treated as part of the lifecycle, and only what truly needs to vary externalized.&lt;/p&gt;
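
&lt;p&gt;As a minimal sketch of that idea (all names here are hypothetical): externalize only the value that genuinely varies per environment, and make startup fail fast and deterministically when it is missing.&lt;/p&gt;

```java
// Minimal sketch (hypothetical names): only the database URL truly varies
// per environment; startup is deterministic and fails fast when it is absent.
public class AppConfig {
    private final String dbUrl;

    public AppConfig(String dbUrlFromEnvironment) {
        // Fail fast: a missing required value stops startup immediately,
        // instead of surfacing as a mysterious runtime failure later.
        if (dbUrlFromEnvironment == null || dbUrlFromEnvironment.isBlank()) {
            throw new IllegalStateException("APP_DB_URL must be set");
        }
        this.dbUrl = dbUrlFromEnvironment;
    }

    public String dbUrl() {
        return dbUrl;
    }

    public static void main(String[] args) {
        // In a real system the value would come from System.getenv("APP_DB_URL").
        AppConfig cfg = new AppConfig("jdbc:postgresql://localhost/app");
        System.out.println(cfg.dbUrl());
    }
}
```

&lt;p&gt;Everything else stays deterministic inside the application, which is precisely what keeps the release path boring.&lt;/p&gt;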

&lt;p&gt;When deployment is simple by design, the need for pipeline heroics drops dramatically.&lt;/p&gt;

&lt;p&gt;Automation then becomes what it was meant to be: a way to remove repetition and error from a sound process — not a bandage for an unsound one.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Dangerous Slide into “Production as Test Environment”
&lt;/h2&gt;

&lt;p&gt;This is where the earlier shift becomes dangerous.&lt;/p&gt;

&lt;p&gt;When integration is no longer happening continuously during development, reality does not disappear.&lt;/p&gt;

&lt;p&gt;It simply waits.&lt;/p&gt;

&lt;p&gt;And increasingly, that reality is encountered much later — often in environments close to or inside production.&lt;/p&gt;

&lt;p&gt;This is why so many modern delivery models quietly drift toward using production as their final validation environment. Not because teams explicitly decide to “test in production,” but because the system as a whole often meets changing real-world conditions there more meaningfully than anywhere before it.&lt;/p&gt;

&lt;p&gt;That is a very different feedback model from original CI.&lt;/p&gt;

&lt;p&gt;Original CI gave teams rapid feedback on check-ins against a shared and continuously evolving codebase. Modern branch-heavy CI/CD often gives rapid feedback on isolated changes, then relies on deployment frequency to surface what only the integrated whole can reveal.&lt;/p&gt;

&lt;p&gt;That is not the same kind of safety.&lt;/p&gt;

&lt;p&gt;It is simply a different place to discover reality.&lt;/p&gt;

&lt;p&gt;Smaller and faster deployments are often presented as inherently safer.&lt;/p&gt;

&lt;p&gt;But that is only true if one quietly assumes that the meaning and impact of a change are already well understood.&lt;/p&gt;

&lt;p&gt;In practice, that is often exactly what is not true.&lt;/p&gt;

&lt;p&gt;A smaller deployment unit may reduce rollback scope or make blame attribution easier, but that is not the same as reducing actual engineering risk. If anything, the opposite can happen: the change is seen by fewer people, discussed less deeply, and integrated less continuously before it reaches production.&lt;/p&gt;

&lt;p&gt;That does not reduce uncertainty.&lt;/p&gt;

&lt;p&gt;It merely packages uncertainty into smaller increments.&lt;/p&gt;

&lt;p&gt;And when production changes multiple times per day, stability itself begins to shrink. The system is only as stable as the scenarios already captured in the automated tests — tests which are themselves usually adapted to the most recent expected path into production.&lt;/p&gt;

&lt;p&gt;That creates a dangerous illusion of control.&lt;/p&gt;

&lt;p&gt;The software appears validated, but only within the shrinking boundary of what was recently anticipated.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Semantic Risk Pipelines Cannot See
&lt;/h2&gt;

&lt;p&gt;More importantly, the true impact of a change is often not visible from the code itself.&lt;/p&gt;

&lt;p&gt;A seemingly trivial modification for a developer can carry major domain consequences. And a technically substantial change can sometimes be domain-trivial. That asymmetry matters.&lt;/p&gt;

&lt;p&gt;Because developers are not domain experts.&lt;/p&gt;

&lt;p&gt;They can understand the implementation, but they cannot reliably infer the full business meaning of a change from code alone — not without sustained discussion and feedback from people who actually understand the domain.&lt;/p&gt;

&lt;p&gt;And the most dangerous part is that this is not predictable.&lt;/p&gt;

&lt;p&gt;It is not true that every change requires deep domain validation.&lt;/p&gt;

&lt;p&gt;But it is also not reliably obvious which changes do.&lt;/p&gt;

&lt;p&gt;That is exactly why semantic risk cannot be reduced to diff size, deployment frequency, or pipeline confidence.&lt;/p&gt;

&lt;p&gt;Many of the hardest failures are not technical crashes or exceptions. They are semantic failures: the system behaves exactly as the code and tests dictate, yet wrongly according to the business.&lt;/p&gt;

&lt;p&gt;That is where domain experts matter.&lt;/p&gt;

&lt;p&gt;And no amount of deployment frequency changes that fact.&lt;/p&gt;




&lt;h2&gt;
  
  
  Human Validation Is Not the Enemy of Engineering
&lt;/h2&gt;

&lt;p&gt;One of the stranger modern assumptions is that removing human judgment from the release path is always progress.&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;There is a crucial difference between automating repeatable mechanics and eliminating deliberate validation. Those should never be conflated.&lt;/p&gt;

&lt;p&gt;A strong delivery process should automate the mechanical parts — build, package, verify, deploy to controlled environments, reproduce release steps consistently.&lt;/p&gt;

&lt;p&gt;That is sensible.&lt;/p&gt;

&lt;p&gt;But whether a business-critical change should be exposed to real users is not always a purely technical question. In many systems, it is also a domain question.&lt;/p&gt;

&lt;p&gt;Human validation is not a sign of immaturity. Sometimes it is the last remaining sign that someone still understands the difference between technical correctness and business correctness.&lt;/p&gt;

&lt;p&gt;That distinction is too often lost.&lt;/p&gt;




&lt;h2&gt;
  
  
  Application Quality Is Not Generated By Tooling
&lt;/h2&gt;

&lt;p&gt;Part of the problem is that “quality” itself has increasingly been redefined through the lens of tooling.&lt;/p&gt;

&lt;p&gt;In many organizations, delivery practices are no longer primarily shaped by engineers with deep ownership of the software and its domain. They are shaped by process-specialized roles, platform teams, and tooling consultants whose authority often comes from familiarity with delivery systems rather than from responsibility for the software’s behavior, design, or business consequence.&lt;/p&gt;

&lt;p&gt;That changes what gets optimized.&lt;/p&gt;

&lt;p&gt;Quality slowly stops meaning clarity, simplicity, robustness, and domain correctness.&lt;/p&gt;

&lt;p&gt;It starts meaning compliance: green pipelines, approved stages, scan completion, branch policy adherence, and process conformance.&lt;/p&gt;

&lt;p&gt;Those may be useful signals.&lt;/p&gt;

&lt;p&gt;But useful signals can become dangerous substitutes.&lt;/p&gt;

&lt;p&gt;And that is how problem analysis gets replaced by cargo cults.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Regression: Loss of Ownership
&lt;/h2&gt;

&lt;p&gt;Underneath all of this lies a deeper problem than pipelines or deployment buttons.&lt;/p&gt;

&lt;p&gt;The quiet regression is the loss of engineering ownership.&lt;/p&gt;

&lt;p&gt;Modern delivery culture has made it increasingly possible for developers to produce deployable software without truly understanding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;how the system runs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it is released&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it evolves&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it fails&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it behaves in production&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how it fits the business domain as a whole&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not progress.&lt;/p&gt;

&lt;p&gt;That is separation from consequence.&lt;/p&gt;

&lt;p&gt;Once that separation occurs, the pipeline stops being a tool.&lt;/p&gt;

&lt;p&gt;It becomes a substitute for engineering responsibility.&lt;/p&gt;

&lt;p&gt;Pipelines can tell you whether something passed the process.&lt;/p&gt;

&lt;p&gt;They cannot tell you whether the software is truly understood.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Healthy CI/CD Should Actually Look Like
&lt;/h2&gt;

&lt;p&gt;Good CI/CD is not about maximum automation.&lt;/p&gt;

&lt;p&gt;It is about preserving engineering discipline while reducing mechanical waste.&lt;/p&gt;

&lt;p&gt;That usually looks far less glamorous than modern tooling culture suggests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Developers integrate continuously into a shared mainline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Incomplete work is handled through discipline and design, not default branch isolation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build and verification are automated and fast&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Deployment to lower environments is repeatable and low-friction&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Acceptance happens in a controlled way&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Production deployment is simple enough to trust&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Human validation exists where domain risk justifies it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The release path is designed to support ownership, not replace it&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not anti-automation.&lt;/p&gt;

&lt;p&gt;It is anti-theater.&lt;/p&gt;

&lt;p&gt;And that distinction matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Question
&lt;/h2&gt;

&lt;p&gt;CI/CD is not really a tooling question.&lt;/p&gt;

&lt;p&gt;It is a quality question.&lt;/p&gt;

&lt;p&gt;The real issue is not whether a team has pipelines, feature flags, deployment jobs, or environment promotion stages.&lt;/p&gt;

&lt;p&gt;The real issue is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Does the delivery process reflect a well-engineered system and a team that understands it — or is it compensating for the absence of both?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is the question most teams avoid.&lt;/p&gt;

&lt;p&gt;Because if the honest answer is the second one, then the pipeline is not a sign of maturity.&lt;/p&gt;

&lt;p&gt;It is camouflage.&lt;/p&gt;

&lt;p&gt;And that may be the most uncomfortable truth in modern software delivery:&lt;/p&gt;

&lt;p&gt;sometimes what looks like engineering progress is really just process growth around declining engineering depth.&lt;/p&gt;

&lt;p&gt;CI/CD used as a substitute for the very discipline it was supposed to support.&lt;/p&gt;

&lt;p&gt;And once that happens, delivery stops being an expression of engineering quality.&lt;br&gt;&lt;br&gt;
It becomes a process for moving misunderstood software into production more efficiently.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>cicd</category>
      <category>java</category>
      <category>software</category>
    </item>
    <item>
      <title>Software Testing: You’re Probably Doing It Wrong</title>
      <dc:creator>Leon Pennings</dc:creator>
      <pubDate>Thu, 26 Mar 2026 08:32:29 +0000</pubDate>
      <link>https://dev.to/leonpennings/software-testing-youre-probably-doing-it-wrong-564h</link>
      <guid>https://dev.to/leonpennings/software-testing-youre-probably-doing-it-wrong-564h</guid>
      <description>&lt;p&gt;Software testing has become one of the most ritualized practices in modern development.&lt;/p&gt;

&lt;p&gt;That is not because testing is unimportant. Quite the opposite.&lt;/p&gt;

&lt;p&gt;Testing matters.&lt;/p&gt;

&lt;p&gt;But in many teams, testing has quietly expanded beyond its actual role. It is no longer treated as a tool for verifying software behavior. It is increasingly treated as a proxy for understanding, a proxy for design, and even a proxy for quality itself.&lt;/p&gt;

&lt;p&gt;And that is where the problem begins.&lt;/p&gt;

&lt;p&gt;Because testing can verify behavior.&lt;br&gt;&lt;br&gt;
But it cannot replace engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Testing in Software Is a Verification Discipline&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;At its core, testing in software has a very specific role:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;to verify whether a system behaves acceptably under certain conditions.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is valuable. Necessary, even.&lt;/p&gt;

&lt;p&gt;A good test can help answer questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Does this behavior still work?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does this input still lead to the expected output?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Did this change introduce a regression?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is where testing is strong.&lt;/p&gt;

&lt;p&gt;But notice what testing does &lt;strong&gt;not&lt;/strong&gt; answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Is the design coherent?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the architecture proportional to the problem?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the model a good representation of the domain?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is this implementation economical to evolve?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are engineering questions.&lt;/p&gt;

&lt;p&gt;And when teams start treating test suites as if they answer them, behavioral verification gets confused with software quality itself.&lt;/p&gt;

&lt;p&gt;That is a costly mistake.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Math Test Problem&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A great deal of modern team testing resembles cheating on a math exam.&lt;/p&gt;

&lt;p&gt;Imagine students defining the exam questions during class, together with the teacher, while learning the material. By the time the exam arrives, the goal is no longer to understand the mathematics. The goal is to reproduce the answers that were already agreed upon.&lt;/p&gt;

&lt;p&gt;Something very similar happens in software teams.&lt;/p&gt;

&lt;p&gt;During refinement, development, or collaborative scenario-writing sessions, expected behavior is often defined in detail in advance. Tests are written, scenarios are formalized, and the team aligns around them.&lt;/p&gt;

&lt;p&gt;In theory, this sounds excellent.&lt;/p&gt;

&lt;p&gt;In practice, it introduces a subtle distortion:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;the implementation target shifts from understanding the business domain to passing the agreed test scenarios.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is a very different goal.&lt;/p&gt;

&lt;p&gt;The result is not necessarily a bad system. But it is often a system optimized for compliance rather than understanding.&lt;/p&gt;

&lt;p&gt;And the danger is obvious:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;how often is the first interpretation of a business need fully correct?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If the test scenarios are based on incomplete understanding, then all the rigor in the world only helps build the wrong thing more reliably.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Verification Is Not Validation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the distinction many teams lose.&lt;/p&gt;

&lt;p&gt;Testing is very good at &lt;strong&gt;verification&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Did the implementation behave as intended?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does the system still behave as expected?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But verification is not the same as &lt;strong&gt;validation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Was the right thing built?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is this actually a fitting solution for the domain?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A system can satisfy every agreed scenario and still be fundamentally wrong.&lt;/p&gt;

&lt;p&gt;It can behave correctly while being poorly modeled.&lt;br&gt;&lt;br&gt;
It can produce the expected output while being overcomplicated.&lt;br&gt;&lt;br&gt;
It can pass every acceptance test while solving the wrong problem in the wrong way.&lt;/p&gt;

&lt;p&gt;In other words:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A passing test suite proves behavioral agreement—not solution fitness.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And that distinction matters far more than many teams admit.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Ferrari in the Field&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A Ferrari F40 can absolutely move across a field.&lt;/p&gt;

&lt;p&gt;It can produce motion. It can get from one side to the other. It can, in the most literal sense, “do the job.”&lt;/p&gt;

&lt;p&gt;That does not make it a tractor.&lt;/p&gt;

&lt;p&gt;The same is true in software.&lt;/p&gt;

&lt;p&gt;A system can satisfy all functional expectations and still be the wrong machine for the domain. It can be too expensive to change, too fragile to extend, too over-engineered for the actual need, or too structurally rigid to survive evolving business requirements.&lt;/p&gt;

&lt;p&gt;Testing does not expose that.&lt;/p&gt;

&lt;p&gt;Because testing can tell whether the machine moves.&lt;/p&gt;

&lt;p&gt;It cannot tell whether it is the right machine.&lt;/p&gt;

&lt;p&gt;And that is not a trivial distinction.&lt;br&gt;&lt;br&gt;
That is the distinction between &lt;strong&gt;working software&lt;/strong&gt; and &lt;strong&gt;good engineering&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;When Tests Stop Following Behavior and Start Following Structure&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is where testing often becomes actively harmful.&lt;/p&gt;

&lt;p&gt;If testing is a behavioral verification discipline, then it should limit itself to verifying behavior.&lt;/p&gt;

&lt;p&gt;But many modern testing practices go deeper than that.&lt;/p&gt;

&lt;p&gt;They start testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;local call structures&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;internal collaborations&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;class-level decomposition&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;implementation fragments in isolation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At that point, the tests are no longer verifying the system in any meaningful way.&lt;/p&gt;

&lt;p&gt;They are verifying the current shape of the code.&lt;/p&gt;

&lt;p&gt;That is not the same thing.&lt;/p&gt;

&lt;p&gt;And once that happens, the test suite stops protecting change and starts resisting it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The moment a test depends on how the behavior is achieved instead of what behavior is observed, it becomes a brake on refactoring.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is one of the most under-discussed quality problems in software teams.&lt;/p&gt;

&lt;p&gt;Because now every structural improvement becomes expensive:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;rename a collaborator → tests break&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;merge responsibilities → tests break&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;simplify orchestration → tests break&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;move logic to a better abstraction → tests break&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not because behavior changed.&lt;br&gt;&lt;br&gt;
But because the test suite was never really about behavior to begin with.&lt;/p&gt;
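
&lt;p&gt;The difference is easy to illustrate. In this hypothetical sketch, the test touches only observable behavior (inputs and outputs), so any internal restructuring that preserves that behavior leaves it green:&lt;/p&gt;

```java
// Hypothetical illustration: the check asserts only on observable behavior,
// so renaming helpers, merging responsibilities, or restructuring the
// internals of totalCents cannot break it.
public class PriceCalculator {
    // How the total is computed is an internal detail the test never sees.
    public int totalCents(int[] itemCents) {
        int total = 0;
        for (int cents : itemCents) {
            total = total + cents;
        }
        return total;
    }

    public static void main(String[] args) {
        PriceCalculator calculator = new PriceCalculator();
        // Behavioral assertion: these items cost 400 cents in total.
        int total = calculator.totalCents(new int[] {100, 250, 50});
        if (total != 400) {
            throw new AssertionError("expected 400, got " + total);
        }
        System.out.println("behavioral check passed");
    }
}
```

&lt;p&gt;A structure-coupled alternative would instead assert which collaborators were called and in what order; that version breaks on every refactor, even when the observable result is identical.&lt;/p&gt;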




&lt;h2&gt;
  
  
  &lt;strong&gt;Why Isolated Class Testing Often Misses the Point&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the clearest examples of this problem is isolated class testing.&lt;/p&gt;

&lt;p&gt;A class exists in code. Therefore, many teams assume it should be testable independently.&lt;/p&gt;

&lt;p&gt;But a technical unit is not automatically a meaningful behavioral unit.&lt;/p&gt;

&lt;p&gt;That assumption is rarely challenged.&lt;/p&gt;

&lt;p&gt;Take something like a PDF information extractor.&lt;/p&gt;

&lt;p&gt;That behavior does not meaningfully exist in a vacuum. It depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;parsing logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;normalization logic&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;extraction rules&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;object interpretation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;domain-level decisions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Yet what often happens?&lt;/p&gt;

&lt;p&gt;A single class gets tested in isolation.&lt;br&gt;&lt;br&gt;
Its collaborators are mocked.&lt;br&gt;&lt;br&gt;
Its environment is simulated.&lt;br&gt;&lt;br&gt;
Its context is stripped away.&lt;/p&gt;

&lt;p&gt;Now the test no longer asks:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Can the system reliably extract useful information from PDFs?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead, it asks something far weaker:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Does this one implementation fragment behave under synthetic scaffolding?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is not meaningful verification.&lt;/p&gt;

&lt;p&gt;That is structural rehearsal.&lt;/p&gt;

&lt;p&gt;And the cost is not just conceptual—it is practical.&lt;/p&gt;

&lt;p&gt;Because now the test suite is coupled to a local decomposition that may not even survive the next decent refactor.&lt;/p&gt;

&lt;p&gt;We end up with a test suite that passes perfectly even if the integration between those fragments is fundamentally broken—because we’ve tested the components, but ignored the composition.&lt;/p&gt;
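
&lt;p&gt;A contrived but concrete sketch of that failure mode: two fragments that each pass their isolated test while disagreeing about a shared assumption, so the composed system silently loses structure.&lt;/p&gt;

```java
// Hypothetical illustration: each fragment passes its own isolated test,
// but they disagree on the delimiter, so the composed behavior is broken.
public class CompositionGap {
    // Fragment 1: writes fields separated by ';' -- its isolated test passes.
    static String serialize(String[] fields) {
        return String.join(";", fields);
    }

    // Fragment 2: reads fields separated by ',' -- its isolated test passes too.
    static String[] parse(String line) {
        return line.split(",");
    }

    public static void main(String[] args) {
        // Both isolated checks succeed:
        if (!serialize(new String[] {"a", "b"}).equals("a;b")) throw new AssertionError();
        if (parse("a,b").length != 2) throw new AssertionError();

        // But the round trip through both fragments loses the structure:
        String[] roundTrip = parse(serialize(new String[] {"a", "b"}));
        System.out.println("fields recovered: " + roundTrip.length); // 1, not 2
    }
}
```

&lt;p&gt;Every component is "verified," yet the only behavior anyone actually cares about — the composition — was never tested at all.&lt;/p&gt;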




&lt;h2&gt;
  
  
  &lt;strong&gt;Coverage Is Not Confidence&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Test coverage is another example of verification ritual turning into proxy engineering.&lt;/p&gt;

&lt;p&gt;Coverage has become a metric in its own right.&lt;/p&gt;

&lt;p&gt;Teams report it. Managers ask for it. Pipelines display it as if it were a signal of quality.&lt;/p&gt;

&lt;p&gt;But coverage says only one thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;this code was executed while a test ran.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That’s it.&lt;/p&gt;

&lt;p&gt;It does &lt;strong&gt;not&lt;/strong&gt; tell you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;whether the test is meaningful&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether important behavior is protected&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether the assertions matter&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;whether the design is safe to evolve&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And yet teams optimize for it anyway.&lt;/p&gt;

&lt;p&gt;That leads to the usual absurdities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;getter/setter tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;trivial constructor tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;one-line branch inflation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;synthetic assertions written only to satisfy the metric&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is not quality.&lt;br&gt;&lt;br&gt;
It is administrative theater.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Coverage is a measure of execution, not a measure of insight.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And once a team starts chasing the number instead of the confidence, the metric has already failed.&lt;/p&gt;
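&lt;p&gt;The failure mode is easy to reproduce. The &lt;code&gt;Discount&lt;/code&gt; class below is a made-up example with a deliberate bug; a coverage tool would happily report every line of it as covered by this “test”:&lt;/p&gt;

```java
// A branchy method with a deliberate bug in one branch.
class Discount {
    static int percentFor(String tier) {
        if (tier.equals("gold")) return 20;
        if (tier.equals("silver")) return 15;  // bug: suppose the spec says 10
        return 0;
    }
}

public class CoverageTheater {
    public static void main(String[] args) {
        // A "test" in the coverage-chasing style: every line executes,
        // the tool reports full coverage, and nothing is verified.
        Discount.percentFor("gold");
        Discount.percentFor("silver");
        Discount.percentFor("bronze");
        System.out.println("all lines covered, zero assertions");

        // The single assertion that would actually catch the bug:
        // if (Discount.percentFor("silver") != 10) throw new AssertionError();
    }
}
```

&lt;p&gt;The metric cannot distinguish these two cases, which is exactly why it fails as a proxy for confidence.&lt;/p&gt;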




&lt;h2&gt;
  
  
  &lt;strong&gt;Testing Is Not a Design Discipline&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This may be the most important point of all.&lt;/p&gt;

&lt;p&gt;Testing can verify whether software behaves as expected.&lt;/p&gt;

&lt;p&gt;It cannot tell whether the software is well-designed.&lt;/p&gt;

&lt;p&gt;It cannot tell whether:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the abstraction boundaries are good&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the model is coherent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the architecture is sustainable&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the implementation cost is proportional to the value&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;future stories will remain easy to add&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not test outcomes.&lt;/p&gt;

&lt;p&gt;Those are design and engineering concerns.&lt;/p&gt;

&lt;p&gt;And if a team replaces those concerns with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;framework templates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;scenario scripts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;coverage thresholds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pipeline greenness&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…then better engineering is not happening.&lt;/p&gt;

&lt;p&gt;Judgment is simply being outsourced to artifacts.&lt;/p&gt;

&lt;p&gt;That may feel safer.&lt;br&gt;&lt;br&gt;
It may even look more rigorous.&lt;/p&gt;

&lt;p&gt;But it is still a substitute for actual thought.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;What Testing Is Actually For&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Testing does have a real and valuable place.&lt;/p&gt;

&lt;p&gt;Used well, testing is for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;verifying externally observable behavior&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;protecting against meaningful regressions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;increasing confidence during change&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;supporting safe evolution of a system&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is already enough.&lt;/p&gt;
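&lt;p&gt;A test in that spirit pins only what a caller can observe. The &lt;code&gt;Slugger&lt;/code&gt; below is a hypothetical example: the regex inside it can be rewritten freely during refactoring, and the check survives as long as the observable contract holds.&lt;/p&gt;

```java
// Hypothetical: a slug generator whose internals are free to change.
class Slugger {
    static String slugOf(String title) {
        // Implementation detail; any future rewrite is fine as long as
        // the observable contract below keeps holding.
        return title.trim().toLowerCase().replaceAll("[^a-z0-9]+", "-");
    }
}

public class ObservableBehaviorCheck {
    public static void main(String[] args) {
        // Assert the externally observable contract, nothing more.
        String slug = Slugger.slugOf("  Hello, World  ");
        if (!slug.equals("hello-world")) throw new AssertionError(slug);
        System.out.println("slug: " + slug);
    }
}
```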

&lt;p&gt;Testing does &lt;strong&gt;not&lt;/strong&gt; need to become:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a replacement for design&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a replacement for domain understanding&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a replacement for architecture&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;a replacement for engineering judgment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The moment testing is asked to do those things, it becomes overloaded.&lt;/p&gt;

&lt;p&gt;And overloaded tools do not become more powerful.&lt;/p&gt;

&lt;p&gt;They become more misleading.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;The Cost of a Misaligned System&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A system does not need to be broken to be expensive.&lt;/p&gt;

&lt;p&gt;It only needs to be misaligned.&lt;/p&gt;

&lt;p&gt;That is one of the most dangerous illusions in software development: if the system behaves correctly, it is easy to assume the engineering must also be sound.&lt;/p&gt;

&lt;p&gt;But a system can pass tests, satisfy stories, and still be fundamentally costly in all the places that matter over time.&lt;/p&gt;

&lt;p&gt;It can be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;too expensive to extend&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;too brittle to refactor&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;too complex to reason about&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;too rigid to absorb new requirements cleanly&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the software equivalent of using a Ferrari F40 to plow a field.&lt;/p&gt;

&lt;p&gt;The machine moves.&lt;br&gt;&lt;br&gt;
The task gets completed.&lt;br&gt;&lt;br&gt;
But every future change becomes more expensive than it should be.&lt;/p&gt;

&lt;p&gt;That cost rarely appears in the first implementation. It appears later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;in slower feature development&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;in rising maintenance effort&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;in increasingly fragile changes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;in the growing difficulty of correcting earlier assumptions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And this is precisely where testing, on its own, offers very little protection.&lt;/p&gt;

&lt;p&gt;Because testing can confirm that a system still behaves the same.&lt;/p&gt;

&lt;p&gt;It cannot tell whether that behavior is now trapped inside the wrong machine.&lt;/p&gt;

&lt;p&gt;That is an engineering problem.&lt;/p&gt;

&lt;p&gt;And when that distinction is missed, software quality gets reduced to present-day correctness while long-term adaptability quietly deteriorates.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Software engineering has become increasingly comfortable with proxies.&lt;/p&gt;

&lt;p&gt;Metrics are used as substitutes for judgment.&lt;br&gt;&lt;br&gt;
Artifacts are used as substitutes for understanding.&lt;br&gt;&lt;br&gt;
Test suites are used as substitutes for design confidence.&lt;/p&gt;

&lt;p&gt;And in doing so, many teams create the appearance of rigor while quietly undermining the adaptability of the system itself.&lt;/p&gt;

&lt;p&gt;Testing is valuable.&lt;br&gt;&lt;br&gt;
But only when it stays in its lane.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Testing should verify software behavior. It should not define the software, freeze its structure, or pretend to certify its design.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because the moment verification starts replacing engineering, pipelines may still signal green — but better systems do not follow.&lt;/p&gt;

&lt;p&gt;Ferraris get built where tractors would have been enough.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
      <category>testing</category>
      <category>java</category>
      <category>softwaredevelopment</category>
    </item>
  </channel>
</rss>
