
Leon Pennings

Originally published at blog.leonpennings.com

Rich Domain Model Monoliths vs. “Modern” Architectures — Are You Rich Enough to Go Modern?

The industry loves to argue about monoliths vs. microservices, DDD vs. simple CRUD, “modern” frameworks vs. “legacy” architecture. Every few years the pendulum swings, but one thing has remained constant:

Most companies adopt modern architectures long before they have the scale, need, or competence to justify them.

This isn’t a moral failure of developers. It’s a failure of economic reasoning.

So let’s cut through the noise and look at the real costs of building systems without a rich domain model—and why you pay a staggering premium when you rely on frameworks, microservices, and architectural fashion trends instead of engineering discipline.


1. Accidental Complexity vs. Domain Clarity

When you don’t model the domain explicitly, your architecture naturally fills with:

  • DTO jungles

  • mappers everywhere

  • service–service–repository–controller boilerplate

  • util classes nobody remembers writing

  • glue code that exists only to satisfy the framework

Developers spend weeks learning the scaffolding before they understand the business logic.

A rich domain model flips this:

  • Code reflects business concepts directly.

  • Everything has a single conceptual home.

  • New engineers onboard by learning the domain, not the framework.
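To make this concrete, here is a minimal sketch in Java (the Order and LineItem names are illustrative, not taken from any particular codebase). The point is the shape: behaviour and invariants live next to the state they protect, with no service, mapper, or DTO needed to understand the rule.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// A behaviour-rich entity: the business rules live with the data they govern.
public class Order {
    private final List<LineItem> items = new ArrayList<>();
    private boolean confirmed;

    public void addItem(LineItem item) {
        if (confirmed) {
            throw new IllegalStateException("Cannot modify a confirmed order");
        }
        items.add(item);
    }

    public BigDecimal total() {
        return items.stream()
                .map(LineItem::subtotal)
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }

    public void confirm() {
        if (items.isEmpty()) {
            throw new IllegalStateException("An empty order cannot be confirmed");
        }
        confirmed = true;
    }
}

// A small value object; the subtotal calculation has exactly one home.
record LineItem(String sku, int quantity, BigDecimal unitPrice) {
    BigDecimal subtotal() {
        return unitPrice.multiply(BigDecimal.valueOf(quantity));
    }
}
```

A new engineer reads this and learns the business: orders are confirmed once, never empty, and priced from their line items. No framework knowledge required.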

Rich Domain Models Don’t Add Complexity — They Remove It

A common misconception is that rich domain models imply heavy architecture.

In reality, the opposite is true.

A rich domain model — when done correctly — produces an architecture that is simpler, smaller, flatter, and far easier to understand.

The real complexity comes from the other stuff:

  • endless Spring abstractions,

  • CQRS everywhere “because that’s the pattern,”

  • event sourcing for routine CRUD,

  • pipelines, DTO forests, mapper layers,

  • “hexagonal” ceremony done by the book rather than by the domain.

None of these are inherently bad — but the moment they become the centrepiece instead of the domain, the architecture explodes in size and mental overhead.

A true rich domain model collapses these unnecessary constructions because:

  • behaviour lives with data,

  • invariants live where they belong,

  • duplication disappears,

  • logic becomes explicit,

  • the code explains the business,

  • and the need for defensive layers vanishes.

So the corrected principle is:

The more domain logic you model in the domain, the simpler the architecture becomes.

The more you “let the tools lead,” the more accidental complexity grows.

A behaviour-rich monolith is not “complex DDD.”

It's actually the simplest possible architecture that can support real business rules.

The architectural sprawl we see in many “modern” systems isn’t caused by domain modelling —

it’s caused by the absence of it.


2. Distributed Systems Multiply Work and Duplicate Logic

In a microservices architecture lacking a shared, centralized domain model, business rules and logic inevitably get fragmented and duplicated across the system. What should be a single, authoritative representation of the domain - capturing concepts like "order validation" or "user eligibility" - ends up being reimplemented in piecemeal fashion everywhere data flows. This isn't just inefficient; it's a multiplier for effort, bugs, and long-term debt.

Consider a simple example: enforcing a business rule like "a customer's total spend must exceed $100 for free shipping." Without a rich domain model to own this logic centrally:

  • API layer validation: The ingress API must check this to reject invalid requests early, often with custom validators or schema enforcers.

  • Service layer logic: The core business service reimplements the same check to ensure integrity during processing, perhaps with its own database queries or computations.

  • Downstream services: If the order flows to fulfillment or inventory services, they might duplicate the logic to avoid trusting upstream inputs, adding their own safeguards.

  • The UI: Frontend code (e.g., in React or Angular) replicates the rule for real-time feedback, like enabling/disabling a "free shipping" checkbox, complete with its own error handling.

  • ETL jobs: Batch processes extracting or transforming data for analytics must reapply the rule to maintain consistency in reports or data warehouses.

  • Data pipelines: Streaming pipelines (e.g., via Kafka) handling events might embed the logic yet again to filter or enrich data en route.

Now, multiply this duplication across dozens of services in a typical microservices setup. A single rule change - say, bumping the threshold to $150 - requires updates, testing, and deployments in multiple places. Miss one spot? You get inconsistencies, like a UI promising free shipping while the backend denies it, leading to customer complaints and emergency fixes.

This fragmentation stems from the distributed nature of microservices: each service is "autonomous," but without a unified domain model to share (e.g., via a shared library or monorepo), autonomy devolves into silos. Defensive programming becomes the norm—every component assumes others might fail or send bad data—amplifying code volume, test suites, and operational complexity.

Of course, microservices aren't inherently evil; they can shine in scenarios where domains are truly independent (e.g., a recommendation engine decoupled from core e-commerce) or face radically different scaling needs (e.g., a high-throughput search service vs. a low-volume admin dashboard). In those cases, the isolation pays off by allowing tailored tech stacks, independent deployments, and targeted scaling.

But for the vast majority—say, 90%—of internal systems, line-of-business apps, or mid-scale products? It's premature optimization at best. You're not solving real scalability bottlenecks; you're introducing a huge maintenance tax through duplicated effort, version mismatches, and integration headaches. A rich domain model in a monolith sidesteps this entirely: implement the rule once, in the domain's language, and let everything else consume it reliably. The result? Less code, fewer bugs, and changes that propagate effortlessly.
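As a rough sketch of "implement the rule once" (the class and method names are hypothetical, not a prescribed API), the free-shipping rule from the example above collapses into a single, testable policy object that every consumer calls instead of re-deriving:

```java
import java.math.BigDecimal;

// The free-shipping rule has exactly one home. API validation, the UI's
// backend, ETL jobs, and pipelines all ask this object for the answer
// instead of re-implementing the check.
public class ShippingPolicy {

    private static final BigDecimal FREE_SHIPPING_THRESHOLD = new BigDecimal("100.00");

    public boolean qualifiesForFreeShipping(BigDecimal customerTotalSpend) {
        return customerTotalSpend.compareTo(FREE_SHIPPING_THRESHOLD) > 0;
    }
}
```

Bumping the threshold to $150 is now a one-line change with a single test to update, not a scavenger hunt across half a dozen codebases.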


3. Coordination Overhead: The Invisible Killer

In distributed systems like microservices, even seemingly minor changes can trigger a cascade of interdependencies that consume far more resources than the actual code work itself. This "coordination overhead" is often invisible on paper—it's not in your story points or burndown charts—but it manifests as the real killer of productivity, turning quick fixes into multi-week ordeals and burning out teams with non-value-adding toil.

Let's break it down with a concrete example: suppose a business requirement shifts slightly, like adding a new field to a customer profile (e.g., "preferred shipping method") that affects order processing. In a microservices setup without strong domain boundaries:

  • API contracts: The owning service must update its API schema, potentially breaking consumers. This requires contract reviews, OpenAPI/Swagger updates, and notifications to downstream teams.

  • Versioning: To avoid outages, you introduce versioning (e.g., v1 to v2 endpoints), which means maintaining multiple API versions, handling deprecations, and ensuring backward compatibility—doubling or tripling the test surface.

  • Deployment order: Services aren't isolated; the change might require orchestrated rollouts (e.g., deploy the producer first, then consumers) to prevent runtime errors, often involving canary releases or feature flags across clusters.

  • Cross-team dependencies: If teams own different services, the change sparks coordination: Team A (owners) proposes the update; Team B (consumers) reviews and adapts; Team C (ops) handles infra implications. This dotted-line ownership leads to stalled PRs and finger-pointing.

  • Integration tests: End-to-end verification explodes—now you need tests spanning services, mocking failures, and simulating network issues, often in a shared staging environment that's always flaky.

  • CI/CD choreography: Pipelines must be synced; a simple code push now triggers multi-repo builds, security scans, and compliance gates, with failures cascading back to square one.

This ripple effect isn't just "engineering time"—it's the human cost: endless meetings to align on specs, approval bottlenecks from architects or leads, Slack threads debating edge cases, blocked work while waiting for merges, and the sheer cognitive fatigue of tracking it all. Teams lose days (or weeks) to context-switching, and morale dips as developers feel like coordinators rather than creators.

Contrast this with a well-structured monolith, where the domain model centralizes logic:

  • You update the model in one place (e.g., add the field to the Customer aggregate; see the sketch after this list).

  • Run your comprehensive unit/integration tests locally or in CI.

  • Deploy the single artifact.

  • You're done—often in hours, with no cross-team drama.
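As a sketch of that first step (the Customer name comes from the example above; the enum values are invented for illustration), the whole change can be this small:

```java
import java.util.Objects;

// Adding "preferred shipping method" to the Customer aggregate:
// one field, one behaviour, one artifact to test and deploy.
public class Customer {

    public enum ShippingMethod { STANDARD, EXPRESS, PICKUP }

    private ShippingMethod preferredShippingMethod = ShippingMethod.STANDARD;

    public ShippingMethod preferredShippingMethod() {
        return preferredShippingMethod;
    }

    public void changePreferredShippingMethod(ShippingMethod method) {
        // Invariant checks belong here, next to the data they protect.
        this.preferredShippingMethod = Objects.requireNonNull(method);
    }
}
```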

No ripples, no ceremonies—just focused delivery. This simplicity is why so many teams, after experimenting with microservices, retreat to monoliths (or modular monoliths). Public stories from companies like Segment, Amazon Prime Video, and even parts of Netflix highlight this: they consolidated services to slash overhead and regain velocity.

Of course, at massive scale—with hundreds of developers, global distribution, or true service independence—the coordination can be a worthwhile trade-off for resilience and parallelism. But unless you're operating at that level (hitting the 500M+ rows, 100K+ concurrent users, and extreme load skew thresholds discussed below), the overhead simply isn't worth it. For most systems, it's a self-inflicted wound that drains budgets and talent without delivering proportional value.


4. Cognitive Load in Framework-Driven Systems

Framework-centric architectures—those where the chosen tools and patterns dictate the system's structure—often bury essential domain logic beneath layers of technical artifacts. This creates a mental maze for developers, forcing them to decipher irrelevant scaffolding before they can even grasp the business problem at hand. The result is a dramatic increase in cognitive load: the mental effort required to understand, navigate, and modify the codebase.

To illustrate, consider a typical e-commerce system built around a framework like Spring Boot or .NET Core without a strong domain focus. A simple operation, such as processing a discount code, might involve:

  • Controllers: Handling HTTP requests and routing, with logic scattered across endpoints.

  • Handlers: Custom event or command handlers that add yet another layer of indirection.

  • Entities: Anemic data classes (e.g., plain POJOs) that hold state but lack behavior, forcing logic elsewhere.

  • Repositories: Database access patterns that abstract persistence but often leak into business code.

  • Mappers: Endless converters between DTOs, entities, and view models, creating a "mapper forest" of boilerplate.

  • Service layers: Bloated classes that orchestrate everything, mixing domain rules with transaction management.

  • Interfaces for everything: Over-abstraction (e.g., interfaces for single implementations) to satisfy "best practices," adding needless complexity.

In this setup, a developer tackling a bug or feature must first mentally unpack these layers: "Where does the discount validation actually happen? Is it in the service, the repository, or duplicated in the controller?" This cognitive tax compounds over time—onboarding new team members takes weeks, debugging sessions drag on, and refactoring becomes risky because changes in one artifact ripple unpredictably. Anecdotes and survey data from engineering teams (e.g., DORA metrics or developer surveys) suggest this can lead to 2-3x slower velocity and higher burnout rates.

DDD-style rich domain models flip the script by making the domain the star of the show. Behavior lives with the data: a Discount aggregate might encapsulate validation, application, and invariants directly, using the ubiquitous language of the business (e.g., methods like applyTo(Order)). The code shape reflects real-world concepts—no need to hunt through tech layers. This reduces cognitive load by aligning the mental model with the problem domain, allowing developers to think in terms of "orders" and "customers" rather than "controllers" and "repositories."
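A hedged sketch of what such an aggregate might look like (the Order stub exists only to keep the example self-contained; nothing about it is prescriptive):

```java
import java.math.BigDecimal;
import java.time.LocalDate;

// Validation, application, and invariants live together in the aggregate,
// expressed in the ubiquitous language of the business.
public class Discount {

    private final String code;
    private final BigDecimal percentage;  // e.g. 10 means 10%
    private final LocalDate expiresOn;

    public Discount(String code, BigDecimal percentage, LocalDate expiresOn) {
        if (percentage.signum() <= 0 || percentage.compareTo(BigDecimal.valueOf(100)) > 0) {
            throw new IllegalArgumentException("Percentage must be between 0 and 100");
        }
        this.code = code;
        this.percentage = percentage;
        this.expiresOn = expiresOn;
    }

    public boolean isValidOn(LocalDate date) {
        return !date.isAfter(expiresOn);
    }

    public void applyTo(Order order) {
        if (!isValidOn(LocalDate.now())) {
            throw new IllegalStateException("Discount " + code + " has expired");
        }
        BigDecimal reduction = order.total().multiply(percentage).movePointLeft(2);
        order.applyReduction(reduction);
    }
}

// Minimal collaborator stub so the sketch compiles on its own.
class Order {
    private BigDecimal total = new BigDecimal("200.00");

    BigDecimal total() { return total; }

    void applyReduction(BigDecimal reduction) { total = total.subtract(reduction); }
}
```

The question "where does discount validation happen?" now has exactly one answer.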

Caveat: Even in monoliths, discipline is key. Without it, systems can devolve into "anemic domain models" (data bags with no behavior) or hyper-abstraction (over-engineered patterns applied blindly). DDD is powerful for complex, evolving domains, but it demands real collaboration with domain experts—not just rote application of patterns from a book. When misapplied, it can add its own complexity; the goal is always simplicity through domain clarity, not ceremony.


5. When Simple Changes Become Epics

In architectures lacking a unified domain model—particularly those fragmented across microservices or heavy layers—small business changes that should be straightforward often balloon into protracted, resource-intensive projects. What begins as a minor tweak, like updating a pricing rule or adding a user preference, escalates due to the system's inherent brittleness and interdependencies. This turns "epics" in the agile sense into literal sagas, sapping team velocity, inflating costs, and frustrating stakeholders who expect agility from "modern" setups.

Take a real-world example: a retail app needs to modify its loyalty program, changing the points accrual from "1 point per $1 spent" to "tiered points based on membership level." In a distributed system without centralized domain logic:

  • Multiple services touched: The change impacts several services—e.g., the user service for membership checks, the order service for calculations, and the rewards service for redemption—requiring code updates in each, often in different repos or languages.

  • Backward compatibility: To avoid breaking existing integrations, you must implement compatibility layers, like dual-processing old and new logic or API versioning, which adds temporary code bloat and testing overhead.

  • Schema migrations: Database schemas across services need updates (e.g., adding a "tier" column), involving careful migrations, data backfills, and downtime coordination to prevent corruption or inconsistencies.

  • Contract negotiations: API changes trigger discussions with consumer teams—debating field names, error codes, or response formats—often formalized in shared docs or meetings, delaying implementation.

  • Many PRs and deployments: Each service generates its own pull requests, reviews, and merges; deployments must be sequenced to avoid partial failures, multiplying CI/CD runs and rollback risks.

  • Dotted-line ownership across teams: With ownership split (e.g., one team owns users, another orders), the change requires cross-team alignment, escalating to managers or architects for arbitration, further slowing progress.

A single adjustment that intuitively feels like "a few lines of code" now spans weeks: initial scoping (1-2 days), implementation across components (1 week), testing and integration (another week), reviews and approvals (days more), and staged deployments (with potential hotfixes). The total involvement? Easily 4-6 people, plus opportunity costs from delayed features.

By contrast, a well-structured monolith with a rich domain model empowers bold refactoring. The loyalty logic lives in one place—say, a Loyalty aggregate—allowing you to update it centrally, leverage automated tests for confidence, and deploy once. What takes weeks in microservices often wraps up in hours or a day: change the model, verify with unit/integration tests, and ship. This is where the oft-cited 4x (or greater) speed improvement materializes, echoed in team retrospectives and consistent with the delivery-performance gaps documented in DORA research, enabling faster iteration and business responsiveness.
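As a sketch of how localized that change can be (the tier multipliers are invented for illustration), the entire accrual rule moves from flat to tiered inside one aggregate method:

```java
import java.math.BigDecimal;

// Loyalty accrual changes from "1 point per $1" to tiered rates.
// The rule has one home, so the change touches one method and its tests.
public class LoyaltyAccount {

    public enum Tier { BRONZE, SILVER, GOLD }

    private final Tier tier;
    private long points;

    public LoyaltyAccount(Tier tier) {
        this.tier = tier;
    }

    public void accruePointsFor(BigDecimal amountSpent) {
        int multiplier = switch (tier) {  // illustrative multipliers
            case BRONZE -> 1;
            case SILVER -> 2;
            case GOLD -> 3;
        };
        points += amountSpent.longValue() * multiplier;  // whole dollars only
    }

    public long points() {
        return points;
    }
}
```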

To be fair, monoliths aren't immune to issues—they can "rot" into big balls of mud if not maintained with modularity (e.g., via bounded contexts or packages). The speedup is real, but it hinges on keeping the monolith healthy: regular refactoring, strong tests, and domain focus. Without that discipline, even monoliths can devolve into slow, risky behemoths. Ultimately, the epic-making nature of changes in "modern" systems underscores a core truth: architectural choices that prioritize trends over domain clarity extract a steep, ongoing toll on delivery.


6. Infrastructure Bloat: Paying the Microservices Tax

Adopting microservices often leads to an explosion in infrastructure requirements, where every service adds layers of operational complexity and cost that a monolithic architecture simply doesn't demand. This "bloat" isn't just about hardware—it's the cumulative toll on resources, tooling, and human effort needed to keep the system running reliably. In essence, you're paying a premium for distribution, even when your scale doesn't justify it, turning what could be a lean setup into a resource-hungry beast.

To see this in action, imagine a mid-sized e-commerce platform handling orders, inventory, and user accounts. In a microservices approach, you might split these into 10-20 separate services. Each one multiplies the infrastructure footprint:

  • Containers: Every service runs in its own container (or pod in Kubernetes), leading to dozens or hundreds instead of a handful for a monolith. This means more orchestration overhead and potential for idle resources.

  • Pipelines: CI/CD setups balloon—each service needs its own build, test, and deploy pipeline, often with custom configurations, increasing setup time and maintenance.

  • Load balancers: Internal and external traffic routing requires additional balancers or service meshes (e.g., Istio), adding latency and points of failure.

  • Log streams: Centralized logging (e.g., via ELK stack or Splunk) must aggregate from every service, generating massive volumes of data and requiring beefier storage/indexing.

  • Dashboards: Monitoring tools like Prometheus/Grafana need per-service dashboards, leading to a proliferation of views that teams must customize and maintain.

  • Metrics: Granular metrics collection for each service explodes data points, necessitating more powerful time-series databases and alerting systems.

  • Alarms: With more components, you set up (and tune) alarms for each—CPU spikes, error rates, latency—multiplying false positives and on-call fatigue.

  • SRE headcount: Managing this requires dedicated Site Reliability Engineers (SREs) for tasks like scaling, failover, and incident response, often 2-5x more than a monolith needs.

  • Kubernetes complexity: If you're using K8s (as many do), add in controllers, namespaces, secrets, and helm charts per service—turning ops into a full-time puzzle.

The financial hit is stark: a typical microservices cluster might run 50-200 containers where a monolith needs just 3-5, driving cloud bills 10x higher (or more) due to compute, storage, and data transfer fees. Public case studies, like those from startups migrating back to monoliths, report slashing infra costs by 70-90% after consolidation—freeing budgets for actual features.

That said, microservices aren't always a tax; they deliver wins in specific high-stakes scenarios:

  • One domain explodes with traffic: If, say, your search service handles 100x the load of everything else, you can scale it independently without over-provisioning the whole system.

  • Global latency requirements matter: For worldwide users, distributing services regionally (e.g., via edge computing) can cut response times that a single monolith data center couldn't match.

  • Resilience needs justify regional distribution: In mission-critical apps, geographic redundancy prevents total outages, making the extra infra a smart insurance policy.

But for the bulk of software out there—internal IT tools, B2B platforms, line-of-business systems, admin dashboards, CRMs, basic financial apps, logistics trackers, or essentially 90%+ of today's applications? These rarely hit the scale where distribution pays off. A small, well-structured monolith wins 100% of the time on cost and reliability: simpler hosting (e.g., a few VMs or serverless functions), unified monitoring, and ops that a single dev-ops hybrid can handle. No bloat, no tax—just efficient engineering.


7. Tool-Centric vs. Business-Centric Design

Many systems end up mirroring the structure and biases of their chosen frameworks or tools rather than the core business domain they serve. This tool-centric approach—where technical artifacts drive the design—leads to architectures that prioritize framework conventions over domain clarity, resulting in misaligned codebases that are harder to evolve and maintain. Instead of the software reflecting real-world business concepts like "orders," "customers," or "inventory adjustments," it becomes a reflection of the tool's patterns, creating unnecessary friction and long-term costs.

To illustrate, consider a logistics application for tracking shipments. In a tool-centric design dominated by something like Spring Boot or Kubernetes:

  • Spring-first: The codebase is organized around Spring's layers—controllers for every endpoint, services for orchestration, repositories for data access—leading to bloated classes where business rules (e.g., shipment routing logic) are scattered across technical boundaries rather than encapsulated in domain entities.

  • Kubernetes-first: Deployment concerns dictate boundaries, splitting the system into microservices based on containerization ease rather than domain needs, resulting in artificial silos like separate pods for "tracking" and "notification" that duplicate state and complicate integrations.

  • AWS-first: Cloud services shape the architecture, with Lambda functions or S3 buckets driving decisions—e.g., event-driven flows because "that's what EventBridge encourages," even if a simpler synchronous model would suffice for the domain's low-latency requirements.

  • Event-driven because the CTO saw a talk: Patterns like CQRS or event sourcing are adopted wholesale without domain justification, turning straightforward CRUD operations into complex pub/sub systems with queues, sagas, and compensators that add overhead without addressing real business pain points.

In this setup, engineers spend more time conforming to the tools' idioms than modeling the business, leading to code that's rigid, hard to reason about, and expensive to change. A new requirement, like adding customs clearance rules, requires navigating framework-imposed layers, updating multiple artifacts, and ensuring compatibility with tool-specific quirks—amplifying development time and bugs.

Real engineering, by contrast, starts with the domain and selects tools that support it, not dictate it. A business-centric design places the rich domain model at the core, using tools as enablers rather than leaders.

Balance Matters: Frameworks Are Tools, Not Architecture

It’s easy to swing too far in either direction. Yes—most modern frameworks push developers toward accidental complexity, tech-centric designs, and ritualistic layering. And yes—a strong domain model should be the center of gravity, not the framework’s abstractions. But abandoning frameworks entirely is its own anti-pattern. Frameworks exist for a reason: they standardize the plumbing so you don’t have to. Throwing them out blindly is just NIH (Not Invented Here) syndrome wearing a principled hat.

The right balance looks like this:

  • Frameworks underneath: To handle I/O, HTTP, persistence, DI, messaging, etc. Let them do the boring, repeatable things they’re good at—e.g., use Spring for dependency injection and transaction management, but don't let it force anemic models or unnecessary services.

  • Domain model on top: Expressive, behavior-rich, business-first. The framework serves the domain, not the other way around—e.g., embed shipment validation directly in a Shipment aggregate, using framework features only as needed (see the sketch after this list).

  • Tooling only when needed: Not because the tool is trendy, not because “we always use this library,” and definitely not because the team fears writing 20 lines of code themselves. Evaluate: Does Kubernetes add value here, or is a simple Docker Compose sufficient?
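A minimal sketch of that balance, assuming Spring Web for the plumbing (the shipment names and the repository port are hypothetical): the controller stays a thin adapter, and the business rule lives in the aggregate it governs.

```java
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

// The framework stays underneath: HTTP in, domain call, persistence out.
@RestController
class ShipmentController {

    private final ShipmentRepository shipments;  // assumed persistence port

    ShipmentController(ShipmentRepository shipments) {
        this.shipments = shipments;
    }

    @PostMapping("/shipments/{id}/dispatch")
    void dispatch(@PathVariable String id) {
        Shipment shipment = shipments.byId(id);
        shipment.dispatch();  // the rule lives in the domain, not the controller
        shipments.save(shipment);
    }
}

// The domain model on top: expressive, behaviour-rich, business-first.
class Shipment {
    private boolean customsCleared;
    private boolean dispatched;

    void clearCustoms() { customsCleared = true; }

    void dispatch() {
        if (!customsCleared) {
            throw new IllegalStateException("Cannot dispatch before customs clearance");
        }
        dispatched = true;
    }
}

// A port the framework can implement however it likes (JPA, JDBC, in-memory).
interface ShipmentRepository {
    Shipment byId(String id);
    void save(Shipment shipment);
}
```

Swap Spring for another framework and the Shipment class doesn't change; that is the sign the domain, not the tool, is leading.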

This balance is what separates true engineers from mere framework operators. A team that masters it uses frameworks as leverage, not as crutches or dictators. The result is a system where the domain is crystal-clear, the code remains adaptable, and the infrastructure stays as light as possible—without reinventing the wheel. Ultimately, shifting to business-centric design reduces misalignment, speeds up feature delivery, and ensures the architecture evolves with the business, not against it.


The Hard Economic Truth:

Most Companies Aren’t Rich Enough to Go “Modern.”

Based on real-world thresholds seen across engineering teams, microservices only make economic sense when all three conditions are true:

You need microservices ONLY if you have…

  1. 500M–1B+ rows of hot, frequently accessed data, and

  2. 100K+ real concurrent users, and

  3. Extreme load skew (one part of the system needs 100×–1000× more resources).

If you do not meet all three?

Then microservices are not “modern.”

They are an extremely expensive luxury good.

Meet two out of three and you're still likely losing money. Autonomy gains (e.g., for large orgs) rarely outweigh the tax without full scale - most teams thrive with monorepo collaboration instead.

Most organizations simply cannot afford the engineering, coordination, or infrastructure overhead that microservices require.


The Solution: A Small Team of Real Engineers With Domain-Model Expertise

You do not need:

  • 30–50 developers

  • 10 DevOps engineers

  • a dozen microservices

  • Kubernetes

  • a full-time SRE team

You need 4–10 real engineers who:

  • know how to model a rich domain

  • can build a modular monolith

  • understand boundaries and invariants

  • keep accidental complexity out

  • think in terms of business concepts, not framework artifacts

This alone often leads to:

  • ~75% less manpower

  • ~4× faster delivery cycles

  • infra costs at a fraction of microservices

  • far fewer bugs

  • far easier onboarding

  • orders of magnitude better maintainability

These numbers aren’t hype—they’re the simple result of removing the distributed-systems tax.


Closing Thought

This article is not anti-microservices or anti-framework.

It is anti–premature complexity.

If your software isn’t your competitive moat, keep it boring and domain-focused.

If it is—then at least be honest about the scale that truly requires microservices.

Architectural fashion has cost companies millions in wasted effort.

A rich domain model inside a well-structured monolith delivers more value, more predictably, at a fraction of the cost.

So ask yourself:

Are you actually rich enough to go “modern”?

Or are you paying for complexity you don’t need?
