Digvijay Katoch

The Distributed Monolith Is Not a Microservices Problem. It's a Transaction Boundary Problem.

You've broken your monolith into six services. They deploy independently. They have separate repos.

And yet — one "simple" business operation touches four of them synchronously, shares a database, and when service C fails, services A and B are left in an inconsistent state.

Congratulations. You have a distributed monolith. It has all the operational complexity of microservices and all the coupling of a monolith. It is strictly worse than either.

I've been building enterprise Java systems since 2009. This pattern has killed more modernization projects than any technical decision about frameworks or cloud providers. Here's what it actually is and how to fix it.


What Makes Something a Distributed Monolith

The tell is in the transaction semantics, not the deployment topology.

Symptom 1: Shared database, multiple services.

OrderService     → writes to orders table
InventoryService → reads from orders table directly
PaymentService   → joins orders + inventory in a single query

These are not three services. They are one service with three deployment units and a single point of failure at the data layer.

Symptom 2: Synchronous chain calls in a single business operation.

POST /checkout
  → OrderService.createOrder()
    → InventoryService.reserve()       // sync HTTP
      → PaymentService.charge()        // sync HTTP
        → NotificationService.send()   // sync HTTP

The latency of this operation is the sum of four network calls. The failure rate is approximately 1 - (0.99)^4 ≈ 3.9% if each service is at 99% availability. You have not improved availability by distributing. You have degraded it.
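The compound-availability arithmetic can be checked directly (the class and method names here are mine, for illustration only):

```java
public class AvailabilityMath {
    // Availability of a chain of n sequential synchronous calls, each with
    // independent availability p: the whole chain succeeds only if every
    // call succeeds, i.e. p^n.
    static double chainAvailability(double p, int n) {
        return Math.pow(p, n);
    }

    public static void main(String[] args) {
        double chain = chainAvailability(0.99, 4);
        // Four 99%-available hops: ~96.06% availability, ~3.94% failure rate.
        System.out.printf("chain availability: %.4f%n", chain);
        System.out.printf("failure rate:       %.4f%n", 1 - chain);
    }
}
```

Note this assumes independent failures; correlated failures (shared database, shared network segment) make the real number worse.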

Symptom 3: Rollback logic scattered across services.

// In OrderService
try {
    inventoryClient.reserve(orderId);
    paymentClient.charge(orderId);
} catch (Exception e) {
    // Now what? Payment went through, inventory didn't?
    inventoryClient.release(orderId); // This can also fail.
    log.error("Partial failure. Data is now inconsistent.");
}

This is the defining failure mode. You've lost the atomicity guarantee of a single transaction without replacing it with anything.


The Fix: Transaction Boundaries Are the Design

You don't fix a distributed monolith by writing better retry logic. You fix it by deciding where your consistency boundaries actually are and designing around them.

Step 1: Define your aggregates honestly.

An aggregate is the unit of transactional consistency. If Order, OrderLineItem, and ReservedInventory must always be consistent with each other, they are one aggregate — regardless of how many services touch them.

Don't let team ownership boundaries dictate aggregate boundaries. That's the org-chart-driven architecture trap.
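As a sketch of what "one aggregate" means in code (the types and invariants below are illustrative, not from the article): the aggregate root is the only entry point, so the order and its line items can never change out of step with each other.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Order aggregate: all mutations go through the root,
// so its invariants hold within a single local transaction.
public class Order {
    public enum Status { OPEN, CONFIRMED }

    private final List<String> lineItems = new ArrayList<>();
    private Status status = Status.OPEN;

    public void addLineItem(String sku) {
        if (status != Status.OPEN) {
            throw new IllegalStateException("Cannot modify a confirmed order");
        }
        lineItems.add(sku);
    }

    public void confirm() {
        if (lineItems.isEmpty()) {
            throw new IllegalStateException("Cannot confirm an empty order");
        }
        status = Status.CONFIRMED;
    }

    public Status status() { return status; }
    public int itemCount() { return lineItems.size(); }
}
```

If another service needs to enforce an invariant across this data, that invariant is telling you the aggregate boundary is drawn in the wrong place.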

Step 2: Replace synchronous chains with the Transactional Outbox pattern.

Instead of calling downstream services synchronously, write to an outbox table in the same local transaction as your primary state change.

@Transactional
public Order createOrder(OrderRequest request) {
    Order order = orderRepository.save(new Order(request));

    // Same transaction, same DB, same commit
    OutboxEvent event = new OutboxEvent(
        "ORDER_CREATED",
        order.getId(),
        serialize(order)
    );
    outboxRepository.save(event);

    return order; // Committed atomically. No partial state.
}

A separate poller (or CDC via Debezium) reads the outbox and publishes to your message broker.

@Scheduled(fixedDelay = 1000)
public void pollAndPublish() {
    List<OutboxEvent> pending = outboxRepository.findUnpublished();
    for (OutboxEvent event : pending) {
        try {
            messageBroker.publish(event);
            outboxRepository.markPublished(event.getId());
        } catch (Exception e) {
            log.warn("Publish failed for event {}, will retry", event.getId());
        }
    }
}
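For reference, a minimal shape for the outbox row itself might look like the following (the field names are my assumption; the article doesn't define the entity):

```java
import java.time.Instant;
import java.util.UUID;

// Hypothetical outbox row. In the database this maps to a table with a
// primary key, the event payload, and a published marker the poller flips.
public record OutboxEvent(
        UUID id,
        String eventType,    // e.g. "ORDER_CREATED"
        String aggregateId,  // id of the aggregate the event describes
        String payload,      // serialized event body
        Instant createdAt,
        Instant publishedAt  // null until the poller publishes it
) {
    public static OutboxEvent newEvent(String type, String aggregateId, String payload) {
        return new OutboxEvent(UUID.randomUUID(), type, aggregateId, payload,
                Instant.now(), null);
    }

    public boolean isPublished() {
        return publishedAt != null;
    }
}
```

Keeping `createdAt` lets the poller publish in creation order, which preserves per-aggregate event ordering downstream.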

Step 3: Make downstream handlers idempotent.

Since events will be delivered at least once, your consumers must handle duplicates.

@EventHandler
public void onOrderCreated(OrderCreatedEvent event) {
    if (inventoryRepository.isAlreadyReserved(event.getOrderId())) {
        return; // Idempotent. No harm in receiving twice.
    }
    inventoryRepository.reserve(event.getOrderId(), event.getItems());
}
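When the handler has no natural state to check, tracking processed event ids gives the same guarantee. A minimal in-memory sketch (in production the set would be a unique-constrained database table written in the same transaction as the side effect; all names here are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical idempotent consumer: a duplicate delivery of the same
// event id is detected and skipped, so the side effect runs exactly once.
public class IdempotentConsumer {
    private final Set<String> processedEventIds = new HashSet<>();
    private int sideEffects = 0;

    public void handle(String eventId) {
        if (!processedEventIds.add(eventId)) {
            return; // already processed: at-least-once delivery, skip safely
        }
        sideEffects++; // stand-in for the real work (reserve stock, etc.)
    }

    public int sideEffects() { return sideEffects; }
}
```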

Step 4: Use a Saga for multi-step compensating transactions.

OrderCreated event →
  InventoryService: ReserveStock →
    StockReserved event →
      PaymentService: ChargePayment →
        PaymentCharged event →
          ShipmentService: CreateShipment

// Compensation on failure:
ShipmentFailed event →
  PaymentService: RefundPayment →
    PaymentRefunded event →
      InventoryService: ReleaseStock

Each service listens for events, acts, and emits its result. No central coordinator. No distributed transaction.
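The choreography above can be simulated in a few lines to see the event flow end to end (the in-memory bus and handler wiring below are a hypothetical stand-in for a real broker):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy event bus: each handler reacts to one event type and emits the next
// step, or a compensating event on failure. No central coordinator.
public class SagaSketch {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();
    private final List<String> log = new ArrayList<>();

    public void on(String eventType, Consumer<String> handler) {
        handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    public void emit(String eventType, String orderId) {
        log.add(eventType);
        for (Consumer<String> h : handlers.getOrDefault(eventType, List.of())) {
            h.accept(orderId);
        }
    }

    public List<String> log() { return log; }

    public static SagaSketch wire() {
        SagaSketch bus = new SagaSketch();
        // Forward chain (each line stands in for one service's handler):
        bus.on("OrderCreated",   id -> bus.emit("StockReserved", id));
        bus.on("StockReserved",  id -> bus.emit("PaymentCharged", id));
        bus.on("PaymentCharged", id -> bus.emit("ShipmentCreated", id));
        // Compensation chain:
        bus.on("ShipmentFailed",  id -> bus.emit("PaymentRefunded", id));
        bus.on("PaymentRefunded", id -> bus.emit("StockReleased", id));
        return bus;
    }
}
```

Emitting `OrderCreated` walks the happy path to `ShipmentCreated`; emitting `ShipmentFailed` walks the compensation chain back to `StockReleased`.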


What This Doesn't Fix

  • It doesn't fix teams with unclear ownership of data domains. Organizational problems don't have technical solutions.
  • It doesn't fix query patterns that require joining data across aggregate boundaries in real time. For that, you need read models (CQRS projections).
  • It doesn't make your system simple. A well-designed distributed system is genuinely more complex to operate than a well-designed monolith. Choose distribution because you need the scale or team autonomy — not because microservices are fashionable.
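For the cross-boundary query case in the second point, a read model is simply a projection kept up to date from the same events. A minimal sketch (class and field names are illustrative, not from the article):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical CQRS projection: denormalizes order + inventory data into
// one queryable view, updated from events instead of a cross-service join.
public class OrderSummaryProjection {
    public record OrderSummary(String orderId, String status, int reservedUnits) {}

    private final Map<String, OrderSummary> view = new HashMap<>();

    public void onOrderCreated(String orderId) {
        view.put(orderId, new OrderSummary(orderId, "CREATED", 0));
    }

    public void onStockReserved(String orderId, int units) {
        view.put(orderId, new OrderSummary(orderId, "RESERVED", units));
    }

    public OrderSummary query(String orderId) {
        return view.get(orderId); // single local read, no cross-service call
    }
}
```

The projection is eventually consistent with the write side, which is the trade you accept in exchange for losing the cross-service join.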

The distributed monolith is what happens when you adopt the deployment model of microservices without adopting the data ownership model. Fix the boundaries first. The deployment follows.
