You split the monolith. You have separate repositories, separate deployment pipelines, and separate teams. Everything looks like microservices on the architecture diagram.
But something feels wrong.
Deployments still require coordination across multiple teams. A bug in one service cascades into failures across the system. Your “independent” services move in lockstep, and nobody can explain exactly why.
Here’s the uncomfortable truth: you may have built something worse than a monolith. You’ve distributed the complexity without distributing the independence. You’ve kept all the coupling and added network latency.
I’ve seen this pattern at three different companies now. Every time, the symptoms were identical.
Sign One: Your Deployments Require a Group Chat
The clearest sign of a distributed monolith is deployment coordination.
If you cannot deploy Service A without also deploying Service B, those services are not independent. They’re a monolith that happens to communicate over HTTP rather than via function calls.
I watched a team spend six months “breaking up” their monolith into twelve services. They were proud of the architecture diagram. Clean boxes, clean arrows. But every Thursday, their deploy channel looked like a military operation. “I’m deploying User Service, hold off on Orders.” “Wait, I need to push Notifications first, or your changes will break.” “Can everyone sync up at 3 pm for the coordinated release?”
They had replaced compile-time dependencies with runtime dependencies. The coupling was still there. They’d just made it invisible to the compiler and visible only in production.
The test is simple. Can any team deploy its service at any time without coordinating with anyone else? If the answer involves “well, usually, but…” then you have a distributed monolith.
True service independence means you can deploy whenever you want. The interfaces between services are stable contracts. Changes are backward compatible. Nobody needs a calendar invite.
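What does a “stable contract” look like in practice? Here’s a minimal sketch, with a made-up payload: the provider only ever adds optional fields, and consumers read the fields they know and ignore the rest.

```python
# A minimal sketch of a backward-compatible contract change.
# The payload and field names are hypothetical.
import json

# Version 1 of the provider's response.
V1_RESPONSE = json.dumps({"id": 42, "email": "a@example.com"})

# Version 2 adds an optional field; nothing is removed or renamed.
V2_RESPONSE = json.dumps({"id": 42, "email": "a@example.com", "display_name": "Ada"})

def consumer_reads_user(raw: str) -> dict:
    """A consumer written against v1: it reads the fields it needs
    and ignores anything it doesn't recognize."""
    data = json.loads(raw)
    return {"id": data["id"], "email": data["email"]}

# The v1 consumer works against both versions, so the provider can
# deploy v2 whenever it wants. No calendar invite required.
assert consumer_reads_user(V1_RESPONSE) == consumer_reads_user(V2_RESPONSE)
print("v1 consumer still works against the v2 provider")
```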
Sign Two: One Database to Rule Them All
This one is subtle because it comes in degrees.
The obvious case: multiple services reading and writing to the same tables. I don’t see this as often anymore because everyone knows it’s wrong. But the less obvious cases are everywhere.
Services that share a database schema, even if they “own” different tables. Services that join across each other’s data. Services with foreign key constraints that span ownership boundaries. Services that read from replicas of another service’s data.
At a previous company (I think this was 2020), we had what appeared to be clean service boundaries. Each service had its own set of tables. But the analytics service needed to join user data with transaction data with inventory data. So we gave it read access to everything.
That analytics service became the spider at the center of the web. We couldn’t change the user schema without coordinating with analytics. We couldn’t change the transaction schema without coordinating with analytics. The service that was supposed to just “read some data” had become an implicit contract with every other service.
The database was our distributed monolith’s shared memory.
When services truly own their data, they own it completely. Other services get access through APIs, not through database connections. Changes to the internal schema are invisible to the outside world. This adds latency. This adds complexity in some ways. But it’s the only way to achieve real independence.
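Concretely, the shift looks something like this. It’s a sketch, and the table names and endpoint are hypothetical, but the idea is: stop joining across the boundary, start asking the owner.

```python
# Before: analytics reaches directly into the user service's tables.
# Any change to `users` now requires coordinating with analytics.
CROSS_BOUNDARY_JOIN = """
SELECT t.amount, u.plan_tier
FROM transactions t
JOIN users u ON u.id = t.user_id   -- join across an ownership boundary
"""

# After: analytics asks the owning service through its public API.
# The user service can reshape its internal schema freely as long as
# the response stays backward compatible. (Endpoint is hypothetical.)
import json
import urllib.request

def fetch_plan_tier(user_id: int) -> str:
    url = f"https://user-service.internal/users/{user_id}"
    with urllib.request.urlopen(url, timeout=1.0) as resp:
        return json.load(resp)["plan_tier"]
```

In practice you’d batch or cache these calls rather than making one per row, which is part of the latency and complexity cost I mentioned.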
(Though I should mention: sometimes a shared database is the right answer. If your services are small, your team is small, and you’re not planning to scale them independently, the ceremony of full isolation might not be worth it. The problem isn’t shared databases. The problem is pretending you have independence when you don’t.)
Sign Three: Everything Calls Everything
Draw a graph of which services call which other services. If it looks like a dense mesh rather than a layered hierarchy, you probably have a distributed monolith.
The tell is in the transitive dependencies. Service A calls B and C. Service B calls D and E. Service C also calls D. Service D calls F and G. And somewhere in there, G calls back to A for some reason.
When you deploy a change to Service D, you’re implicitly changing the behavior of A, B, and C. When D has a latency spike, A, B, and C all slow down. When D goes down, the failure cascades through the entire graph.
I’ve been guilty of this myself. It’s easy to add one more HTTP call. “We just need to check the user’s permissions.” “We just need to look up the product details.” Each call makes sense in isolation. But the compound effect is a system where everything depends on everything.
The distributed monolith emerges gradually, one convenient API call at a time.
In a well-designed system, dependencies flow in one direction. Higher-level services depend on lower-level services. The graph has clear layers. And crucially, no service needs to know about the internal structure of services it calls.
When you find yourself adding a call from a “lower” service to a “higher” one, stop. That’s the call that closes the loop and creates the distributed monolith.
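You don’t need fancy tooling to catch that loop. A depth-first search over the call graph finds the back-edge; here’s a sketch using the example graph from above (service names are placeholders, and in practice you’d build the graph from tracing data or service manifests):

```python
# Detect a cycle in a service call graph with a depth-first search.
# The graph mirrors the example earlier in this post.

CALLS = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["D"],
    "D": ["F", "G"],
    "E": [],
    "F": [],
    "G": ["A"],   # the call that closes the loop
}

def find_cycle(graph):
    """Return one cycle as a list of services, or None if the graph is layered."""
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        path.append(node)
        for callee in graph.get(node, []):
            if callee in visiting:                      # back-edge: a cycle
                return path[path.index(callee):] + [callee]
            if callee not in done:
                found = dfs(callee, path)
                if found:
                    return found
        visiting.discard(node)
        done.add(node)
        path.pop()
        return None

    for service in graph:
        if service not in done:
            cycle = dfs(service, [])
            if cycle:
                return cycle
    return None

print(find_cycle(CALLS))   # ['A', 'B', 'D', 'G', 'A']
```

One option is to run something like this in CI and fail the build when a new cycle appears, so the loop gets caught before it ships.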
What’s Actually Going Wrong
These three signs (deployment coupling, data coupling, and call coupling) are symptoms of the same root cause: the services don’t have clean boundaries.
A true service boundary is defined by:
Independent deployment: Changes inside the boundary don’t require changes outside it.
Independent data: The service owns its data completely and exposes it only through interfaces.
Independent failure: The service can fail without cascading failures to its callers (and can handle failures of services it depends on).
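That third property is the one teams skip most often. Here’s a minimal sketch of what handling a dependency’s failure can look like, assuming a hypothetical recommendations service and an acceptable degraded default:

```python
# A sketch of "independent failure": call the dependency with a short
# timeout and fall back to a degraded default instead of cascading the
# outage to your own callers. Endpoint and fallback are hypothetical.
import json
import urllib.request

FALLBACK_RECS: list[str] = []   # e.g. a generic, non-personalized list

def get_recommendations(user_id: int) -> list[str]:
    url = f"https://recs-service.internal/recs/{user_id}"
    try:
        with urllib.request.urlopen(url, timeout=0.2) as resp:
            return json.load(resp)
    except OSError:
        # URLError and socket timeouts are both OSErrors: the dependency
        # is slow or down, so serve something sensible instead of an error.
        return FALLBACK_RECS
```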
When you break up a monolith, the tempting approach is to draw boxes around code that “seems related.” User code goes here. Order code goes here. But that’s not what defines a good boundary.
Good boundaries are defined by what can change independently. If two pieces of code always change together, they belong in the same service, regardless of what they’re “about.”
I’m still not sure where to draw these lines. Every system I’ve worked on has had at least a few boundaries in the wrong place. The skill is recognizing when the coupling has gotten bad enough to justify paying to fix it.
The Uncomfortable Question
If your microservices are actually a distributed monolith, what do you do?
Sometimes the answer is: merge them back together. I know this feels like failure. You spent months (or years) breaking things apart. Admitting it didn’t work is hard.
But a well-structured monolith is better than a distributed monolith. At least with a monolith, the compiler catches your mistakes. You can refactor with confidence. Latency is in nanoseconds instead of milliseconds. And you don’t need to debug distributed transactions.
Microservices are valuable when you need independent deployment, independent scaling, or different technology choices per service. If you don’t need those things, the complexity isn’t paying for itself.
Look at your deploy channel. Look at your database connections. Look at your service graph. Be honest about what you see.
Sometimes the best architecture decision is admitting the last architecture decision was wrong.
