Image by Frycenty, shared via Attribution-ShareAlike 4.0 International.
If you’re migrating to a microservice architecture, you may run into this issue: your services all go down at the same time, which is exactly what you were promised wouldn’t happen once you moved away from a monolith. Why is this happening? Where are the benefits the microservice model was supposed to deliver? It’s worth examining whether your microservices are actually as siloed as you think they are.
First, some background. The key difference between a monolith and microservices is that the monolith model creates a single, cohesive programming ecosystem that has everything in it that you need to run your whole setup. So if you’re designing a web store, your product search function depends on your inventory system, which depends on your pricing and billing system. The benefit of the monolith model, of course, is that it’s simpler to build and test. You just run the program and see if everything works. However, if you change one thing in that chain, it affects everything else. DevOps aims to get away from the monolith model because it means that in order to launch, your product needs to be flawless, or as close to flawless as it’s ever going to get. In a monolith, updates are a pain because of interdependency--you have to redeploy the whole application with every update and then see if it works, rather than constantly iterating on specific services. And if you want to adopt a new technology or framework, you have to do it across the whole application: effectively a whole new version, when really you just wanted to update one function.
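To make that dependency chain concrete, here’s a minimal sketch of the monolith shape in Python. The module and function names are hypothetical, but the point stands: search calls inventory, inventory calls pricing, and all three ship as one deployable unit, so a change to any of them means rebuilding, retesting, and redeploying everything.

```python
# A minimal sketch of the monolith shape described above: one codebase, one
# process, with each function calling the next directly. Names are hypothetical.

def get_price(sku: str) -> float:
    # Pricing/billing logic lives in the same deployable unit...
    return 19.99

def check_inventory(sku: str) -> bool:
    # ...inventory calls it directly...
    return get_price(sku) > 0

def search_products(term: str) -> list[str]:
    # ...so search is transitively coupled to billing: change get_price and
    # you have to redeploy (and re-test) this entire application at once.
    return [sku for sku in ["sku-1", "sku-2"] if term in sku and check_inventory(sku)]

if __name__ == "__main__":
    print(search_products("sku"))
```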
A microservice model aims to decouple a product’s functions from one another so that they can run independently and be managed separately. You could, in this model, take your billing software offline for an hour or two without disrupting your whole operation, or even have a version of your billing software running on some of your servers while you test out the new billing program on others. This is what allows CI/CD to happen.
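As a rough illustration of that independence, here’s a sketch of canary-style routing between two versions of a billing service. The service names, URLs, and traffic split are made up, and in practice this logic usually lives in a load balancer or service mesh rather than in application code, but it shows the idea: the new billing version gets a slice of live traffic without redeploying anything else.

```python
# A minimal sketch of canary routing between two versions of a billing service.
# The backend names and URLs are hypothetical placeholders.
import random

BILLING_BACKENDS = [
    ("billing-v1", "http://billing-v1.internal:8080", 0.9),  # stable version, 90% of traffic
    ("billing-v2", "http://billing-v2.internal:8080", 0.1),  # new version under test, 10%
]

def pick_billing_backend() -> str:
    """Choose a billing backend URL according to the canary weights."""
    names, urls, weights = zip(*BILLING_BACKENDS)
    return random.choices(urls, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Each purchase is routed independently, so the new billing version is
    # exercised by real traffic while the rest of the system stays untouched.
    for _ in range(5):
        print("routing purchase to", pick_billing_backend())
```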
But you might notice that, if implemented incorrectly, your microservice model has all of the overhead of microservices and all of the interdependency of a monolith--meaning that you effectively just spent a bunch of extra time developing separate services that still crash when everything else crashes. What gives?
The answer may be in your coupling. A loosely coupled chain of microservices is not interdependent, and each service should “know” as little about the other services in the chain as possible--i.e., your billing service should not depend on anything happening in your search function in order to work, although it may need to receive input from your purchasing service. As Yusuf Dagtekin points out, “When microservices are not split in the right way, this creates tightly coupled microservices that will have all the disadvantages of a monolith and all the complexities of microservices, aka a distributed monolith.” He notes that a frequent source of tight coupling is any requirement for synchronous communication between services--that, effectively, makes everything you do dependent on the slowest service in the chain.
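Here’s a minimal sketch of that difference, using an in-memory queue as a stand-in for a real message broker; the service names and event shape are hypothetical. In the synchronous version, the purchase fails whenever billing is down; in the event-driven version, the purchase succeeds and billing catches up on its own schedule.

```python
# A minimal sketch of tight vs. loose coupling between a purchasing service
# and a billing service. An in-memory queue stands in for a real broker.
from queue import Queue

# --- Tightly coupled: purchasing calls billing synchronously -----------------
def charge_customer_sync(order: dict) -> None:
    """If billing is slow or down, the purchase itself fails: purchasing is
    only as reliable (and as fast) as the billing service it calls."""
    raise ConnectionError("billing service unavailable")

# --- Loosely coupled: purchasing publishes an event and moves on -------------
billing_events: Queue = Queue()

def place_order(order: dict) -> None:
    # Purchasing only knows the event contract, not the billing service itself.
    billing_events.put({"type": "order_placed", "order": order})
    print("order accepted:", order["id"])

def billing_worker() -> None:
    # Billing consumes events on its own schedule; if it was offline for a
    # while, it simply works through the backlog when it comes back.
    while not billing_events.empty():
        event = billing_events.get()
        print("billing processed order:", event["order"]["id"])

if __name__ == "__main__":
    order = {"id": 1, "total": 42.00}
    # Tightly coupled path: the purchase fails when billing is down.
    try:
        charge_customer_sync(order)
    except ConnectionError as exc:
        print("synchronous purchase failed:", exc)
    # Loosely coupled path: the purchase succeeds; billing catches up later.
    place_order(order)
    billing_worker()
```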
There is a case to be made for specific instances of a “minilith”, as David Dawson calls it--tightly coupling a few services that function as a unit. However, he points out that this makes them “effectively version locked”, reducing some of the benefits of the microservice model. That trade-off might be worth it for a handful of services within a larger architecture, but if you’re tightly coupling everything, you might as well make it a monolith.