Discussion on: The Dark Side of Microservices

Janne "Lietu" Enberg

You fail to make a clear and valid point in your complaints about microservices being "difficult".


This has nothing to do with microservices. If you have a web server, it likely accepts more than one connection at a time, so you already have to deal with race conditions. It doesn't matter AT ALL whether you're building a monolithic or a microservice architecture. Use distributed locks (e.g. etcd), atomic update operations (count += 1 vs count = old_count + 1), and make it clear when a response value is not guaranteed to be exact.
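To make the read-modify-write hazard concrete, here's a minimal sketch in Python, using an in-memory SQLite table as the shared state (the table, column, and function names are made up for illustration):

```python
import sqlite3

# In-memory SQLite table standing in for shared state (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, count INTEGER)")
conn.execute("INSERT INTO counters VALUES ('visits', 0)")

def racy_increment(conn):
    # count = old_count + 1: another connection can write between the
    # SELECT and the UPDATE, silently losing one of the increments.
    old = conn.execute(
        "SELECT count FROM counters WHERE name = 'visits'"
    ).fetchone()[0]
    conn.execute(
        "UPDATE counters SET count = ? WHERE name = 'visits'", (old + 1,)
    )

def atomic_increment(conn):
    # count += 1: the database applies the increment itself, so
    # concurrent callers cannot lose updates.
    conn.execute("UPDATE counters SET count = count + 1 WHERE name = 'visits'")

atomic_increment(conn)
print(conn.execute(
    "SELECT count FROM counters WHERE name = 'visits'"
).fetchone()[0])  # 1
```

The racy version only looks correct because nothing runs concurrently here; the atomic version pushes the increment into the database, which is the "count += 1" style of update operation.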

Your claim of "disallowing shared state" is also just plain wrong. There is often a database, etc. for sharing state; what is disallowed is one service poking at another service's state by bypassing its API.

Partial Failure

Again, a complete misrepresentation. "If there is a problem ... the entire monolith crashes – every failure is a total failure" is flat out false. Monolithic applications often have bugs that cause partial failures: some parts of the API might work while others do not. If a single API call performs multiple operations, some might succeed (e.g. taking money out of a bank account) while others do not (depositing the money into another bank account), so the exact same problems have to be solved in a monolithic application as well.
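For the bank-account example, a transaction is the standard fix in either architecture: both operations commit together or not at all. A minimal sketch with Python's built-in sqlite3 (the accounts table and names are hypothetical):

```python
import sqlite3

# Hypothetical accounts table with two balances.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    # sqlite3 wraps the `with conn:` block in a transaction: it commits on
    # success and rolls back if anything inside raises, so a failure between
    # the withdrawal and the deposit cannot leave money missing.
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, dst))

transfer(conn, "alice", "bob", 30)
print(dict(conn.execute("SELECT id, balance FROM accounts")))
# {'alice': 70, 'bob': 30}
```

Once the two updates span two services instead of one database, this becomes the distributed-transaction / saga problem, which is where the microservice version gets harder.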

The benefit of microservices here is that if your search API or other such less important bits and pieces crash and burn, the critical parts of your system might still work fine.


"You can typically run a monolith directly on your laptop" – whoa, if you do that, you have much bigger issues than monolith vs. microservice. Learn to use controlled development environments; Vagrant, Docker, etc. will help you. If it takes more than a couple of clear commands from your README.md, akin to vagrant up, you're doing it wrong.

Claims about them being "CPU intensive" are just random FUD as well. Get a laptop from this century, with decent amounts of RAM, a modern CPU, and most importantly an SSD, and you won't have any issues. If your software has "hundreds of microservices", then had it been built as a monolithic application instead, you couldn't run it either, and it would be incomprehensible to everyone. With microservices you at least allow people to build simple mocks for the less important parts while they develop the pieces they're actually working on.
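As a sketch of such a mock (all class, method, and service names here are hypothetical), a client for a less-important service can be swapped for an in-memory stub while you develop the piece you actually care about:

```python
class SearchClient:
    # The real client would call the (hypothetical) search microservice
    # over the network.
    def query(self, text: str) -> list[str]:
        raise NotImplementedError("talks to the real search service in production")

class FakeSearchClient(SearchClient):
    # In-memory stand-in: other services can be developed locally without
    # running the real search service at all.
    def __init__(self, canned: dict[str, list[str]]):
        self.canned = canned

    def query(self, text: str) -> list[str]:
        return self.canned.get(text, [])

search = FakeSearchClient({"laptops": ["thinkpad", "macbook"]})
print(search.query("laptops"))  # ['thinkpad', 'macbook']
```

Because the rest of the code only depends on the client interface, the fake and the real implementation are interchangeable, which is exactly what a service API boundary buys you.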

If you don't know which service you're having a problem with before playing around with a debugger, you also have bigger issues to worry about.

On logging, jesus. "With a monolith, you can store logs in a file and grab them when needed." Just no. You can't. Have you ever heard of e.g. hardware failure? Regardless of how you deploy your application, if you are sane you will set up the same logging processes to collect the logs from various sources. Also, even when building a "monolith" you will run multiple instances of the API for high availability, etc., and then you already don't have a single log to "grab when needed".

When building microservices on Kubernetes, though, guess what: no need for complicated machinery to fetch your logs from multiple machines. You log to stdout and stderr, set up Kubernetes logging, and voilà, all your logging is taken care of. Hosted Kubernetes offerings, e.g. on Azure, typically handle this automatically.

On deployment tools: yes, you shouldn't use bad tools like Chef and Puppet in general; if you need something like that, at least get up to date and use Ansible or SaltStack. Docker + Kubernetes will make your deployments easier and faster once you spend the little effort it takes to get to know them, so I wouldn't call them a bad thing in any way. A Dockerfile with e.g. a few shell scripts is much easier to follow than a big pile of Ruby scripts managed by a complicated configuration management tool, and you're much better able to control the desired state.

Networking, again, is such a random, weird argument. Kubernetes takes care of all the internal load balancing, etc. out of the box, easily.

Microservices also keep every piece you're working with a lot simpler, so each piece is easier to develop. Monoliths tend to end up with 20,000-line files where people have no idea which code is dead, which is important, and what is required by what. With microservices it's much easier to keep your code under control. Something getting too complicated? Think about separating it into another service.

In short: weird, invalid complaints made just to try and sell your product.

Ethan J. Jackson Author

Hey thanks for the feedback! Glad this post has engendered some debate =)

Couple of specific points:

  • Yep, of course you can use etcd for distributed locks. But etcd is a Raft implementation, i.e. it does distributed consensus; the problem doesn't go away, it's just moved.

  • Good point on the issue of monoliths and partial failure. Even monolithic applications will typically have multiple components (databases, load balancers, etc.). I still think the problem is more pronounced in the case of microservices ...

And my response to nearly all of your criticisms can be summed up like this: yes, of course, nearly all of the problems that exist in microservices exist in monoliths as well. The issue is simply that microservices make a lot of those problems worse. I.e. yes, you need log management for a monolith, but it's absolutely crucial once you've got 40 microservices running.

All of that said, I really do genuinely appreciate the criticism. I'm just some guy on the internet, and these are just my opinions – we all learn from the back and forth.