Kristian R.

Small-scale microservices in the wild (1): Anachronistic monoliths

Microservices are all over these days. So are frameworks to build all kinds of “microservices” from scratch, and so are infrastructure and runtime components that help build real-life application environments out of microservices. Using small components to build larger, more complex applications seems so much the de-facto standard of how to work these days that we tend to apply this pattern to each and every problem at hand. Depending upon your problem, this might or might not be a Good Thing(TM). We’ve been down the route of cutting more complex applications into smaller components for the last couple of years, and there are a couple of insights to be gained from this. I’d like to spend some time pondering these things and looking at where we are now.

To avoid boring the few people who will read through this, and to avoid trying to write too much in one batch and never getting it done after all, I’ll try making this a short series of write-ups. Hope it will somehow work out.

Solving problems

In general, technical solutions should, well, be actual solutions to certain real-world problems, no matter whether in an actual business domain or in a technical domain supporting real business needs. Microservices, most of the time, seem to fall into the latter category – a technical solution to certain technical problems one might encounter when building non-trivial applications. Not much of a surprise, those solutions address some problems better than others, and there might be requirements or demands where a certain solution could make things considerably worse. Microservices are no exception here, as can be seen in two scenarios for optimizing an existing IT environment.

Speed up development.

In most cases, one main pain point is the notoriously limited amount of developer resources available, assuming most of us aren’t Google or Amazon and most of our teams are considerably smaller. Simple and rude: you might want a monolith. Period. This sounds pretty anachronistic. However, splitting up your system into microservices these days usually means composing an application from several smaller services communicating by means of HTTP and JSON. This has interesting aspects from a development point of view:

  • You will spend a good deal of brain (as well as CPU) cycles on things such as (de)serialization and transfer of data. You are supposed to translate your business objects and your data structures into some language-agnostic representation that can safely be sent across some sort of wire (see the first sketch after this list). In a monolithic application where all or at least the vast majority of communication happens locally, this is of little to no relevance. You live inside a shared common data model and a common application context, so there’s no “outer world” involved.

  • You will programmatically have to add reliability and resilience to your application by dealing with the fact that required systems could randomly respond too slowly or fail completely (see the retry sketch after this list). Again, working in a single application context, this is of no relevance at all if the application tends to just work or fail as one.

  • You also need to consider scenarios for retransmitting certain batches of data, as well as getting things such as distributed transaction handling “right” in at least a rudimentary way – whatever “right” means for your application (see the idempotency sketch after this list). Handling transactions isn’t always easy even within one single application (if it involves data storage and business processes that might already have been started); spanning a single transaction across multiple applications doesn’t make it any easier.

  • As soon as any interface you exposed is used by any client, it will become virtually “immutable”, as (assuming loosely coupled systems) it will be hard or impossible to track down all users of that system. Removing “old” methods is therefore always more risky than, for example, removing a deprecated API in a large Java project, which will fail to build or even deploy as soon as you remove code still in use by certain specialized parts of your system.

  • You will have more boilerplate to deal with for building various distributable modules along with the infrastructure they require, configurability for various dependencies (such as other services) and the like. In a monolith, you do these things once for the whole application if you do it right.
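
To make the first point a bit more tangible, here's a minimal sketch of the kind of translation work every service boundary adds. It assumes Jackson as the JSON library and a made-up Order business object; any other serialization stack implies the same ceremony:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class SerializationSketch {

    // Made-up business object; in a monolith it would simply be passed
    // around by reference inside the shared application context.
    public static class Order {
        public String id;
        public long amountCents;

        public Order() {}

        public Order(String id, long amountCents) {
            this.id = id;
            this.amountCents = amountCents;
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Crossing a service boundary: object -> JSON text on the wire ...
        String json = mapper.writeValueAsString(new Order("42", 1999));

        // ... and back into an object on the other side, with all the
        // schema and versioning questions that implies.
        Order received = mapper.readValue(json, Order.class);
        System.out.println(received.id + " / " + received.amountCents);
    }
}
```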
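
The retry sketch for the second point, in its most rudimentary form, could look like the following, using the JDK's built-in HttpClient. The service URL, timeouts and retry counts are all made up for illustration; real-world code would probably reach for a resilience library (Resilience4j, for instance) rather than hand-rolling this:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RetrySketch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders.internal/api/orders/42")) // made-up URL
                .timeout(Duration.ofSeconds(3))
                .GET()
                .build();

        // In a monolith this would be a plain method call that works or
        // fails together with the rest of the application. Here we have to
        // decide how often to retry, how long to back off, and what to do
        // if the dependency never comes back.
        int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    System.out.println(response.body());
                    return;
                }
            } catch (IOException e) {
                // timeouts and connection failures end up here
            }
            Thread.sleep(500L * attempt); // crude linear backoff
        }
        throw new IllegalStateException("order service unavailable after retries");
    }
}
```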
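
And the idempotency sketch for the third point: the smallest useful building block is usually a receiver that remembers which messages it has already applied, so that retransmitted batches stay safe. This uses an in-memory set purely for illustration; a real service would persist the processed IDs in the same transaction as the data itself:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentReceiver {

    // IDs of messages we have already applied. In a real service this
    // would live in the same datastore (and transaction) as the data.
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    public void handle(String messageId, Runnable applyChange) {
        // add() returns false if the ID was already present, so a
        // retransmitted message is acknowledged but not applied twice.
        if (processed.add(messageId)) {
            applyChange.run();
        }
    }
}
```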

There are for sure more issues than these, but I see these as the most critical ones when following such an approach, at least in our environment. It might be different in yours, though.

Speed up Ops, too?

Development is all interesting and good but usually doesn’t earn us any money; in most cases, we need a running application for that. And, much like development, operating and running applications gets more difficult when working with a distributed system, compared to a single monolith or a single application in some application server:

  • Suddenly you will have to deal with various services that need to be up and running for the whole system to work flawlessly. You will most likely have to take care of services (re)starting, maybe in the right order, too.

  • Most operational aspects, such as deployment, logging, monitoring, debugging or auditing, get more complex because there are more components involved. You will most likely need considerable effort, even conceptually, to maintain a bunch of interdependent modules where, given a monolith, you would just have to work with a single application and maybe its logfiles and monitoring hooks.

  • What’s actually an advantage in some use cases comes as a disadvantage in straightforward operations: you will have to consider scalability and performance not just for one application but for many. If you encounter bottlenecks, you will potentially have a much harder time finding out which component is the critical one and how to scale it so the whole system works well.

  • Also from a system security point of view, things might become … interesting. This, obviously, includes HTTP/REST APIs exposed and available to clients – are they always appropriately secured, audited, encrypted? Or is SSL termination rather just something happening at an external gateway while local network traffic doesn’t use HTTPS at all? Are services capable of detecting (and dealing with) man-in-the-middle attacks? Are applications in any way able to validate requests and responses to figure out where they are coming from and whether they are actually trustworthy? Or, even simpler, how do we avoid internal services talking to each other using standard credentials stored in configuration files (and, maybe worst of all, in some git repository available to the whole team)? Do we ensure that “production” systems are only able to talk to each other and are not reachable from, say, development or testing environments?

So why bother after all?

Being confronted with all these points myself, a defense of microservices wouldn’t be all too hard: there are established solutions to most of these problems. There are orchestration and deployment infrastructures to handle vast loads of servers and services, ranging from more “traditional” tools such as Puppet or Chef to large-scale environments such as Kubernetes. There are tools such as Zabbix, Nagios, the Elastic Stack and a plethora of others, in most cases open-source, freely available and just waiting for you to solve your problems with. Plus, for each of these problems you might find at least one good reason to give up on monolithic structures in favor of a more modular microservices architecture.

That’s actually okay. I don’t really see a monolith as a very desirable approach to application architecture anymore, either. But, as always: your mileage might vary. If there’s one thing to take away here, it should be: be careful about why you cut your system small. You will end up with a more distributed system that, at least initially, increases maintenance effort and adds a load of accidental complexity which might just make your system fail. So be critical about the drawbacks and downsides of such an approach, focus on business requirements and, most of all:

Come up with good reasons why to build a modular, distributed system instead of a monolith. Come up, too, with a good idea of how “small” or “large” a good size for a service is in your environment. These reasons might vary depending upon your business domain and company size, so I’ll soon spend some time pondering our reasons to still follow this path. Stay tuned… And feel free to add feedback and insights of your own. Far more than reporting our own ideas, I'd like to see the experiences others have made in this field...

(originally posted on dm.zimmer428.net)

Top comments (10)

chrisjdavie

Where I work at the moment loves microservices; I think there's one for every three devs. It's interesting to think about (and it puts a lot of stress on the infrastructure team).

For lots of the things we use microservices for, it seems a well-defined library would do just as well; if the contract between the different bits of code is well defined, it doesn't need to be a microservice, as there are other ways of splitting up code. (That also doesn't make the deploy system complicated.) There aren't the independent scaling benefits from microservices (they're all deployed on one server with one fall-back each), everything is pretty linearly dependent, so if one goes down everything stops, and there's a lot of excess code floating around configuring the endpoints on top of the business logic itself.

One benefit is that microservices force the devs to think about split responsibilities and obey interfaces, but I don't think microservices as a way to enforce discipline is sensible.

(There are two distinct products here, and having them as largely independent services makes a lot of sense - they don't rely on the same code base.)

Where I worked before, my main project wasn't a web service but a batch-processing pipeline. We needed to split our code base up, but we'd never have considered using microservices; it was four libraries. Sometimes the devs broke the interfaces in not-sensible ways, but that was fixed with code review.

Kristian R.

Yes, completely agree; this pretty much reflects my experience too. That's what I am pondering these days - step off the "microservices train" for a while and figure out which problems they actually solve, and whether they solve more problems than they create.

Ian bradbury

When I've been through this before, one of the biggest issues I/we've faced is politics. If you have services being provided by more than one team, gaining alignment across all groups can be a nightmare - especially if the teams are from different parts of the organisation.

Kristian R.

Ah. Nice. Now that's a completely different beast to tame, I guess. Around here, this is not so much of an issue as there aren't that many teams and coupling between their components is pretty loose; of course there are problems, but they aren't as massive as they possibly would be with all these folks working on a single code base in a monolith. ;) What are the biggest political issues you're facing there? API contracts? Deployment and uptime issues? I can imagine a few things...

Ian bradbury

I don't have any problems at the moment. It was just a comment based on previous experience.

But to add more detail...

Let's say you have a service, S1, that is provided by a division of the business, D1, and used by divisions D1 and D2.

Now suppose that D2 needs a new API endpoint that provides an alternate format of data, or some other change to the service S1.

Suddenly D2 needs to get D1 on side and persuade them to make the change they need.

D1's budgets and priorities are unlikely to be aligned with D2's.

This can be the biggest problem.

Kristian R.

OK, this is pretty much what I thought. The only reason we didn't completely get lost in this is a mere matter of (lack of) size. Aside from that, it shows rather well, in my opinion, that introducing changes such as DevOps, microservice architecture and the like will never really work out if they are limited to the technical staff and leave out management teams and team objectives. This seems a far more complex and difficult task to do well...

Ian bradbury

IMHO

There are two groups of people that are a must for achieving an SOA:

a] An active and respected Technical Architecture Group
b] Clear governance bodies that oversee the Services and applications

If you have both of these groups (and the technical know-how) then SOA can be achieved in a controlled and planned manner*.

* Even then it is bloody difficult.

Kristian R.

Yes, that seems a valid approach that could work. Or, another approach somewhere in between DevOps and "Agile":

  • Make all these teams responsible (technically, economically) for the service they offer.

  • Give them some "manager", something like a Scrum Product Owner maybe, to coordinate which changes need to be implemented, both in responding to "customer" requests (with "customers" mostly being other teams) and in responding to issues found in operating the service.

  • Make sure these structures are known well internally so people know who they have to talk to in order to get what they need.

That's the road we're trying to take at the moment, unsure how well it will work. So far we have at least seen this "not fail" at companies around us, though of course it has more than just a few pitfalls. At the moment, I am still watching and observing people who try to do this right, especially having the question of technical governance in mind, as I still think this is an approach that only works for a very specific kind of application environment - or company. Not even talking about the cultural changes in a company when trying to establish something like that after having been in operation for a couple of years... ;)

Ian bradbury

I just found another post that is related to this conversation.

dev.to/sparkpost/restful-api-versi...

Kristian R.

Ah, thanks for sharing. On my way to read through this, slightly biased up-front as I have seen a load of bad ideas (and rather bad disputes) when it comes to REST API versioning. Let's see ... ;)