
Getting Into Microservices? Don't Build a Spaghetti Monster.

Marko Anastasov

The most common reason why a distributed system is unable to sustain iterative releases and turns into a horror story is tight coupling between components.

When building (micro)services, the key decisions are about defining their boundaries and how they communicate.

Changing one service shouldn't require changing another. If one service goes down, the other services, let alone the system as a whole, should not go down with it. Services with well-defined boundaries allow us to change a behavior in one place and release that change as quickly as possible.

We don't want to end up with a system where a single change requires edits in many different places. That process is slow and blurs code ownership, and deploying more than one service at a time is risky.

A loosely coupled service contains related behavior in one place and knows as little as possible about the rest of the system with which it collaborates.

A loosely coupled system is conservative in the design of communication between services. Services usually communicate by making asynchronous remote procedure calls (RPC), expose a small number of endpoints, and assume that failures will happen. There is no shared database, and all database changes are applied iteratively as part of the CI/CD pipeline.
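To illustrate the "assume that failure will happen" point, here is a minimal sketch of one service calling another through an injected transport. The service name, endpoint path, and fallback values are all hypothetical; the point is that the caller degrades gracefully instead of crashing when its collaborator is down.

```python
import json

def fetch_profile(user_id, transport, timeout=2.0):
    """Call a (hypothetical) profile service, assuming it can fail.

    `transport` is any callable that performs the remote call; injecting
    it keeps this code loosely coupled and easy to test in isolation.
    """
    try:
        raw = transport(f"/profiles/{user_id}", timeout=timeout)
        return json.loads(raw)
    except Exception:
        # Degrade gracefully instead of taking the whole system down.
        return {"id": user_id, "name": "unknown"}

# A stand-in transport that simulates the remote service being down.
def broken_transport(path, timeout):
    raise TimeoutError("profile service unreachable")
```

Because the caller only knows the transport's call signature, the profile service can change its internals, or fail entirely, without forcing a change here.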

Metrics and monitoring are also an important part of the feedback loop that enables iterative development. Having metrics that can detect issues in real-time gives us confidence to make changes knowing that we can quickly recover from any error.
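As a sketch of that feedback loop, here is a minimal error-rate check over a sliding window of recent requests. The class name, window size, and threshold are illustrative assumptions; in production this role is played by dedicated tooling such as Prometheus plus an alerting rule.

```python
from collections import deque

class ErrorRateMonitor:
    """Flag when the recent error rate crosses a threshold (sketch only)."""

    def __init__(self, window=100, threshold=0.05):
        self.results = deque(maxlen=window)  # True = success, False = error
        self.threshold = threshold

    def record(self, ok):
        self.results.append(ok)

    def unhealthy(self):
        if not self.results:
            return False
        errors = self.results.count(False)
        return errors / len(self.results) > self.threshold
```

A check like this, wired to an alert or an automatic rollback, is what gives us the confidence to deploy small changes frequently.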

This is an excerpt from my book on CI/CD for cloud native applications.

Discussion

Jeremy Forsythe

Can you elaborate on running migrations in the CI/CD pipeline? Do you have migrations as a part of the microservice owning the database and run them on successful build? What happens on failure? Obviously the app has to be forward compatible to handle a migration running before a deploy, so how do you handle breaking changes to the schema?

We currently run migrations on deploy with a strict set of rules for forward and backward compatibility and require two migrations for breaking changes (deprecation and then removal) which works okay and ensures consistency, but I'd like to hear how you do this and see if it is any simpler.

Marko Anastasov (Author)

Thanks for the question. I haven’t seen a simpler way, and the approach works for both monoliths and microservices.

  1. Write and deploy code that works with both the old and the new schema.
  2. Write and deploy the migrations.
  3. Remove the code paths that work with the old schema.
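Step 1 above can be sketched as code that tolerates both schemas at once. The column names here are hypothetical (imagine splitting a single `name` column into `first_name` and `last_name`):

```python
def full_name(user_row):
    """Works with both the old schema (a single `name` column) and the
    new schema (`first_name` + `last_name`), so the code deploy and the
    migration can happen in either order without breaking reads."""
    if "first_name" in user_row:  # new schema
        return f"{user_row['first_name']} {user_row['last_name']}"
    return user_row["name"]       # old schema still in use

# Valid before the migration runs...
before = full_name({"name": "Ada Lovelace"})
# ...and after it, which is what makes step 2 safe to deploy separately.
after = full_name({"first_name": "Ada", "last_name": "Lovelace"})
```

Once the migration has run everywhere, step 3 deletes the old-schema branch, leaving the code coupled only to the new schema.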