
James Eastham

Originally published at jameseastham.co.uk

Adding functionality to existing Microservices

Following on from the first article about my process of architecting microservices, today I’m going to look at adding functionality.

Traditionally, in a monolith, new features can cause concern.

What if I add a feature and it breaks a seemingly unrelated part of the system?

What if I change a piece of code I shouldn’t and the whole system topples over?

What if I push a deployment incorrectly and cause an outage?

All legitimate concerns, especially when every piece of your system's functionality is wrapped up in a single application.

Microservices offer a better way.

A Better Way to Add Features

To quickly summarise where we left the system at the end of the last article, we currently have seven microservices that:

Handle a POST request containing email data and the requisite attachments
Upload the attachments to an OCR engine (ABBYY FlexiCapture Cloud)
Receive the recognized index data back from ABBYY
Post that data on to a third-party API

As it turns out, there was one vital piece of functionality we missed in our initial spec.

Emails are received from multiple different suppliers, and we need to identify which supplier an email came from before sending it up to the OCR engine. The lookup itself is nice and simple: it is based purely on the email address the email was sent FROM. Easy… right?
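To make that rule concrete, here is a minimal sketch of the lookup. The addresses, supplier names and function name are all made up for illustration; the real mapping will live inside its own service, as we will see shortly.

```python
# A minimal sketch of the lookup rule only: the FROM address is the key.
# The addresses and supplier names are illustrative.
SUPPLIERS_BY_FROM_ADDRESS = {
    "invoices@acme-supplies.example": "Acme Supplies",
    "billing@widget-co.example": "Widget Co",
}

def identify_supplier(from_address: str) -> str | None:
    """Return the supplier name for an inbound email, or None if unknown."""
    return SUPPLIERS_BY_FROM_ADDRESS.get(from_address.lower())
```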

Start with the messages

Messages are first-class citizens in any microservice architecture

Remembering that messages are first-class citizens in microservice land, what new messages do we have?

[Image: the new messages added to the message table]

Moving on to the new activities required, and the amendments to the existing Send for OCR activity:

[Image: the new and amended activities, including Send for OCR]

And finally, any new services that are required. The logical grouping here is nice and simple:

[Image: the new service groupings]

Getting to production

So we go ahead and create the new supplier service. It uses a really simple NoSQL database to store key-value pairs mapping email addresses to their respective supplier names.
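As a rough sketch (not the actual implementation), the supplier service could look something like the following. A plain dictionary stands in for the NoSQL key-value database, and the GET /suppliers?email=... route, port and response shape are assumptions made for the sake of the example.

```python
# Sketch of the supplier-store service: a key-value lookup behind a simple
# HTTP GET endpoint. A dict stands in for the real NoSQL database, and the
# route, port and response shape are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

SUPPLIER_STORE = {
    "invoices@acme-supplies.example": "Acme Supplies",
}

class SupplierHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assumed contract: GET /suppliers?email=<from-address>
        query = parse_qs(urlparse(self.path).query)
        email = (query.get("email") or [""])[0].lower()
        supplier = SUPPLIER_STORE.get(email)
        if supplier is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps({"email": email, "supplier": supplier}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SupplierHandler).serve_forever()
```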

We then deploy this service into our Kubernetes cluster. Because of the completely decoupled nature of the system, we can do this without fear of system outages.

As far as the rest of the current system is concerned, nothing at all has changed. The system continues to function as normal.

Once we are sure the supplier store is up and running, we can then make changes to our OCR service to build in the supplier lookup.
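As a rough sketch, that hook could be as simple as the call below, assuming the lookup is a plain RESTful GET against the supplier-store (the service URL, route and timeout are illustrative rather than the real contract).

```python
# Sketch of the OCR service calling the supplier-store over HTTP.
# The base URL would normally come from configuration.
import requests

SUPPLIER_STORE_URL = "http://supplier-store/suppliers"  # illustrative service name

def lookup_supplier(from_address: str) -> str | None:
    response = requests.get(
        SUPPLIER_STORE_URL,
        params={"email": from_address.lower()},
        timeout=2,  # fail fast so a slow supplier-store cannot stall OCR submission
    )
    if response.status_code == 404:
        return None  # unknown sender
    response.raise_for_status()
    return response.json()["supplier"]
```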

After adding the code to hook into the supplier store (for the moment, assume this is a simple RESTful GET request), pushing out the changes to the OCR service is slightly more challenging. It is a production flow, and an interruption to service is bad.

But fear not, microservices have the answer. And it comes from the dark and morbid world of coal mining.

Canary Releases

Early coal mines did not feature any kind of ventilation system, so miners used to keep a canary with them when down in the mines. Canaries are extremely sensitive to methane and carbon monoxide, so a dead canary = evacuate the mine.

A short but extremely meaningful life for the canaries of the 20th century.

Taking that same principle, we will deploy our new OCR service into production but only allow it to take a small percentage of the production traffic (I normally start with 1%).

Once we know the system is functioning as normal, we can then scale up the percentage of traffic the new service is taking until it is at 100%. If at any point the canary ‘dies’, we can quickly revert with minimal loss of functionality.
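In practice the traffic split lives in the ingress or service mesh rather than in application code, but conceptually a 1% canary is nothing more than a weighted choice between the stable and canary backends, roughly like this:

```python
# Conceptual sketch only: real canary routing is handled by the ingress or
# service mesh, not hand-rolled in the application.
import random

CANARY_WEIGHT = 0.01  # start small, then ramp towards 1.0 as confidence grows

def choose_backend() -> str:
    return "ocr-service-canary" if random.random() < CANARY_WEIGHT else "ocr-service-stable"

# Rough check of the split over many simulated requests
counts = {"ocr-service-canary": 0, "ocr-service-stable": 0}
for _ in range(100_000):
    counts[choose_backend()] += 1
print(counts)  # roughly 1% of requests should hit the canary
```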

Isn’t that much better than the monolithic alternative?

Extending the features

After deploying the new OCR service, everything works fine and we soon have supplier lookups running seamlessly in production.

However, we soon realize that the latency of doing a lookup over HTTP is causing a slowdown in the system that wasn’t initially planned for.

So what can we do?

Initially, you might think to give the OCR service direct access to the supplier database. Whilst that might seem like a perfectly valid way to do things, it breaks one of the core rules of microservices.

Each microservice should have its own database that only it can access.

Coupling two separate services to the same database starts to build up a distributed monolith, and that really is the worst of both worlds.

Keeping in mind that each microservice should have its own database, we decided to implement a simple cache within the OCR service.

So what does that look like?

When the OCR service starts up, it publishes a request to the event bus asking for as much supplier information as is currently available.

The supplier-store responds with a list of all current suppliers, which the OCR service stores in a Redis cache instance. The supplier-store also publishes an event every time a new supplier is added, which the OCR service is subscribed to.
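A sketch of the cache-maintenance side of that could look like the following. I have assumed redis-py for the cache and left the event bus client out entirely, since the mechanics are the same whichever broker you use; the message shapes are illustrative.

```python
# Sketch of the OCR service keeping its local supplier cache up to date.
# The event bus wiring is omitted; these are just the two message handlers.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def handle_supplier_list(message: str) -> None:
    """Handle the supplier-store's reply to the startup 'send me everything' request."""
    for supplier in json.loads(message):
        cache.set(supplier["email"].lower(), supplier["name"])

def handle_supplier_created(message: str) -> None:
    """Handle the event published whenever a new supplier is added."""
    supplier = json.loads(message)
    cache.set(supplier["email"].lower(), supplier["name"])
```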

The supplier lookup code in the OCR service is amended to first check its internal cache before making the HTTP request to the supplier-store directly.
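Putting it together, the amended lookup becomes cache-first with the HTTP call as a fallback. The names and contracts below are the same assumptions used in the earlier sketches, not the real code.

```python
# Cache-first supplier lookup: check the local Redis cache, fall back to
# the supplier-store's HTTP endpoint on a miss, then backfill the cache.
import redis
import requests

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
SUPPLIER_STORE_URL = "http://supplier-store/suppliers"  # illustrative

def lookup_supplier_cached(from_address: str) -> str | None:
    email = from_address.lower()
    cached = cache.get(email)
    if cached is not None:
        return cached
    # Cache miss: fall back to the HTTP lookup
    response = requests.get(SUPPLIER_STORE_URL, params={"email": email}, timeout=2)
    if response.status_code == 404:
        return None
    response.raise_for_status()
    supplier = response.json()["supplier"]
    cache.set(email, supplier)  # backfill so the next lookup stays local
    return supplier
```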

The process of keeping two separate data stores in sync, but not in real-time, is called eventual consistency.

The effect on our messages

This adds a couple of extra messages to our message table:

[Image: the extra messages added to the message table]

It also changes our supplier created activity:

[Image: the amended supplier created activity]

So, how have microservices helped in this situation?

Well, across the seven services already running in production, we added a completely new system function by deploying one new service and touching only one of the existing ones.

That means we can be extremely confident the rest of the system will continue to function as the new functionality is deployed. It also means that if there is a problem, we have a much smaller set of code to debug.

Everybody wins in the world of microservice deployment, apart from the canaries of course. They always seem to be getting a rough time.

