
Microservices - Advantages and Challenges

Microservices are the current buzzword, and many platforms have begun shifting towards them. This article goes through how microservices can dramatically improve the services you deliver, and the challenges that come with adopting them.

Why microservices? The basic idea is to divide a system into many individual components that still work together, hence the name: micro (small) services. A large, monolithic project is often unwieldy, and its deployments tend to move at a slower pace than those of a system built from microservices.

With the cloud, microservices can be deployed and scaled according to user demand, unlike a monolith in which scaling applies to every part of the system, even components that are rarely used. Conversely, if one part of a monolith is under heavy load, it will impact the delivery of the other services in the system. In a microservices environment this does not happen, as the services operate autonomously.

Say, for example, you have a monolith system that deals with loans. You have a website on which users can apply for loans, and backend systems dealing with onboarding, disbursement of funds and reconciliation of payments.
In broad categories, you can divide the system into the following:

  1. Website
  2. Onboarding
  3. Disbursement of funds
  4. Reconciliation of payments

This follows Domain-Driven Design, where features are grouped by their domain and each group is defined as a service.
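
As a rough sketch of what those boundaries could look like in code, each backend domain can be modelled as its own independently deployable service with a narrow interface. The interface and method names below are illustrative only; they are not taken from any real system.

```java
// Each bounded context becomes its own service with its own API and data.
// The Website (frontend) talks to these services; they do not share a database.

interface OnboardingService {
    // Registers a new client and returns an identifier other services can reference.
    String onboardClient(String name, String email);
}

interface DisbursementService {
    // Pays out an approved loan to an onboarded client.
    void disburseLoan(String clientId, long amountCents);
}

interface ReconciliationService {
    // Matches an incoming repayment against an outstanding loan.
    void reconcilePayment(String clientId, long amountCents);
}
```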
Let's explore the advantages and the challenges of implementing microservices with the sample architecture above.

Simplicity in Design
By splitting up a monolith architecture, you turn one complex service into multiple small, simpler services. This brings the following advantages:

1. Improvements in Productivity
With smaller services, deployments are smaller in general. The development cycle is shorter and easier to manage, and changes can be made more quickly without worrying about the entire ecosystem as a whole. In a monolith, you have to track how a change affects the whole ecosystem instead of just focusing on the feature itself.

Reconciliation, for example, can be updated and deployed without waiting for the Disbursement code to be ready. This allows the engineers working on Reconciliation to mark their task as complete and move on to other tasks.

2. Flexibility in deploying technologies

In a monolith, certain constraints are laid out by the system itself. These may come from the operating system, the application server or even the programming languages used. With a microservices architecture, Reconciliation can be written in C# and Disbursement in Java, and the two can still be expected to work together. You are no longer bound to a specific technology.

New technology can be added to and removed from the ecosystem quickly, in a modular fashion, without worrying that one particular change might cause catastrophic failure across all services.

I can add an Elasticsearch component to the Onboarding service should I wish to, and it would not impact the other services so long as they are still able to get the information they require from Onboarding.

3. Separate Management of Services

The server instances operate independently of one another. This allows you to dynamically allocate the physical resources each instance needs. If there are no clients, Reconciliation does not require many resources; once there is a significant number of clients, it requires more, depending on the volume of active loans.

In a monolith you generally cannot focus on one segment of the system; any memory assigned to it is left to the application server to allocate. An additional 16 GB of RAM, say, is distributed by the application server amongst its services, and it may not end up allocated to the component that needs it most.

In a microservices architecture, you can assign more resources to a specific service and ensure that it fully utilizes them. This is made easier by cloud platforms, which can scale according to need.

Now for the challenges of such a design

1. Server maintenance and upkeep
There are now 4 different server instances that need to be managed and maintained. You must provision all 4 of them: 4 different application servers and 4 database instances. You also need to ensure that the applications and security are taken care of on all 4 instances.

Solution:
To overcome this challenge, Docker and virtual images can be utilized. A server image is created, configured and hardened accordingly.

When a service is deployed, it pulls the most up-to-date image and deploys the code onto it. This allows security practitioners to focus on hardening that one image, which can then be quickly propagated across the ecosystem.

2. Integration and consistency/concurrency problems
While the services are separate, the data that feeds into one component may need to be populated and processed by another part of the system. All 4 services need to know the loan details, such as loan volume, client details and repayment dates.

In a monolith, there is one central database, so you are assured of having the latest copy of the data at any given time. However, that database becomes a bottleneck for any deployment, as any database change affects the system as a whole and every other component needs to account for it.

With microservices there are 4 different databases, each operating independently of the others. This brings a problem of consistency: which copy of the client's information is the latest in such a system?

Solution:
To deal with such problems, the services need to communicate with each other to keep their data up to date.

RESTful API calls:
Services can be built with exposed APIs that other services can call to send and process data accordingly. This allows service A to update the data in service B via an API call. However, such calls are typically triggered explicitly by the caller. For autonomous communication with a guarantee of processing, message queues should be used instead.
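
As a minimal sketch of such a call, here Reconciliation asks the Disbursement service for a loan's details using Java 11's built-in HttpClient. The host name, path and loan ID are placeholders for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoanStatusClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Reconciliation calling Disbursement's exposed API for loan 42.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://disbursement.internal/loans/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The caller decides how to handle failures; here we just print the body.
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```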

Message Queues:
Data is queued and processed at regular intervals. Any failures should be rerouted to a failure (dead-letter) queue, to be either reprocessed manually or investigated for the reason of failure.
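
Here is a minimal sketch of that pattern using the RabbitMQ Java client; the queue name, dead-letter exchange and message format are assumptions made for illustration.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

import java.nio.charset.StandardCharsets;
import java.util.Map;

public class RepaymentQueueSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker location is an assumption

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Failed messages are routed to a dead-letter exchange so they can
            // be inspected or reprocessed manually later.
            channel.exchangeDeclare("repayments.dlx", "fanout");
            Map<String, Object> arguments =
                    Map.of("x-dead-letter-exchange", "repayments.dlx");
            channel.queueDeclare("repayments", true, false, false, arguments);

            // Producer side: Disbursement publishes a repayment event.
            String event = "{\"loanId\":42,\"amountCents\":125000}";
            channel.basicPublish("", "repayments", null,
                    event.getBytes(StandardCharsets.UTF_8));

            // Consumer side: Reconciliation processes events as they arrive and
            // only acknowledges a message once processing has succeeded.
            DeliverCallback onMessage = (tag, delivery) -> {
                String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
                System.out.println("Reconciling " + body);
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("repayments", false, onMessage, tag -> { });
        }
    }
}
```

In practice the producer and consumer live in separate services, and the consumer runs as a long-lived process; they are combined here only to keep the sketch self-contained.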

3. Authentication / Authorization
The need for communication raises another challenge. The four services need to work with each other, but they also need to be independently secured. In a monolith, one login lets you access all the features you need, as they all sit in one place behind one gate.

In the given scenario, a user who reaches the frontend and logs in successfully expects to be able to access the information they require from the different services, but how will those services know that the user has been successfully authenticated?

There are also scenarios where you want to expose services to the public or to external vendors, which makes this tricky to manage.

Solution:
To resolve this, there is usually a dedicated authentication server. All the services check against it to verify that the user is who they claim to be and to determine what data they are allowed to view or alter.
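
A minimal sketch of that check is below, assuming a hypothetical introspection endpoint on the central auth server; the URL and response shape are made up for illustration, and in practice this is often OAuth2 token introspection or JWT validation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenCheck {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    // Each service calls the central auth server before serving a request.
    // The endpoint and the meaning of its response are assumptions.
    static boolean isAuthenticated(String bearerToken) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://auth.internal/introspect"))
                .header("Authorization", "Bearer " + bearerToken)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        HttpResponse<String> response =
                HTTP.send(request, HttpResponse.BodyHandlers.ofString());

        // A 200 means the token is valid; the body would carry the user's
        // identity and roles so the service can decide what they may view or alter.
        return response.statusCode() == 200;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isAuthenticated("example-token"));
    }
}
```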

Biggest Push for Microservices
I believe the biggest push for microservices is the process of work.

A development cycle usually consists of planning, requirements analysis, implementation, testing, deployment and maintenance. As the system keeps delivering services, unexpected scenarios can occur, such as exponential growth (LinkedIn and Facebook both faced this challenge at some point).

To stay on the cutting edge, speed is key, but you cannot compromise on the quality of work. If you do, you incur technical debt that must be repaid at some point. You will pay for it one way or another; there is no avoiding it unless the system is eventually dropped.

As speed is the name of the game, deployments must become faster and faster. Every organization needs to identify the bottlenecks in its deployment process in order to refine it. Many people are under the misconception that adding more people or pushing for overtime will deliver the product or features faster, but if you do not identify the constraints in how a product flows from the workstation to the customer, you will never see significant improvement in the system.

You can have 10 developers all working on different features, but if they have to wait for each other to finish before everything is pushed out in one deployment, there lies the constraint. The deployment is the constraint, because a developer's work is not complete until it has been deployed and tested.

Microservices address exactly that: by splitting one big service into smaller ones, the 10 developers can work on different services. It is unlikely that a single feature will span several services, but even if it does, each deployment is far smaller than one big deployment of one big service.

Tom can work on the Onboarding service and deploy it successfully, just as Jerry can work on Reconciliation and deploy it without waiting for Onboarding, as long as whatever Tom is working on does not impact Jerry. By doing so, you deliver new features faster.

Tools needed to ensure that microservices succeed

1. Test-driven Development
To improve the speed of deployments, it is critical to have checks in place. This is true of any system, monolith or microservices. With test-driven development, instead of writing a feature first, the developer designs the tests for the feature before working on the feature itself.
The advantage of every feature having unit tests is that when developers deploy a change, issues and bugs that would otherwise only surface during manual testing are flagged as failing tests. The developers know immediately what they have to fix, without waiting for the QA team to tell them or, worse, for it to be flagged as a critical error in a production environment.
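
As a minimal sketch of the test-first flow with JUnit 5, the tests below for a hypothetical RepaymentCalculator are written before the class exists, and the implementation is then filled in until they pass.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class RepaymentCalculatorTest {

    // Written first: these tests pin down the expected behaviour.
    @Test
    void splitsLoanIntoEqualMonthlyInstalments() {
        RepaymentCalculator calculator = new RepaymentCalculator();
        assertEquals(1000, calculator.monthlyInstalment(12000, 12));
    }

    @Test
    void rejectsZeroMonths() {
        RepaymentCalculator calculator = new RepaymentCalculator();
        assertThrows(IllegalArgumentException.class,
                () -> calculator.monthlyInstalment(12000, 0));
    }
}

// Written second: the smallest implementation that makes the tests pass.
class RepaymentCalculator {
    int monthlyInstalment(int totalCents, int months) {
        if (months <= 0) {
            throw new IllegalArgumentException("months must be positive");
        }
        return totalCents / months;
    }
}
```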

However, this makes each feature take longer to develop, as the developer must create not only the feature but also the complete set of tests around it. That may sound counterproductive when we are talking about faster deployment, but with these checks in place, any new feature can be added with less worry that it will break something in the system and escape into production.
This is especially painful if the existing system does not already have such tests. Introducing them will hurt, but ultimately it will be worth it.

2. Docker / Virtual Machines
Virtual images help greatly in the deployment process. While fixed servers and manual deployments can be sufficient, virtual machines allow every deployment to start from a fresh state.
If a server is old and has never been replaced, there may be old configuration or files that impede new deployments. For instance, if an old version of a library is installed on the server and you deploy with an updated library, there may be conflicts or other unforeseen issues.
Thus, using virtual machines (or containers) for redeployment is very useful.

Closing:

Although microservices architecture promises much, it is not a silver bullet. There must be support and good practices in place to back up this choice; otherwise, you will not reap the full speed of microservices deployments.
For old systems, expect the changes to be painful and slow, but once the full potential of a microservices architecture is realized, it will pay off well for future feature deliveries.
