DEV Community

Nicolas El Khoury

Proposed Infrastructure Setup on AWS for a Microservices Architecture (4)

Chapter 4: Deployment Strategies for Microservices.

Chapter 3 promoted one approach to deploying microservices, along with best practices for achieving security, scalability, and availability. An improper deployment of microservices can lead to numerous problems, namely bottlenecks, single points of failure, increased downtime, and more.

Now that the best practices and considerations have been discussed, what follows describes some of the technologies that can be employed to manage and orchestrate microservices.

Microservices come with numerous advantages. One of the most important is the isolation (and independence) each microservice provides from the rest of the system. Consider an application composed of three microservices: a Catalog service, a Customer service, and a Payment service. If well architected, the failure of one microservice impacts only its own part of the system, not the system as a whole. For example, if the Payment service fails, payments will fail; however, users should still be able to use the functionality provided by the other two services. Another advantage of microservices is their scalability. If, in the same example, the Catalog service receives much more traffic than the Payment service, it makes more sense to run more replicas of the Catalog service than of the Payment service.
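The failure isolation described above can be sketched in a few lines of Python. The service names, functions, and responses below are purely illustrative stand-ins for real network calls, not an actual API:

```python
# Hypothetical sketch: graceful degradation when one microservice fails.
# In a real system these functions would be HTTP calls to separate services.

class ServiceUnavailable(Exception):
    """Raised when a downstream microservice cannot be reached."""

def call_payment_service(order_id: str) -> dict:
    # Stand-in for a call to the Payment service; it always fails here
    # to simulate an outage of that one service.
    raise ServiceUnavailable("payment service is down")

def call_catalog_service() -> list:
    # Stand-in for a call to the Catalog service, which is healthy.
    return ["book", "laptop"]

def checkout(order_id: str) -> dict:
    """Only the payment feature degrades; the caller gets a clear status
    instead of a crash that would take down unrelated features."""
    try:
        payment = call_payment_service(order_id)
        return {"status": "paid", "payment": payment}
    except ServiceUnavailable:
        return {"status": "payment_unavailable", "retry": True}

# Browsing the catalog is unaffected by the payment outage.
print(call_catalog_service())   # ['book', 'laptop']
print(checkout("order-42"))     # {'status': 'payment_unavailable', 'retry': True}
```

The key design point is that the failure of the Payment service is contained at the call site: the rest of the system keeps serving its own features.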

To ensure the aforementioned scalability and availability, proper orchestration tools must be employed. Before digging deeper into orchestration tools, below is a list of deployment modes for running microservices:

  1. Physical Servers: Installing and configuring physical servers is one way of deploying and running microservices. However, given today's technologies, the large amount of resources offered by modern machines, and the wide adoption of cloud-based solutions, managing your own physical servers is rarely the best idea. In addition to the lack of scalability options and the misuse of resources (whether under-utilization or over-utilization), managing physical servers on-premises comes with high capital and operational expenditures. Moreover, each microservice must run in its own isolated environment: running multiple microservices on the same physical server may hinder this isolation, while running each microservice on its own dedicated server is wasteful and far from optimal.

  2. Virtual Machines: Dividing a physical machine into multiple virtual machines is definitely a better approach. Each virtual machine spun up on a physical server acts as an independent, isolated environment, making it possible to host multiple microservices on the same machine and thus achieve better resource utilization. However, virtual machines come with their own disadvantages: each one runs its own full copy of an operating system on top of virtualized hardware, which consumes an excessive amount of RAM and CPU. Despite being a better solution than running bare physical servers, virtual machines are still not quite suitable for hosting microservices. Examples of virtual machine technologies include, but are not limited to: VirtualBox, Hyper-V, and KVM.

  3. Containers: Similar to virtual machines, containers provide isolated environments that can run on a single host. However, all containers running on a machine share the physical server and the host's operating system kernel. Because containers do not need their own full copy of the operating system, they consume far less of the host's CPU and RAM. Containers are therefore lightweight, isolated environments that allow faster deployment and accommodate a larger number of microservices on the same host than virtual machines can. Linux Containers (LXC) and Docker are examples of container technologies.

  4. Serverless: As the name suggests, serverless is an approach that abstracts all server management away from the users. Servers still exist, and the application runs on top of them, but they are managed by the cloud providers (e.g., Amazon Web Services, Microsoft Azure). Moreover, functions hosted on serverless platforms are only charged for the time they are actually running. As opposed to the three aforementioned technologies, when the application receives no traffic, it is not considered running. Serverless brings several advantages, namely high scalability, reduced charges, and no servers to manage and maintain. Unfortunately, serverless also comes with notable disadvantages. Since the functions are not running when idle, latency may occur when a function is first triggered (a "cold start"). More importantly, each cloud provider offers its own set of libraries and interfaces for writing applications with these technologies. Therefore, when changing cloud providers, or even technologies, one risks having to perform major code modifications.
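As a concrete illustration of the serverless model, below is a minimal sketch of an AWS Lambda function in Python. The handler signature (`event`, `context`) is the standard one for the AWS Lambda Python runtime; the event shape assumes an API Gateway proxy integration, and the business logic is purely illustrative:

```python
import json

def lambda_handler(event, context):
    # On AWS, the platform invokes this function and supplies event/context;
    # no server is provisioned or managed by the developer.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event, to show the request/response shape.
print(lambda_handler({"queryStringParameters": {"name": "dev"}}, None))
```

Note that this handler signature is AWS-specific: Google Cloud Functions and Azure Functions each expect a different entry-point shape, which is exactly the portability concern raised above.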

In summary, this article discussed several deployment modes for software applications built using the microservices approach. Evidently, container and serverless technologies are newer and better suited to microservices than virtual machines and physical servers. The next chapter will discuss how serverless and containers can complement each other, in addition to the benefits of combining them.

