DEV Community

Supratip Banerjee for AWS Community Builders


Deployment Architecture and Cloud

The biggest change we can expect today, and in the future, is the pace of deployment. The waterfall model, in which release cycles lasted months or even years, is now rare. Product teams deploy releases to production earlier and more often.

Today, with Service Oriented Architecture and microservices, the code base is a collection of loosely coupled services. As a result, developers can write and deploy changes to different parts of the code base simultaneously and frequently.


The business advantages of shorter deployment cycles are clear:
• Time-to-market is reduced
• Customers receive product value sooner
• Customer feedback flows back to the product team faster, enabling the team to track features and fix issues quickly
• Overall developer morale improves


This move, however, also creates new challenges for the DevOps team. With more frequent releases, there is a risk that the code deployed will adversely affect the functionality of the website or the customer experience. That’s why it’s important to develop code delivery strategies that minimize risk to both the product and customers.

To meet these challenges, application and DevOps teams must devise and adopt a deployment strategy suited to their use case. I will talk about two such deployment strategies here.

Blue Green Deployment

In a Blue Green deployment, two identical servers or production environments are maintained, named Blue and Green.

At any given time, only one environment is live. In this example, Blue is the environment currently running the latest production code and receiving all user traffic. Green is currently idle but has exactly the same system configuration.


Now the development team deploys the new version of the application to Green. Internal testing begins, and upon a successful result, all application traffic is routed from Blue to Green. Green is now the new production environment.


Below are the benefits that I find useful:

• Power of rollback (reduced risk): if there is an issue after Green goes live, we can quickly switch back to Blue and work on fixing the problem. Production stays up and running on the older version in the meantime.
• No downtime: since one of the two parallel environments is always up, there is no question of downtime.
• Fewer bugs: because the team tests new features in a production-grade environment, far fewer bugs slip through.
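The switch-and-rollback mechanics above can be sketched in a few lines. This is a toy model for illustration (the `BlueGreenRouter` class and version strings are made up), not a real load balancer:

```python
class BlueGreenRouter:
    """Toy model of a Blue Green switch: one environment is live, the other idle."""

    def __init__(self):
        self.envs = {"blue": "v1", "green": "v1"}
        self.live = "blue"  # all user traffic goes here

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version):
        # New code always lands on the idle environment first.
        self.envs[self.idle] = version

    def swap(self):
        # Route all traffic to the other environment (release or rollback).
        self.live = self.idle


router = BlueGreenRouter()
router.deploy("v2")   # Green now runs v2; Blue still serves v1
router.swap()         # Green goes live: release complete
assert router.envs[router.live] == "v2"
router.swap()         # issue found: instant rollback, Blue (v1) is live again
assert router.envs[router.live] == "v1"
```

Notice that rollback is just another swap, which is why it is so fast: the old version never stopped running.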

Azure

Azure App Service provides a wonderful feature to swap two deployment slots of an app. When we deploy our web app, web app on Linux, mobile back end, or API app to Azure App Service, we can use a separate deployment slot (the Green environment) instead of the default production slot.

Deployment slots are live apps with their own host names. App content and configuration elements can be swapped between two deployment slots, including the production slot. After validating the changes in the staging slot, we can swap it with the live production slot, which eliminates downtime. And if the changes swapped into production aren't what we expect, we can instantly swap back to the old version (rollback).
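The swap semantics can be illustrated with a small sketch. This is a simplified model that assumes slots are plain dictionaries; in App Service, settings marked as slot-specific stay with their slot during a swap, which the hypothetical `sticky_keys` parameter mimics here:

```python
def swap_slots(production, staging, sticky_keys=()):
    """Toy model of an App Service slot swap: app content and settings are
    exchanged, while slot-sticky settings stay with their original slot."""
    # Hold back the sticky settings before swapping.
    prod_sticky = {k: production["settings"][k] for k in sticky_keys if k in production["settings"]}
    stag_sticky = {k: staging["settings"][k] for k in sticky_keys if k in staging["settings"]}
    # Exchange app content and settings between the two slots.
    production["app"], staging["app"] = staging["app"], production["app"]
    production["settings"], staging["settings"] = staging["settings"], production["settings"]
    # Restore the sticky settings to their home slots.
    production["settings"].update(prod_sticky)
    staging["settings"].update(stag_sticky)


production = {"app": "v1", "settings": {"DB": "prod-db", "FLAG": "off"}}
staging = {"app": "v2", "settings": {"DB": "stage-db", "FLAG": "on"}}
swap_slots(production, staging, sticky_keys=("DB",))
assert production["app"] == "v2"                  # new code is live
assert production["settings"]["DB"] == "prod-db"  # sticky setting stayed put
```

Swapping the same two slots again is exactly the rollback described above.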

AWS

This deployment pattern can be achieved by installing the latest version on a new set of instances with the help of an EC2 Auto Scaling group. CodeDeploy then reroutes load balancer traffic to the new set of instances running the latest version. After traffic is rerouted to the new instances, the existing instances can be terminated. Blue/green deployments allow you to test the new application version before sending production traffic to it.
If there is an issue with the newly deployed application version, it can be rolled back to the previous version faster than with in-place deployments. Additionally, the instances provisioned for the blue/green deployment reflect the most up-to-date server configurations, since they are new.
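Conceptually, that flow boils down to three steps: provision, reroute, terminate. The sketch below is a hypothetical simulation (the list-of-strings "instances" and the `blue_green_rollout` helper are made up for illustration), not real AWS API calls:

```python
def blue_green_rollout(load_balancer, old_instances, new_version):
    """Toy model of the CodeDeploy blue/green flow on EC2."""
    # 1. Provision a fresh replacement fleet (stand-in for an Auto Scaling group).
    new_instances = [f"{new_version}-{i}" for i in range(len(old_instances))]
    # 2. Reroute load balancer traffic to the replacement fleet.
    load_balancer["targets"] = new_instances
    # 3. Terminate the original instances once traffic has shifted.
    terminated = list(old_instances)
    old_instances.clear()
    return new_instances, terminated


lb = {"targets": ["v1-0", "v1-1"]}
fleet = ["v1-0", "v1-1"]
new_fleet, retired = blue_green_rollout(lb, fleet, "v2")
assert lb["targets"] == ["v2-0", "v2-1"]  # traffic now hits the new fleet
assert retired == ["v1-0", "v1-1"]        # old fleet is safe to terminate
```

Because step 3 only happens after step 2 succeeds, the old fleet is still available for a fast rollback until the very end.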

Canary Release

A Canary release is not very different from a Blue Green deployment, except that it is even more risk-averse. Let's talk about the origin of the name: in old British mining practice, canary birds were sent into mines first to detect carbon monoxide and other toxic gases, to make sure it was safe for humans.

So how does this compare to our context? When we want to ship a new version of an application to production, we can use a canary to make sure the new changes can survive exposure to a broader public. With a Canary deployment, we deploy the new application code to a small part of the production infrastructure and route only a small percentage of real users to it. This minimizes the impact of any problem. If no bugs or errors are reported, the new version can be gradually rolled out to the rest of the live users.
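The routing decision at the heart of a canary can be as simple as stable bucketing. The `route` function below is hypothetical (not part of any cloud SDK); CRC32 bucketing keeps each user pinned to the same version for a given rollout percentage:

```python
import zlib

def route(user_id: str, canary_percent: int) -> str:
    """Send roughly canary_percent of users to the canary version.

    CRC32 is a stable hash, so a given user sees the same version on
    every request as long as the rollout percentage is unchanged."""
    bucket = zlib.crc32(user_id.encode()) % 100
    return "canary" if bucket < canary_percent else "stable"


# At 0% nobody reaches the canary; at 100% everyone does.
assert route("alice", 0) == "stable"
assert route("alice", 100) == "canary"
```

Gradual rollout then just means raising `canary_percent` in steps (5, 10, 25, 50, 100) while watching error rates.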


Benefits include:

• If any error is reported, we can safely roll back to the older version
• No downtime in case of an issue
• Multiple layers of testing, including internal testing and capacity testing of the new version in a production environment
• The impact of the new version can be monitored before it reaches everyone
• Based on business need, the release can be limited to a specific geographic region
• Based on business need, a new feature can be released to specific users and groups

Azure

To achieve this, I created two identical deployment slots in Azure: one is the Production slot and the other is the Canary slot. The deployment slot traffic percentage (Traffic %) feature makes this methodology possible. After the slot has been created, you will notice that the percentage of traffic going to the main slot is 100 percent while the new slot is set to 0 percent. When you change the Canary slot's traffic percentage, the system splits incoming requests to the production URL between the production slot and the canary slot. In other words, the requested percentage of users will see the new changes, while the rest will see the old ones.
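Under the hood, App Service pins a client to a slot with the `x-ms-routing-name` cookie after the first, percentage-based assignment. The sketch below models that behavior; the slot names ("production"/"canary") are illustrative, as the real cookie carries the actual slot name (or "self" for the production slot):

```python
import random

def assign_slot(request_cookies: dict, canary_percent: float) -> str:
    """Toy model of App Service Traffic % routing: the first request is
    assigned randomly by percentage, then a cookie pins the client to a slot."""
    if "x-ms-routing-name" in request_cookies:
        # Returning client: stay pinned to the previously assigned slot.
        return request_cookies["x-ms-routing-name"]
    slot = "canary" if random.random() * 100 < canary_percent else "production"
    request_cookies["x-ms-routing-name"] = slot  # pin subsequent requests
    return slot


cookies = {}
first = assign_slot(cookies, 20)
assert assign_slot(cookies, 20) == first  # same client, same slot every time
```

The cookie pinning matters: without it, a user could bounce between old and new versions on every request.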

AWS

This is available for Lambda (serverless) with CodeDeploy. There are predefined deployment configurations such as 'LambdaCanary10Percent5Minutes', which shifts traffic in two increments (10 percent of traffic is routed to the new Lambda version first, and the rest follows after 5 minutes), and 'LambdaLinear10PercentEvery1Minute', which shifts traffic incrementally (10 percent more traffic goes to the new version every minute). You can also customize the configuration to suit your needs.
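The difference between the two predefined configurations is easy to see if we tabulate the traffic share per minute. The `traffic_schedule` function below is a toy model of those schedules, not a CodeDeploy API:

```python
def traffic_schedule(config: str, total_minutes: int) -> list:
    """Percent of traffic on the new Lambda version at each minute,
    mimicking two of CodeDeploy's predefined deployment configurations."""
    if config == "LambdaCanary10Percent5Minutes":
        # Two increments: 10% immediately, then everything after 5 minutes.
        return [10 if minute < 5 else 100 for minute in range(total_minutes)]
    if config == "LambdaLinear10PercentEvery1Minute":
        # Ten increments: 10% more traffic each minute until 100%.
        return [min(10 * (minute + 1), 100) for minute in range(total_minutes)]
    raise ValueError(f"unknown config: {config}")


assert traffic_schedule("LambdaCanary10Percent5Minutes", 6) == [10, 10, 10, 10, 10, 100]
assert traffic_schedule("LambdaLinear10PercentEvery1Minute", 3) == [10, 20, 30]
```

The canary variant gives you a long observation window at a small blast radius; the linear variant trades that window for a steadier, more predictable ramp.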
