Mastering Zero-Downtime Deployments with Terraform

Introduction

In today's fast-paced digital world, keeping your applications running with minimal disruption is crucial. Zero-downtime deployment is a strategy that lets you roll out updates without ever taking your services offline, and one effective way to achieve it is Blue-Green Deployment. In this post, I'll walk you through a simple Terraform configuration that sets up a Blue-Green deployment on AWS using EC2, an Application Load Balancer, and Auto Scaling Groups, so your application updates cause no downtime.

Why Blue-Green Deployment?

Blue-Green deployment is a strategy that reduces downtime by maintaining two identical environments: one active (Blue) and one idle (Green). When deploying a new version of an application:

  1. The new version is deployed to the idle environment (Green).
  2. Once the new version is tested and verified, traffic is switched to the Green environment.
  3. The old environment (Blue) is then idle, and you can roll back to it if needed.

This method ensures that there’s no downtime during the switch.

Key Components in Our Setup

To implement Blue-Green Deployment with Terraform, we’ll need a few key resources:

  1. Application Load Balancer (ALB): The load balancer distributes traffic between the Blue and Green environments.
  2. Target Groups: We’ll set up two target groups—one for the Blue environment and one for the Green.
  3. Auto Scaling Groups: These will manage the EC2 instances in each environment, scaling them based on demand.
  4. Launch Configurations: These define the AMI, instance type, and other settings for the EC2 instances.

The Terraform Setup

Let's dive into the Terraform configuration, split into two files: main.tf for the main resources and variables.tf for input variables.

variables.tf – Input Variables

Here we define the input variables that make the setup flexible and customizable.

variable "environment" {
  description = "Deployment environment (blue or green)"
  type        = string
  default     = "blue"
}

variable "app_port" {
  description = "The port the application listens on"
  type        = number
  default     = 8080
}

variable "ami_id" {
  description = "AMI ID for EC2 instances"
  type        = string
  default = "ami-0e2c8caa4b6378d8c"
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}
  • environment: Specifies whether the deployment is for the Blue or Green environment.
  • ami_id: The AMI ID to be used for the EC2 instances.
  • instance_type: Defines the EC2 instance type.
  • app_port: The port on which your application listens (default is 8080).
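
Since these are ordinary Terraform input variables, you can override any of them without touching the code, for example through a terraform.tfvars file or -var flags. The snippet below is purely illustrative; swap in your own values:

environment   = "green"
app_port      = 8080
ami_id        = "ami-0e2c8caa4b6378d8c" # placeholder, use the AMI that carries your new application version
instance_type = "t2.micro"

Flipping environment to "green" here (or on the command line) is what drives the traffic switch you'll see in the listener below.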

main.tf – Core Resources

Now, let's define the core AWS resources that enable Blue-Green deployment.

terraform {
  required_version = ">= 1.0.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_security_group" "alb" {
  name = "bg-alb-sg"
}

resource "aws_security_group_rule" "allow_http_inbound" {
  type              = "ingress"
  security_group_id = aws_security_group.alb.id
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}

resource "aws_lb" "example" {
  name               = "blue-green-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = data.aws_subnets.default.ids
}

resource "aws_lb_target_group" "blue" {
  name     = "blue-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id
}

resource "aws_lb_target_group" "green" {
  name     = "green-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.example.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = var.environment == "blue" ? aws_lb_target_group.blue.arn : aws_lb_target_group.green.arn
  }
}

resource "aws_launch_configuration" "blue" {
  name          = "blue-launch-configuration"
  image_id      = var.ami_id
  instance_type = var.instance_type
  security_groups = [aws_security_group.alb.id]
}

resource "aws_launch_configuration" "green" {
  name          = "green-launch-configuration"
  image_id      = var.ami_id
  instance_type = var.instance_type
  security_groups = [aws_security_group.alb.id]
}

resource "aws_autoscaling_group" "blue" {
  desired_capacity     = 2
  max_size             = 3
  min_size             = 1
  vpc_zone_identifier  = data.aws_subnets.default.ids
  launch_configuration = aws_launch_configuration.blue.id
  target_group_arns    = [aws_lb_target_group.blue.arn]
}

resource "aws_autoscaling_group" "green" {
  desired_capacity     = 2
  max_size             = 3
  min_size             = 1
  vpc_zone_identifier  = data.aws_subnets.default.ids
  launch_configuration = aws_launch_configuration.green.id
  target_group_arns    = [aws_lb_target_group.green.arn]
}
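
One small addition I'd make alongside main.tf (it's not part of the original files) is an output for the load balancer's DNS name, so you can curl the ALB and verify whichever environment is currently live:

output "alb_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.example.dns_name
}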

Breakdown of Key Resources

  1. Security Groups: A security group allows inbound HTTP traffic (port 80) to the load balancer, plus outbound traffic so the ALB can reach the instances and run health checks.
  2. VPC and Subnets: Data sources are used to get the default VPC and subnets in the region for the Load Balancer and EC2 instances.
  3. Load Balancer: The Application Load Balancer (ALB) distributes traffic between the Blue and Green target groups.
  4. Target Groups: Two target groups are defined, one for each environment (Blue and Green); a health-check sketch for them follows this list.
  5. Auto Scaling Groups: Each environment (Blue and Green) has its own Auto Scaling Group with launch configurations, ensuring the appropriate EC2 instances are deployed.
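
One thing worth calling out: the target groups above rely on AWS's default health checks, and they register instances on port 80 even though app_port defaults to 8080, so make sure those line up (point the target group port at var.app_port, or have the application listen on 80). For a blue-green switch you also want the ALB to confirm the Green instances are genuinely healthy before they take traffic, so I'd define the health check explicitly. A minimal sketch for the Green target group (the path and thresholds are assumptions, tune them for your app; Blue would get the same block):

resource "aws_lb_target_group" "green" {
  name     = "green-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path                = "/"            # assumed health endpoint
    port                = "traffic-port"
    protocol            = "HTTP"
    healthy_threshold   = 2
    unhealthy_threshold = 2
    interval            = 15
    timeout             = 5
    matcher             = "200"
  }
}

You could also set health_check_type = "ELB" on the Auto Scaling Groups so instances that fail these checks get replaced automatically.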

Zero Downtime Deployment with Terraform

The magic of Zero Downtime Deployment lies in the way we manage traffic and scaling:

  • Traffic Switching: The load balancer listener's default action routes traffic to the active environment (Blue or Green) based on the environment variable. Once the new environment is ready and verified, you switch traffic from Blue to Green simply by re-applying with the variable flipped, with no downtime; a weighted-routing variation is sketched right after this list.
  • Auto Scaling: The Auto Scaling Groups keep the right number of instances running in each environment, scaling up or down based on demand.
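
If you'd rather shift traffic gradually instead of flipping it all at once, the same listener can use a weighted forward action. This isn't part of the configuration above, just a variation worth knowing about; the weights are arbitrary and you'd adjust them as Green proves itself:

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.example.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"

    forward {
      # Keep most traffic on Blue while Green warms up, then shift the weights.
      target_group {
        arn    = aws_lb_target_group.blue.arn
        weight = 90
      }

      target_group {
        arn    = aws_lb_target_group.green.arn
        weight = 10
      }
    }
  }
}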

Conclusion

With this simple Terraform configuration, you can implement a Blue-Green deployment strategy on AWS, achieving zero downtime during your updates. By utilizing Terraform’s infrastructure-as-code approach, you can ensure your deployments are repeatable, reliable, and scalable, allowing you to focus on delivering value to your users rather than worrying about downtime.

Let me know how you’ve implemented similar strategies, or if you have any questions about the code! Happy deploying! 🚀
