DEV Community

Otu Udo

Managing High-Traffic Applications with AWS Elastic Load Balancer and Terraform

Introduction
Managing high-traffic applications can be a daunting challenge, especially when trying to ensure that the system is both scalable and highly available. As businesses grow and their user bases expand, the ability to scale infrastructure dynamically becomes crucial. AWS Elastic Load Balancer (ELB) is one of the most powerful tools for distributing incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, or IP addresses. When paired with Terraform, an infrastructure-as-code tool, it becomes even easier to provision, manage, and scale high-traffic applications.

In this article, we'll walk through how to manage high-traffic applications using AWS Elastic Load Balancer (ELB) with Terraform, focusing on building a scalable infrastructure with auto scaling and automatic traffic distribution.

What is AWS Elastic Load Balancer?

AWS Elastic Load Balancer (ELB) is a fully managed service that automatically distributes incoming application traffic across multiple targets to ensure that your application performs reliably. ELB helps achieve:

  • High Availability: Automatically routes traffic to healthy instances across multiple Availability Zones.
  • Scalability: Distributes traffic across multiple EC2 instances as demand grows.
  • Security: Handles SSL/TLS termination and secures communication between clients and your application.

AWS offers three types of load balancers:

  1. Application Load Balancer (ALB): Ideal for HTTP/HTTPS traffic and provides advanced routing capabilities.
  2. Network Load Balancer (NLB): Designed for handling TCP traffic and capable of scaling to millions of requests per second.
  3. Classic Load Balancer (CLB): The legacy load balancer, offering basic functionality for both HTTP and TCP traffic.

In this example, we will use Application Load Balancer (ALB), which is perfect for web applications and microservices, and configure it using Terraform.

Setting Up the Infrastructure

(Architecture diagram)

Let's walk through how to set up a high-traffic web application with auto-scaling and ELB using Terraform.

Step 1: Define the AWS Provider

First, we'll define the AWS provider in our main.tf file. This will ensure that Terraform can interact with AWS to create the required resources.

```hcl
provider "aws" {
  region = "us-east-1"
}
```
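It's also good practice to pin the provider version in a `terraform` block so that future provider releases don't change behavior unexpectedly. A minimal sketch (the version constraint below is illustrative, not from the original setup):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Illustrative constraint; pin to whatever version you have tested.
      version = "~> 5.0"
    }
  }
}
```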

Step 2: Create Security Groups

To allow traffic to your web servers, you need a security group that permits HTTP and SSH traffic.

```hcl
resource "aws_security_group" "web_sg" {
  name        = "web_sg"
  description = "Allow HTTP and SSH"

  # HTTP from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH from anywhere; in production, restrict this to your own IP range
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Step 3: Create Launch Template

The Launch Template defines the configuration for launching EC2 instances. This includes the AMI to use, instance type, SSH key for access, and the user data to install necessary software (like Apache).

```hcl
resource "aws_launch_template" "web_launch_template" {
  name          = "web-server-template"
  image_id      = "ami-0453ec754f44f9a4a" # Amazon Linux 2
  instance_type = "t2.micro"
  key_name      = var.key_name

  # Install and start Apache, then write a simple index page
  user_data = base64encode(<<-EOF
              #!/bin/bash
              sudo yum update -y
              sudo yum install -y httpd
              sudo systemctl start httpd
              sudo systemctl enable httpd
              echo "Hello from Server $(hostname)" > /var/www/html/index.html
              EOF
  )

  network_interfaces {
    # Reference the security group created in Step 2
    security_groups = [aws_security_group.web_sg.id]
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "web-server"
    }
  }
}
```
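The template above references `var.key_name`, which needs to be declared somewhere in your configuration. A minimal declaration, for example in a `variables.tf` file, could look like this (the variable name matches the reference above; supply the name of an existing key pair in your account):

```hcl
variable "key_name" {
  description = "Name of an existing EC2 key pair used for SSH access"
  type        = string
}
```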

Step 4: Create Auto Scaling Group

Now, let's create the Auto Scaling Group (ASG). This resource ensures that the number of EC2 instances scales based on demand.

```hcl
resource "aws_autoscaling_group" "web_asg" {
  desired_capacity = 2
  max_size         = 5
  min_size         = 1

  launch_template {
    id      = aws_launch_template.web_launch_template.id
    version = "$Latest"
  }

  vpc_zone_identifier = ["subnet-036596d61d8685f5a", "subnet-08a1646c47df46597"]

  target_group_arns = [aws_lb_target_group.web_lb_tg.arn]

  tag {
    key                 = "Name"
    value               = "web-server-asg"
    propagate_at_launch = true
  }
}
```
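Note that on its own the ASG only maintains the desired capacity. To actually scale in and out based on demand, you can attach a scaling policy. One common approach is target tracking on average CPU; a minimal sketch (the 50% target value is an assumed figure, tune it for your workload):

```hcl
resource "aws_autoscaling_policy" "web_cpu_policy" {
  name                   = "web-cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.web_asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    # Keep average CPU across the group near 50% (illustrative target).
    target_value = 50.0
  }
}
```

With target tracking, AWS adds or removes instances automatically to hold the metric near the target, so you don't have to wire up separate scale-out and scale-in alarms.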

Step 5: Create Load Balancer

Now, we define the Load Balancer (ALB). This resource distributes incoming traffic across the instances in your Auto Scaling Group.

```hcl
resource "aws_lb" "web_lb" {
  name               = "web-cluster-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web_sg.id]
  subnets            = ["subnet-036596d61d8685f5a", "subnet-08a1646c47df46597"]
}
```
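After applying, you'll want the load balancer's public DNS name to test the setup in a browser. A small output block makes Terraform print it at the end of `terraform apply`:

```hcl
output "alb_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.web_lb.dns_name
}
```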

Step 6: Create Target Group and Listener

Next, we create a Target Group and a Listener for the Load Balancer. The target group defines which EC2 instances the traffic will be directed to, while the listener defines the rules for forwarding the traffic.

```hcl
resource "aws_lb_target_group" "web_lb_tg" {
  name        = "web-target-group"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = "vpc-06c4398e7067f32b4"
  target_type = "instance"
}

resource "aws_lb_listener" "web_lb_listener" {
  load_balancer_arn = aws_lb.web_lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_lb_tg.arn
  }
}
```
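The target group above relies on default health checks. It's often worth configuring them explicitly so unhealthy instances are taken out of rotation quickly; a sketch of a `health_check` block that could be added inside the target group resource (the thresholds and interval are assumed values, adjust them for your application):

```hcl
  health_check {
    path                = "/"
    protocol            = "HTTP"
    healthy_threshold   = 2
    unhealthy_threshold = 3
    interval            = 15
    timeout             = 5
    matcher             = "200"
  }
```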

Step 7: Create CloudWatch Alarm

Finally, we create an SNS topic with an email subscription and a CloudWatch alarm to monitor CPU usage. If average CPU utilization reaches 80%, the alarm publishes to the topic, which sends an email notification.

```hcl
# SNS Topic for alarm notifications
resource "aws_sns_topic" "alarm_topic" {
  name = "high-cpu-alarm-topic"
}

# SNS Topic Subscription (Email)
resource "aws_sns_topic_subscription" "email_subscription" {
  topic_arn = aws_sns_topic.alarm_topic.arn
  protocol  = "email"
  endpoint  = "otumiky@gmail.com" # Replace with your email address
}

resource "aws_cloudwatch_metric_alarm" "ec2_cpu_utilization" {
  alarm_name          = "HighCPUUtilizationEC2"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 60
  statistic           = "Average"
  threshold           = 80

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web_asg.name
  }

  alarm_actions = [aws_sns_topic.alarm_topic.arn]
}
```

(Console screenshots: the SNS topic and the CloudWatch alarm)

Conclusion

Terraform makes it easy to define, deploy, and manage your infrastructure, enabling you to automate the provisioning of resources and scale your applications effectively. By implementing this solution, you’ll be able to handle traffic spikes seamlessly while maintaining optimal performance and uptime.
By leveraging auto-scaling, load balancing, and CloudWatch monitoring, you can ensure that your application remains available and responsive under varying levels of traffic.

