Deploying a highly available web application on AWS using Terraform involves several key components to ensure redundancy, scalability, and fault tolerance. This guide walks you through deploying a clustered web server on AWS with Terraform using an Auto Scaling Group (ASG) and an Application Load Balancer (ALB).
Learning Outcome
Master deploying highly available infrastructure with Terraform by learning how to:
- Set up an Auto Scaling Group (ASG) for managing server instances.
- Configure an Application Load Balancer to distribute traffic.
- Use a Launch Template for consistent instance provisioning.
- Test the deployment.
Overview of Components
- VPC: Create a Virtual Private Cloud to host your resources.
- Subnets: Use multiple availability zones with public and private subnets.
- Load Balancer: Use an Application Load Balancer (ALB) to distribute traffic.
- EC2 Instances: Launch instances in multiple availability zones.
- Auto Scaling Group: Automatically scale your instances based on demand.
- Security Groups: Control inbound and outbound traffic.
Terraform Script Overview
Below is the Terraform configuration file that sets up the infrastructure:
main.tf
provider "aws" {
region = "us-east-1"
}
data "aws_availability_zones" "available" {}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
}
resource "aws_subnet" "public" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = "10.0.${count.index + 1}.0/24"
availability_zone = element(data.aws_availability_zones.available.names, count.index)
map_public_ip_on_launch = true
}
resource "aws_route_table_association" "public" {
count = 2
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
resource "aws_subnet" "private" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = "10.0.${count.index + 3}.0/24"
availability_zone = element(data.aws_availability_zones.available.names, count.index)
}
resource "aws_security_group" "web_sg" {
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Consider restricting this in production
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Consider restricting this in production
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource "aws_lb" "app_lb" {
name = "app-lb" # Ensure this is unique
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.web_sg.id]
subnets = aws_subnet.public[*].id
# depends_on = [aws_security_group.web_sg]
}
resource "aws_launch_template" "app" {
name_prefix = "app-launch-template-"
image_id = "ami-0b0ea68c435eb488d"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
}
network_interfaces {
associate_public_ip_address = true
security_groups = [aws_security_group.web_sg.id]
}
}
resource "aws_autoscaling_group" "app_asg" {
desired_capacity = 2
max_size = 5
min_size = 2
vpc_zone_identifier = aws_subnet.private[*].id
launch_template {
id = aws_launch_template.app.id
version = "$Latest"
}
tag {
key = "Name"
value = "app-instance"
propagate_at_launch = true
}
}
resource "aws_lb_target_group" "app_tg" {
name = "app-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
health_check {
path = "/"
interval = 30
timeout = 5
healthy_threshold = 2
unhealthy_threshold = 2
}
}
resource "aws_lb_listener" "front_end" {
load_balancer_arn = aws_lb.app_lb.arn
port = 80
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.app_tg.arn
}
}
output "load_balancer_dns" {
value = aws_lb.app_lb.dns_name
}
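The configuration above keeps the group at a fixed desired capacity; the "scale based on demand" behaviour mentioned in the component overview needs a scaling policy attached to the ASG. Below is a minimal sketch using a target-tracking policy on average CPU utilization (the resource name cpu_target and the 60% target are illustrative choices, not part of the configuration above):

resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app_asg.name
  policy_type            = "TargetTrackingScaling"

  # Keep average CPU utilization across the group near 60%,
  # adding or removing instances between min_size and max_size.
  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}

Target tracking lets AWS adjust capacity automatically to hold the metric near the target, which is usually simpler than managing separate scale-out and scale-in alarms.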
Deploying the Infrastructure
Initialize Terraform:
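terraform init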
Review the Plan:
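terraform plan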
Apply the Configuration:
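terraform apply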
Confirm the action by typing yes when prompted.
Verifying the Deployment
- Load Balancer: To see the DNS name of the ALB, check the AWS console or read the Terraform output:
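terraform output load_balancer_dns

Opening that DNS name in a browser should return a response from one of the instances once they are serving content and pass the health check on /.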
- Auto Scaling Group: In the EC2 console, confirm that the ASG has launched the desired number of instances (2) across the availability zones and that they register as healthy targets. See the CLI check below for an alternative.
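If you prefer the command line, a quick check with the AWS CLI (assuming it is configured for the same account and region) lists the instances the group has launched:

aws autoscaling describe-auto-scaling-groups \
  --region us-east-1 \
  --query "AutoScalingGroups[].Instances[].{Id:InstanceId,State:LifecycleState,Health:HealthStatus}" \
  --output table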
Conclusion
By leveraging Terraform, you’ve created a scalable and robust web server deployment. The Auto Scaling Group ensures high availability, while the Load Balancer distributes traffic efficiently. This setup serves as a strong foundation for more advanced infrastructure automation.
Have you tried deploying an Auto Scaling web server? Share your thoughts and experiences in the comments!