Introduction
Running a single server is a good start, but in the real world, a single server is a single point of failure. If that server crashes, or if it becomes overloaded from too much traffic, users will be unable to access your site. The solution is to run a cluster of servers, routing around servers that go down and adjusting the size of the cluster up or down based on traffic.
Managing such a cluster manually is a lot of work. Fortunately, you can let AWS take care of it for you by using an Auto Scaling Group (ASG). An ASG takes care of a lot of tasks for you completely automatically, including launching a cluster of EC2 Instances, monitoring the health of each instance, replacing failed instances and adjusting the size of the cluster in response to load.
- Create repository This is how I have organized my project:
terraform/
├── main.tf # All resource definitions
├── variables.tf # Variable declarations
└── README.md
- Define Variables To make your code more DRY _(Don't Repeat Yourself)_ and more configurable, Terraform allows you to define input variables.
variable "server_port" {
  description = "The port the server will use for HTTP requests"
  type        = number
  default     = 8080
}
variable "ami_id" {
  description = "Amazon Machine Image (AMI) ID for the EC2 instance"
  type        = string
  default     = "ami-0aaa636894689fa47"
}
variable "instance_type" {
  description = "EC2 instance type to launch"
  type        = string
  default     = "t2.micro" # assumed default; the launch template references var.instance_type
}
You'll define the above inside the variables.tf file.
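You don't have to edit variables.tf every time you want a different value; Terraform also reads values from a terraform.tfvars file or from -var flags at apply time. A small sketch with an illustrative value (the port below is just an example, not a recommendation):

# terraform.tfvars
server_port = 80

The same override can be passed on the command line with terraform apply -var "server_port=80".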
- Using Auto Scaling Group (ASG) The first step in creating an ASG is to create a launch template in main.tf.
resource "aws_launch_template" "instance1" {
  image_id               = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.instanceSecurityGroup.id]
  user_data = base64encode(<<-EOF
    #!/bin/bash
    echo "Hello, World" > index.html
    nohup busybox httpd -f -p ${var.server_port} &
    EOF
  )
  # Required when using a launch template with an auto scaling group.
  lifecycle {
    create_before_destroy = true
  }
}
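Note that the launch template references aws_security_group.instanceSecurityGroup, which also needs to be defined in main.tf. A minimal sketch of what that resource could look like, assuming you want to allow inbound HTTP traffic on the server port from anywhere:

resource "aws_security_group" "instanceSecurityGroup" {
  name = "terraform-example-instance"
  # Allow inbound traffic on the server port from any IPv4 address
  ingress {
    from_port   = var.server_port
    to_port     = var.server_port
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}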
Now you can create the ASG itself using the aws_autoscaling_group resource:
resource "aws_autoscaling_group" "asg1" {
  launch_template {
    id      = aws_launch_template.instance1.id
    version = "$Latest"
  }
  vpc_zone_identifier = data.aws_subnets.default.ids
  min_size            = 2
  max_size            = 5
  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}
There’s also one other parameter that you need to set on your ASG to make it work: vpc_zone_identifier. This parameter tells the ASG which VPC subnets the EC2 instances should be deployed into. Each subnet lives in an isolated AWS Availability Zone (that is, an isolated datacenter), so by deploying your instances across multiple subnets, you ensure that your service can keep running even if some of the datacenters have an outage. You could hardcode the list of subnets, but that wouldn't be maintainable or portable, so a better option is to use data sources to fetch the list of subnets in your AWS account.
data "aws_vpc" "default" {
  default = true
}
data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.default.id]
  }
}
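By default, the ASG decides whether an instance needs replacing using EC2 status checks. If you later put a load balancer in front of the cluster, you can switch to its health checks instead; a sketch of the optional arguments you could add to the asg1 resource:

  # Optional: tune how the ASG detects unhealthy instances
  health_check_type         = "EC2" # or "ELB" when fronted by a load balancer
  health_check_grace_period = 300   # seconds to wait before the first check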
- Creating EC2 Instances If this is a new project, you'll first need to run this command:
terraform init
However, if you've already initialized the project, you can apply the changes:
terraform apply
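To confirm what was created, it can help to expose the ASG name as a Terraform output (a hypothetical addition to the project, e.g. in an outputs.tf file):

output "asg_name" {
  description = "Name of the Auto Scaling Group"
  value       = aws_autoscaling_group.asg1.name
}

After the next terraform apply, running terraform output asg_name prints the generated name.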
Conclusion
And with that, we are able to deploy more than one instance. Throughout this series, we'll see how the right combination of Auto Scaling Groups, launch templates, security groups, and multi-AZ networking can transform a fragile single-instance setup into a resilient, self-healing system.
