Welcome back to yet another learning experience with me.
In this tutorial, you are going to learn how to use Terraform to deploy a highly available and resilient cloud environment on AWS: an Auto Scaling Group (ASG) spanning two Availability Zones in the private subnets of a custom VPC, fronted by an Application Load Balancer (ALB) in the public subnets, with the appropriate gateway and route table configurations.
By the end of the tutorial, you'll have a firm grasp on how to use Terraform to set up a dependable and scalable cloud architecture that can manage heavy traffic loads and sustain uptime even in the case of component failures.
Prerequisites
- Familiarity with fundamental Terraform concepts and commands
- Understanding of the Linux command line
- IAM user with administrative rights on an AWS account
- An Amazon EC2 key pair
Case Study
An e-commerce business, JULS EXPRESS, must handle a surge in traffic over the holiday season. The organization wants to make sure that its website remains accessible and responsive to users even during periods of high traffic.
Your employer has tasked you with setting up an AWS cloud infrastructure that provides high availability and resilience. You decide to set up an EC2 Auto Scaling Group (ASG) in private subnets, fronted by an Application Load Balancer (ALB) in public subnets, using Terraform to automate the infrastructure. The ASG will automatically scale up or down based on traffic, ensuring that the website is always responsive to users.
Objectives
- Create a custom Virtual Private Cloud (VPC) with two public subnets and two private subnets across two different Availability Zones.
- Attach an Internet Gateway and create a NAT Gateway in a public subnet to enable outbound internet traffic.
- Create a public route table and a private route table.
- Launch the ALB in the public subnets.
- Launch the Auto Scaling Group in the private subnets.
- Output the ALB's public DNS name, then use its URL to confirm that the web servers can be reached.
Without any further delay, let us begin the tutorial.
1. Create and configure a custom AWS VPC
Our first goal is to create a custom VPC in our AWS environment to hold all of our resources. To do this, we must define a logically isolated CIDR block and carefully choose the subnets, and the Availability Zones (AZs) to deploy them in.
Let's examine the code in the Terraform file below that creates and configures the custom VPC —
# Create an AWS VPC
resource "aws_vpc" "terraform-vpc" {
  cidr_block       = var.vpc-cidr
  instance_tenancy = "default"
  tags = {
    Name = var.vpc_name
  }
}
# Create first public subnet in the VPC
resource "aws_subnet" "pub-sub1" {
  vpc_id                  = aws_vpc.terraform-vpc.id
  cidr_block              = var.pub_sub1_cidr
  availability_zone       = var.availability_zone-1
  map_public_ip_on_launch = true
  tags = {
    Name = var.pub-sub1-name
  }
}
# Create second public subnet in the VPC
resource "aws_subnet" "pub-sub2" {
  vpc_id                  = aws_vpc.terraform-vpc.id
  cidr_block              = var.pub_sub2_cidr
  availability_zone       = var.availability_zone-2
  map_public_ip_on_launch = true
  tags = {
    Name = var.pub-sub2-name
  }
}
# Create first private subnet in the VPC
resource "aws_subnet" "priv-sub1" {
  vpc_id                  = aws_vpc.terraform-vpc.id
  cidr_block              = var.priv_sub1_cidr
  availability_zone       = var.availability_zone-1
  map_public_ip_on_launch = false # Private subnets must not auto-assign public IPs
  tags = {
    Name = var.priv-sub1-name
  }
}
# Create second private subnet in the VPC
resource "aws_subnet" "priv-sub2" {
  vpc_id                  = aws_vpc.terraform-vpc.id
  cidr_block              = var.priv_sub2_cidr
  availability_zone       = var.availability_zone-2
  map_public_ip_on_launch = false # Private subnets must not auto-assign public IPs
  tags = {
    Name = var.priv-sub2-name
  }
}
Code Interpretation
Here, an AWS VPC is created with a specific CIDR block and default instance tenancy. Two public and two private subnets are then created; each subnet is connected to the VPC, has its own CIDR block, and is placed in a specific Availability Zone.
Instances launched in the public subnets will automatically receive a public IP address, since those subnets have the map_public_ip_on_launch attribute set to true. That attribute is set to false for the private subnets, so instances launched there won't be reachable from the internet and can only reach out through the NAT Gateway we'll set up in the next step. A tag assigns a name to each subnet.
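The subnet and AZ variables referenced above are not shown in the article; as a sketch, assuming a 10.0.0.0/16 VPC split into /24 subnets in us-east-1 (adjust the values to your region and addressing plan), the corresponding variables.tf entries could look like this:

```hcl
# Illustrative variables.tf values (assumed, not from the original file)
variable "vpc-cidr" { default = "10.0.0.0/16" }
variable "vpc_name" { default = "terraform-vpc" }
variable "pub_sub1_cidr" { default = "10.0.1.0/24" }
variable "pub_sub2_cidr" { default = "10.0.2.0/24" }
variable "priv_sub1_cidr" { default = "10.0.3.0/24" }
variable "priv_sub2_cidr" { default = "10.0.4.0/24" }
variable "availability_zone-1" { default = "us-east-1a" }
variable "availability_zone-2" { default = "us-east-1b" }
variable "pub-sub1-name" { default = "Public-Subnet-1" }
variable "pub-sub2-name" { default = "Public-Subnet-2" }
variable "priv-sub1-name" { default = "Private-Subnet-1" }
variable "priv-sub2-name" { default = "Private-Subnet-2" }
```

Since the four /24 blocks don't overlap and each sits inside the /16, this carves the VPC into one public and one private subnet per AZ.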
2. Set up an Internet Gateway and a NAT Gateway in the public subnet
Before placing resources in the VPC, it is essential to make sure they are securely and effectively connected to the internet. One way to do this is to create an Internet Gateway and a NAT Gateway in a public subnet.
An Internet Gateway is a VPC component that enables connectivity between instances in a VPC and the internet. A NAT Gateway, meanwhile, allows instances in private subnets to initiate connections to the internet or other AWS services while blocking inbound connections from the internet.
Examine the code in the Terraform file below, which creates and configures an Internet Gateway and a NAT Gateway —
# Create an internet gateway and associate it with the VPC
resource "aws_internet_gateway" "terraform-igw" {
  vpc_id = aws_vpc.terraform-vpc.id
  tags = {
    Name = var.igw-name
  }
}
# Create an Elastic IP address
resource "aws_eip" "ngw-eip" {
  vpc = true
}
# Create a NAT gateway and associate it with an Elastic IP and a public subnet
resource "aws_nat_gateway" "terraform-ngw" {
  allocation_id = aws_eip.ngw-eip.id     # Associate the NAT gateway with the Elastic IP
  subnet_id     = aws_subnet.pub-sub1.id # Associate the NAT gateway with a public subnet
  tags = {
    Name = var.nat-gw-name
  }
  depends_on = [aws_internet_gateway.terraform-igw] # Make sure the internet gateway is created before the NAT gateway
}
Code Interpretation
First, we set up an Internet Gateway and attach it to the custom VPC. We then allocate an Elastic IP address, create a NAT Gateway, and associate the NAT Gateway with both the Elastic IP and a public subnet. The depends_on argument ensures the Internet Gateway is created before the NAT Gateway.
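One version note, stated as an assumption about your provider: the vpc = true argument on aws_eip was deprecated in v4 of the Terraform AWS provider and removed in v5, so if terraform plan rejects it, the equivalent is:

```hcl
# Create an Elastic IP address (AWS provider v5+ syntax)
resource "aws_eip" "ngw-eip" {
  domain = "vpc" # replaces the deprecated "vpc = true" argument
}
```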
Great! Now that our NAT gateway and Internet gateway have been established, we can go on to Step 3 and configure our public and private route tables.
3. Set up a public route table and a private route table
We have now successfully set up the gateways and the custom VPC. The question now is how to control communication between the services and other components deployed in the VPC. At this point, we need to configure the flow of network traffic within the VPC, and setting up route tables to direct traffic between the subnets is an efficient way to accomplish this.
Let's look at the code below to see how to create and configure each of our route tables.
# Creates a public route table with a default route to the internet gateway
resource "aws_route_table" "pub-rt" {
  vpc_id = aws_vpc.terraform-vpc.id
  # Create a default route for the internet gateway with destination 0.0.0.0/0
  route {
    cidr_block = var.pub_rt_cidr
    gateway_id = aws_internet_gateway.terraform-igw.id
  }
  tags = {
    Name = var.pub-rt-name
  }
}
# Creates a private route table with a default route to the NAT gateway
resource "aws_route_table" "priv-rt" {
  vpc_id = aws_vpc.terraform-vpc.id
  route {
    cidr_block     = var.priv_rt_cidr
    nat_gateway_id = aws_nat_gateway.terraform-ngw.id # NAT gateways use nat_gateway_id, not gateway_id
  }
  tags = {
    Name = var.priv-rt-name
  }
}
# Associates the public route table with the public subnet 1
resource "aws_route_table_association" "pub-sub1-rt-ass" {
  subnet_id      = aws_subnet.pub-sub1.id
  route_table_id = aws_route_table.pub-rt.id
}
# Associates the public route table with the public subnet 2
resource "aws_route_table_association" "pub-sub2-rt-ass" {
  subnet_id      = aws_subnet.pub-sub2.id
  route_table_id = aws_route_table.pub-rt.id
}
# Associates the private route table with the private subnet 1
resource "aws_route_table_association" "priv-sub1-rt-ass" {
  subnet_id      = aws_subnet.priv-sub1.id
  route_table_id = aws_route_table.priv-rt.id
  # Wait for the private route table to be created before creating this association
  depends_on = [aws_route_table.priv-rt]
}
# Associates the private route table with the private subnet 2
resource "aws_route_table_association" "priv-sub2-rt-ass" {
  subnet_id      = aws_subnet.priv-sub2.id
  route_table_id = aws_route_table.priv-rt.id
  # Wait for the private route table to be created before creating this association
  depends_on = [aws_route_table.priv-rt]
}
Code Interpretation
A public route table and a private route table are created here. The public route table has a default route to the Internet Gateway, while the private route table has a default route to the NAT Gateway.
The two public subnets are associated with the public route table, and the two private subnets with the private route table. The depends_on arguments ensure the private route table exists before its associations are created. Tags give the route tables their names.
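The two route CIDR variables used above are just the catch-all default route; a minimal sketch of the assumed variables.tf entries:

```hcl
# Both route tables send all non-local traffic (0.0.0.0/0) to their gateway
variable "pub_rt_cidr" { default = "0.0.0.0/0" }
variable "priv_rt_cidr" { default = "0.0.0.0/0" }
```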
Now let's move on to Step 4, where we will launch the ALB in the public subnets.
4. Launch the ALB in the public subnets
Let's look into how to launch an ALB in the public subnets of the custom VPC. Setting up an ALB involves several steps: creating a Security Group and configuring a Target Group, a listener, and routing rules.
Review the Terraform code below, which sets up the Security Group attached to our ALB —
# Create security group for ALB
resource "aws_security_group" "alb-sg" {
  # Set name and description of the security group
  name        = var.alb_sg_name
  description = var.alb_sg_description
  # Set the VPC ID where the security group will be created
  vpc_id     = aws_vpc.terraform-vpc.id
  depends_on = [aws_vpc.terraform-vpc]
  # Inbound Rule
  # HTTP access from anywhere
  ingress {
    description = "Allow HTTP Traffic"
    from_port   = var.http_port
    to_port     = var.http_port
    protocol    = "tcp"
    cidr_blocks = var.alb_sg_ingress_cidr_blocks
  }
  # SSH access from anywhere
  ingress {
    description = "Allow SSH Traffic"
    from_port   = var.ssh_port
    to_port     = var.ssh_port
    protocol    = "tcp"
    cidr_blocks = var.alb_sg_ingress_cidr_blocks
  }
  # Outbound Rule
  # Allow all egress traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = var.alb_sg_egress_cidr_blocks
  }
  # Set tags for the security group
  tags = {
    Name = "ALB SG"
  }
}
Code Interpretation
In this Terraform code, we set up the Security Group that will be attached to the ALB. The Security Group is given a name and description and is attached to the custom VPC.
It has ingress rules that permit HTTP and SSH traffic from a given CIDR block, and an egress rule that permits all outbound traffic. We also give the Security Group a Name tag to make it easier to identify.
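The port and CIDR variables used by the Security Group aren't shown in the article; a minimal sketch, assuming standard HTTP/SSH ports and internet-wide access:

```hcl
# Illustrative variables.tf values (assumed, not from the original file)
variable "http_port" { default = 80 }
variable "ssh_port" { default = 22 }
variable "alb_sg_ingress_cidr_blocks" { default = ["0.0.0.0/0"] }
variable "alb_sg_egress_cidr_blocks" { default = ["0.0.0.0/0"] }
```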
Next, let's create and configure the ALB itself; review the code below.
# Create a new load balancer
resource "aws_lb" "pub-sub-alb" {
  name            = var.load_balancer_name
  subnets         = [aws_subnet.pub-sub1.id, aws_subnet.pub-sub2.id]
  security_groups = [aws_security_group.alb-sg.id]
  tags = {
    Name = "Pub-Sub-ALB"
  }
}
# Create a target group for the load balancer
resource "aws_lb_target_group" "alb-tg" {
  name     = var.target_group_name
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.terraform-vpc.id
  # Set the health check configuration for the target group
  health_check {
    interval = 60
    path     = "/"
    port     = 80
    timeout  = 45
    protocol = "HTTP"
    matcher  = "200,202"
  }
}
# Create ALB listener
resource "aws_lb_listener" "alb-listener" {
  load_balancer_arn = aws_lb.pub-sub-alb.arn
  port              = 80
  protocol          = "HTTP"
  # Set the default action for the listener
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.alb-tg.arn
  }
}
Code Interpretation
Here, we create the ALB and the resources it needs. The ALB is created first, with its name, subnets, and security groups specified.
We then create a Target Group, with its name, port, protocol, and health-check settings, for the ALB to direct traffic to.
The last step is to create a listener for the ALB, defining the port and protocol to use and setting the default action to forward traffic to the Target Group created above. As a result, traffic is routed to the correct resources based on the Target Group configuration and the listener rules.
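Since we'll need the ALB's DNS name later to reach the site, it helps to expose it as a Terraform output; this block is an assumed addition, not part of the original listing:

```hcl
# Print the ALB's public DNS name after "terraform apply"
output "alb_dns_name" {
  value = aws_lb.pub-sub-alb.dns_name
}
```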
5. Launch the Auto Scaling Group in the private subnets
While Auto Scaling Groups (ASGs) are often deployed in public subnets, private subnets are frequently chosen in situations involving sensitive data or regulatory compliance.
Let's examine the code that launches an EC2 ASG across two Availability Zones (AZs) in the private subnets —
# Creating Security Group for ASG Launch Template
resource "aws_security_group" "lt-sg" {
  name   = var.lt_sg_name
  vpc_id = aws_vpc.terraform-vpc.id
  # Inbound Rules
  # HTTP access from the ALB security group only
  ingress {
    from_port       = var.http_port
    to_port         = var.http_port
    protocol        = "tcp"
    security_groups = [aws_security_group.alb-sg.id]
  }
  # SSH access from the ALB security group only
  ingress {
    from_port       = var.ssh_port
    to_port         = var.ssh_port
    protocol        = "tcp"
    security_groups = [aws_security_group.alb-sg.id]
  }
  # Outbound Rules
  # Internet access to anywhere
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = var.lt_sg_egress_cidr_blocks
  }
}
Code Interpretation
First, we create a Security Group for the ASG's Launch Template and attach it to the previously created custom VPC.
The Security Group only permits inbound HTTP and SSH traffic originating from the ALB's Security Group, defined earlier. This improves security, since only the ALB can reach our ASG instances. We also permit all outbound traffic, to a CIDR block given in a variable, so the instances can reach the internet through the NAT Gateway (for example, to download packages).
Shell Script
To prepare for creating our ASG, we will first write a bash script to be passed into the ASG's Launch Template. This script will serve as the user data for each EC2 instance, automating the installation and start-up of the Apache web server and the creation of a custom web page.
Go over the bash script below (feel free to edit the HTML part):
#!/bin/bash
# Update all yum package repositories
yum update -y
# Install Apache Web Server
yum install -y httpd.x86_64
# Start and enable Apache Web Server
systemctl start httpd.service
systemctl enable httpd.service
# Install EPEL for easy access to commonly used software packages
amazon-linux-extras install epel -y
# Install stress to let us test the EC2 instance under heavy workloads
yum install stress -y
# Write our custom webpage HTML to the "index.html" file
echo "<html><body><h1>Welcome to Juls Express!</h1></body></html>" > /var/www/html/index.html
Code Interpretation
In this script, all the yum package repositories are updated, and then the Apache web server package is installed with yum. We start the Apache service and enable it so it starts on boot.
The script also installs the Extra Packages for Enterprise Linux (EPEL) repository, which enables simple installation of widely used software packages.
It also installs the stress package, which can be used to test the instance under demanding workloads. Last but not least, the script replaces the default index.html file in the Apache document root /var/www/html/ with a custom webpage containing a "Welcome to Juls Express!" message.
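The Launch Template and ASG resources themselves are not shown in the article; here is a minimal sketch of how they might tie the pieces together (the resource names, AMI and key-pair variables, instance type, group sizes, and the userdata.sh filename are all assumptions to adapt to your setup):

```hcl
# Launch Template: AMI, instance type, key pair, security group, and user data
resource "aws_launch_template" "asg-lt" {
  name_prefix            = "juls-express-"
  image_id               = var.ami_id   # e.g. an Amazon Linux 2 AMI
  instance_type          = "t2.micro"
  key_name               = var.key_name # your EC2 key pair
  vpc_security_group_ids = [aws_security_group.lt-sg.id]
  user_data              = filebase64("userdata.sh") # the bash script above
}

# ASG spanning both private subnets, registered with the ALB target group
resource "aws_autoscaling_group" "asg" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = [aws_subnet.priv-sub1.id, aws_subnet.priv-sub2.id]
  target_group_arns   = [aws_lb_target_group.alb-tg.arn]

  launch_template {
    id      = aws_launch_template.asg-lt.id
    version = "$Latest"
  }
}
```

Registering the ASG with the target group via target_group_arns is what lets the ALB health-check and route to instances as they come and go.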
Once you have created all of these files, you can run the following commands to deploy your highly available and resilient cloud environment:
terraform init
This command initializes the Terraform working directory and downloads the required provider plugins, in this case the AWS provider.
To see a list of all the changes Terraform will make, run the following command —
terraform plan
You should see a list of the changes Terraform will make to the infrastructure resources. A "+" sign indicates what will be added, and a "-" sign indicates what will be destroyed.
Let's put this infrastructure in place! To apply the changes and provision the resources, run the following command.
terraform apply -auto-approve
This command applies everything that has been written to your AWS account, and it also skips the interactive prompt that would otherwise require you to type "yes" to proceed with the deployment.
Excellent!
The run should end with an "Apply complete" message listing the total number of resources added, changed, and destroyed, along with a number of resource outputs.
Copy and save the ALB's DNS URL so you can visit the website from a browser.
Let's check the Management Console to confirm that our resources have been created.
6. Confirm that the ASG created the EC2 instances and that the target group is healthy
In the EC2 console, scroll down and select Target groups in the left pane. Select the newly created target group, scroll down, and verify that the instances' health status is healthy.
Great! Our ASG created the expected EC2 instances, and we have verified that all of our target group's health checks are passing. Now let's check whether our ALB can reach our ASG's web servers.
7. Confirm that the web servers can be reached from a browser
Copy the ALB's DNS URL and paste it into your preferred browser.
Note: When contacting the ALB, be sure to use "http://" rather than "https://", since we only configured an HTTP listener.
Congratulations!
You now know how to use Terraform to create a dependable, scalable cloud architecture that can manage heavy traffic loads and continue to function even when individual components fail.
Destroy infrastructure and clean up
To tear down all the resources previously provisioned with Terraform, use the following command:
terraform destroy
Type "yes" when prompted and wait for it to finish. At the end, you should see a "Destroy complete" message along with the number of resources destroyed.
If you've read this far, I appreciate it. I hope you found this tutorial helpful. See you in the next one!