Some time back, I completed a project on high availability architecture on AWS and another with CloudFormation. In this article, I am going to show how to deploy a two-tier AWS architecture with Terraform based on the following specifications. Since I already knew which AWS resources were needed, the project was a little easier for me to complete.
Objectives:
- Custom VPC with CIDR 10.0.0.0/16.
- Two Public Subnets with CIDR 10.0.1.0/24 and 10.0.2.0/24 in different Availability Zones for high availability.
- Two Private Subnets with CIDR 10.0.3.0/24 and 10.0.4.0/24 in different Availability Zones.
- One RDS MySQL instance (db.t2.micro) in one of the two Private Subnets.
- One Application Load Balancer (external, internet-facing) that directs traffic to the Public Subnets.
- One EC2 t2.micro instance in each Public Subnet.
Terraform is HashiCorp’s open-source infrastructure as code tool. It lets you define resources and infrastructure in human-readable, declarative configuration files, rather than through a graphical user interface. It can manage infrastructure on multiple platforms, including Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), Kubernetes, GitHub, Splunk, and Datadog, just to name a few.
Resources Used:
I used the Terraform documentation and Derek Morgan’s course for this project.
Pre-Requisites:
- Access to AWS Console with an AWS Account (not root account).
- Cloud9 IDE
- GitHub account
Let’s begin!
Go to the Cloud9 IDE and run these commands in this order.
mkdir <directory>
cd <directory>
touch variables.tf
touch main.tf
touch secrets.tfvars
1. variables.tf
This file contains the input variables; by assigning values to these variables, end users can customize the configuration. I used Customize Terraform Configuration with Variables as a reference for creating this file.
You can see my complete code for variables.tf in my GitHub repository.
# --- root/Terraform_projects/terraform_two_tier_architecture/variables.tf

# custom VPC variable
variable "vpc_cidr" {
  description = "custom vpc CIDR notation"
  type        = string
  default     = "10.0.0.0/16"
}

# public subnet 1 variable
variable "public_subnet1" {
  description = "public subnet 1 CIDR notation"
  type        = string
  default     = "10.0.1.0/24"
}

# public subnet 2 variable
variable "public_subnet2" {
  description = "public subnet 2 CIDR notation"
  type        = string
  default     = "10.0.2.0/24"
}

# private subnet 1 variable
variable "private_subnet1" {
  description = "private subnet 1 CIDR notation"
  type        = string
  default     = "10.0.3.0/24"
}

# private subnet 2 variable
variable "private_subnet2" {
  description = "private subnet 2 CIDR notation"
  type        = string
  default     = "10.0.4.0/24"
}

# AZ 1
variable "az1" {
  description = "availability zone 1"
  type        = string
  default     = "us-east-1a"
}

# AZ 2
variable "az2" {
  description = "availability zone 2"
  type        = string
  default     = "us-east-1b"
}

# ec2 instance ami for Linux
variable "ec2_instance_ami" {
  description = "ec2 instance ami id"
  type        = string
  default     = "ami-090fa75af13c156b4"
}

# ec2 instance type
variable "ec2_instance_type" {
  description = "ec2 instance type"
  type        = string
  default     = "t2.micro"
}

# db engine
variable "db_engine" {
  description = "db engine"
  type        = string
  default     = "mysql"
}

# db engine version
variable "db_engine_version" {
  description = "db engine version"
  type        = string
  default     = "5.7"
}

# db name
variable "db_name" {
  description = "db name"
  type        = string
  default     = "my_db"
}

# db instance class
variable "db_instance_class" {
  description = "db instance class"
  type        = string
  default     = "db.t2.micro"
}

# database username variable
variable "db_username" {
  description = "database admin username"
  type        = string
  sensitive   = true
}

# database password variable
variable "db_password" {
  description = "database admin password"
  type        = string
  sensitive   = true
}
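Any of these defaults can be overridden without editing the file. For example (the values below are hypothetical, just to illustrate), you could pass a different CIDR or instance type on the command line along with the secrets file created earlier:
terraform plan -var-file="secrets.tfvars" -var="vpc_cidr=10.1.0.0/16" -var="ec2_instance_type=t3.micro"
Terraform will also pick up values automatically from a terraform.tfvars file or from TF_VAR_* environment variables.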
2. main.tf
This defines the web application, including a Provider block, a VPC with its networking components, a Security block, an Instances block (EC2 instances for compute and an RDS instance for the database), an Application Load Balancer block, and an Outputs block.
As you can see from the above, I am using many AWS services, and it can get difficult to navigate the file and follow the flow of the code. For that reason, I have broken the main.tf file down into small .tf snippets as gists for readability. I am not using a modular structure for this project.
Provider block: — provider.tf
VPC components: — vpc.tf
Security block: — sg.tf
Instances block: — ec2_rds.tf
Application Load Balancer: — alb.tf
Outputs block: — outputs.tf
However, you can see my complete code for main.tf in my GitHub repository.
Provider block: — provider.tf
- the name of the provider is aws
- source is defined as hashicorp/aws, which is shorthand for registry.terraform.io/hashicorp/aws
- a version constraint is set to ~> 4.23
- the region is us-east-1
# PROVIDER BLOCK
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.23"
    }
  }
  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-east-1"
}
VPC Block: — vpc.tf
This creates the VPC with CIDR 10.0.0.0/16 and its networking components: subnets (two public with CIDR blocks 10.0.1.0/24 and 10.0.2.0/24 in different Availability Zones for high availability, and two private with CIDR blocks 10.0.3.0/24 and 10.0.4.0/24), an internet gateway, and a route table with its subnet associations.
# VPC BLOCK
# creating VPC
resource "aws_vpc" "custom_vpc" {
  cidr_block = var.vpc_cidr

  tags = {
    name = "custom_vpc"
  }
}

# public subnet 1
resource "aws_subnet" "public_subnet1" {
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = var.public_subnet1
  availability_zone = var.az1

  tags = {
    name = "public_subnet1"
  }
}

# public subnet 2
resource "aws_subnet" "public_subnet2" {
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = var.public_subnet2
  availability_zone = var.az2

  tags = {
    name = "public_subnet2"
  }
}

# private subnet 1
resource "aws_subnet" "private_subnet1" {
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = var.private_subnet1
  availability_zone = var.az1

  tags = {
    name = "private_subnet1"
  }
}

# private subnet 2
resource "aws_subnet" "private_subnet2" {
  vpc_id            = aws_vpc.custom_vpc.id
  cidr_block        = var.private_subnet2
  availability_zone = var.az2

  tags = {
    name = "private_subnet2"
  }
}

# creating internet gateway
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    name = "igw"
  }
}

# creating route table
resource "aws_route_table" "rt" {
  vpc_id = aws_vpc.custom_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    name = "rt"
  }
}

# tags are not allowed here
# associate route table to the public subnet 1
resource "aws_route_table_association" "public_rt1" {
  subnet_id      = aws_subnet.public_subnet1.id
  route_table_id = aws_route_table.rt.id
}

# tags are not allowed here
# associate route table to the public subnet 2
resource "aws_route_table_association" "public_rt2" {
  subnet_id      = aws_subnet.public_subnet2.id
  route_table_id = aws_route_table.rt.id
}

# tags are not allowed here
# associate route table to the private subnet 1
resource "aws_route_table_association" "private_rt1" {
  subnet_id      = aws_subnet.private_subnet1.id
  route_table_id = aws_route_table.rt.id
}

# tags are not allowed here
# associate route table to the private subnet 2
resource "aws_route_table_association" "private_rt2" {
  subnet_id      = aws_subnet.private_subnet2.id
  route_table_id = aws_route_table.rt.id
}
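One caveat: all four subnets above are associated with the same route table, which has a 0.0.0.0/0 route to the internet gateway, so the "private" subnets also end up with a route to the internet. If you want them to stay truly private, one option (my suggestion, not part of the original gists) is to drop the two private associations above and point those subnets at a separate route table with no internet route, roughly like this:
# optional: a dedicated route table for the private subnets with only the local VPC route
resource "aws_route_table" "private_rt" {
  vpc_id = aws_vpc.custom_vpc.id

  tags = {
    name = "private_rt"
  }
}

# these would replace the private_rt1 / private_rt2 associations above
resource "aws_route_table_association" "private_assoc1" {
  subnet_id      = aws_subnet.private_subnet1.id
  route_table_id = aws_route_table.private_rt.id
}

resource "aws_route_table_association" "private_assoc2" {
  subnet_id      = aws_subnet.private_subnet2.id
  route_table_id = aws_route_table.private_rt.id
}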
Security Block: — sg.tf
This creates security groups for the load balancer (web_sg), the web servers (webserver_sg), and the database (database_sg).
# SECURITY BLOCK
# create security groups for the ALB (web_sg), webserver, and database

# custom vpc / ALB security group
resource "aws_security_group" "web_sg" {
  name        = "web_sg"
  description = "allow inbound HTTP traffic"
  vpc_id      = aws_vpc.custom_vpc.id

  # HTTP from anywhere
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound rules
  # internet access to anywhere
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    name = "web_sg"
  }
}

# web tier security group
resource "aws_security_group" "webserver_sg" {
  name        = "webserver_sg"
  description = "allow inbound traffic from ALB"
  vpc_id      = aws_vpc.custom_vpc.id

  # allow inbound HTTP traffic from the ALB security group
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.web_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    name = "webserver_sg"
  }
}

# database security group
resource "aws_security_group" "database_sg" {
  name        = "database_sg"
  description = "allow inbound MySQL traffic from the web tier"
  vpc_id      = aws_vpc.custom_vpc.id

  # allow MySQL traffic from the webserver security group
  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.webserver_sg.id]
  }

  egress {
    from_port   = 32768
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    name = "database_sg"
  }
}
Instances Block: — ec2_rds.tf
This creates the EC2 instances in the public subnets as the "web tier" and the RDS instance within a private subnet as the "database tier". The multi-AZ parameter is not enabled, as it is not a requirement for this project.
This block takes longer to create (about 5-6 minutes, mostly for the RDS instance), so be patient!
# INSTANCES BLOCK - EC2 and DATABASE
# user_data = file("install_apache.sh")
# note: using the file() option gave a multi-line argument error
# because the echo statement is long

# 1st ec2 instance on public subnet 1
resource "aws_instance" "ec2_1" {
  ami                    = var.ec2_instance_ami
  instance_type          = var.ec2_instance_type
  availability_zone      = var.az1
  subnet_id              = aws_subnet.public_subnet1.id
  vpc_security_group_ids = [aws_security_group.webserver_sg.id]

  user_data = <<EOF
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
EC2AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo '<center><h1>This Amazon EC2 instance is located in Availability Zone: AZID </h1></center>' > /var/www/html/index.txt
sed "s/AZID/$EC2AZ/" /var/www/html/index.txt > /var/www/html/index.html
EOF

  tags = {
    name = "ec2_1"
  }
}

# 2nd ec2 instance on public subnet 2
resource "aws_instance" "ec2_2" {
  ami                    = var.ec2_instance_ami
  instance_type          = var.ec2_instance_type
  availability_zone      = var.az2
  subnet_id              = aws_subnet.public_subnet2.id
  vpc_security_group_ids = [aws_security_group.webserver_sg.id]

  user_data = <<EOF
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
EC2AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo '<center><h1>This Amazon EC2 instance is located in Availability Zone: AZID </h1></center>' > /var/www/html/index.txt
sed "s/AZID/$EC2AZ/" /var/www/html/index.txt > /var/www/html/index.html
EOF

  tags = {
    name = "ec2_2"
  }
}
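One note on this gist: the RDS part of ec2_rds.tf is not reproduced above, even though the outputs block below refers to aws_db_instance.my_db. A minimal sketch of that part, based on the variables defined earlier (the subnet group name, storage size, and skip_final_snapshot setting here are my assumptions, not necessarily the exact code in my repository), would look roughly like this:
# db subnet group spanning the two private subnets (required to place RDS inside the VPC)
resource "aws_db_subnet_group" "db_subnet_group" {
  name       = "db-subnet-group"
  subnet_ids = [aws_subnet.private_subnet1.id, aws_subnet.private_subnet2.id]
}

# RDS MySQL instance in the private subnets (multi_az left disabled)
resource "aws_db_instance" "my_db" {
  allocated_storage      = 10
  engine                 = var.db_engine
  engine_version         = var.db_engine_version
  instance_class         = var.db_instance_class
  db_name                = var.db_name
  username               = var.db_username
  password               = var.db_password
  db_subnet_group_name   = aws_db_subnet_group.db_subnet_group.name
  vpc_security_group_ids = [aws_security_group.database_sg.id]
  skip_final_snapshot    = true
}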
Application Load Balancer Block: — alb.tf
This creates an "internet-facing" external application load balancer that directs traffic to the EC2 instances in the public subnets.
# ALB BLOCK
# only alphanumeric characters and hyphens are allowed in the name

# alb target group
resource "aws_lb_target_group" "external_target_g" {
  name     = "external-target-group"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.custom_vpc.id
}

resource "aws_lb_target_group_attachment" "ec2_1_target_g" {
  target_group_arn = aws_lb_target_group.external_target_g.arn
  target_id        = aws_instance.ec2_1.id
  port             = 80
}

resource "aws_lb_target_group_attachment" "ec2_2_target_g" {
  target_group_arn = aws_lb_target_group.external_target_g.arn
  target_id        = aws_instance.ec2_2.id
  port             = 80
}

# ALB
resource "aws_lb" "external_alb" {
  name               = "external-ALB"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web_sg.id]
  subnets            = [aws_subnet.public_subnet1.id, aws_subnet.public_subnet2.id]

  tags = {
    name = "external-ALB"
  }
}

# create ALB listener
resource "aws_lb_listener" "alb_listener" {
  load_balancer_arn = aws_lb.external_alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.external_target_g.arn
  }
}
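The target group above relies on the default health check settings. If you want to control them explicitly, you could add a health_check block inside aws_lb_target_group.external_target_g (an optional addition of mine, not part of the original gist), for example:
  # optional: explicit health check settings inside the target group
  health_check {
    path                = "/"
    protocol            = "HTTP"
    matcher             = "200"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }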
Outputs Block: — outputs.tf
This file contains the definitions for the output values of the resources.
# OUTPUTS
# get the DNS of the load balancer
output "alb_dns_name" {
  description = "DNS name of the load balancer"
  value       = aws_lb.external_alb.dns_name
}

output "db_connect_string" {
  description = "MyRDS database connection string"
  value       = "server=${aws_db_instance.my_db.address}; database=ExampleDB; Uid=${var.db_username}; Pwd=${var.db_password}"
  sensitive   = true
}
3. secrets.tfvars
.gitignore is a text file that I created and placed in the root directory, called Terraform_Projects, of my GitHub repository. It tells Git which files or folders to ignore in a project.
I have added secrets.tfvars to the .gitignore so that it is ignored, as it contains the sensitive database username and password. I used Protect Sensitive Input Variables as a reference for creating this file.
# --- root/Terraform_projects/terraform_two_tier_architecture/secrets.tfvars
db_username = "xxxxx"
db_password = "xxxxxxxxxxxxxxxx"
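The .gitignore itself is not shown in this article; a typical one for a Terraform project like this (the exact entries below are my assumption, not a copy of my repository's file) would look something like:
# local .terraform directories
**/.terraform/*
# state files
*.tfstate
*.tfstate.*
# sensitive variable definitions
secrets.tfvars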
Now that all the code necessary for the project is written, it's time to test it using Terraform.
1. terraform init
terraform init is the first command that should be run. As the name indicates, it initializes a working directory containing Terraform configuration files.
2. terraform fmt
Run terraform fmt to format the code.
3. terraform validate
terraform validate validates the configuration files in a directory, referring only to the configuration and not accessing any remote services such as remote state or provider APIs. It ensures there are no syntax errors and checks for internal consistency, regardless of any provided variables or existing state. It is thus primarily useful for general verification of reusable modules, including the correctness of attribute names and value types.
4. terraform plan
Run terraform plan to show the execution plan for the resources being created.
5. terraform apply
Run terraform apply and type yes when prompted to execute the plan.
Because I am using secrets.tfvars for the sensitive database username/password variables (Terraform redacts these values from its output when you run a plan, apply, or destroy command), I have to pass that file explicitly:
terraform apply -var-file="secrets.tfvars"
6. See the outputs
Once the apply is complete, you will see these outputs at the end.
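You can paste the alb_dns_name value into a browser to verify that the web tier responds. The db_connect_string output is marked sensitive, so Terraform redacts it in that summary; if you need to read it, you can query it directly, for example:
terraform output -raw db_connect_string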
7. terraform state list
Run terraform state list to see the list of all the AWS resources that were created.
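The exact output depends on your configuration, but with the resources above it should include entries such as:
aws_vpc.custom_vpc
aws_subnet.public_subnet1
aws_subnet.public_subnet2
aws_subnet.private_subnet1
aws_subnet.private_subnet2
aws_internet_gateway.igw
aws_route_table.rt
aws_security_group.web_sg
aws_security_group.webserver_sg
aws_security_group.database_sg
aws_instance.ec2_1
aws_instance.ec2_2
aws_db_instance.my_db
aws_lb.external_alb
aws_lb_target_group.external_target_g
aws_lb_listener.alb_listener
along with the route table associations, target group attachments, and any other resources in your configuration.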
8. Also check the AWS resources in the AWS Console
VPCs
Subnets
EC2 Instances
RDS Database Instance
Application Load Balancer
9. terraform destroy
Run terraform destroy -var-file="secrets.tfvars" from the terminal to remove all the AWS resources so that you do not incur any AWS charges!
In short, Terraform is a great tool for creating a simple two-tier AWS architecture.
You can see all of my main.tf and variables.tf code for this project in my GitHub repository (secrets.tfvars is kept out of version control via .gitignore).