IaC Provisioning A Three-Tier Application on AWS

GERALD IZUCHUKWU

IaC means Infrastructure as Code. For a while I struggled to understand this concept, and just when it seemed like I was starting to get it, I began confusing it with Platform as Code. IaC allows you to build, change, and manage your infrastructure in a safe, consistent, and repeatable way by defining resource configurations that you can version, reuse, and share. Put simply, Infrastructure as Code means provisioning your resources through code or configuration files, automating their creation in a single step instead of creating them one by one through the GUI or CLI. Terraform is a tool that helps us do exactly that.

Terraform is an open-source tool, owned by HashiCorp, that helps us provision infrastructure efficiently. It uses its own language, HCL (HashiCorp Configuration Language), to describe that infrastructure. Terraform can be used for many things, but since it is primarily an IaC provisioning tool, it also has its limitations.

This write-up focuses on using Terraform to provision a three-tier application on AWS. There are plenty of three-tier apps out there, maybe more detailed than this one, but what I intend to do here is explain every concept used, both to help me understand further (it takes a while for me to grasp a concept fully) and to help people like me who are still learning.

This write-up will use the following technologies:

Network

  1. VPC
  2. Subnet
  3. Route Table
  4. Internet Gateway
  5. NAT Gateway
  6. Security Groups

Compute

  1. Launch Template
  2. Key pair
  3. Elastic Load Balancer
  4. Target Groups
  5. Auto Scaling Groups

Database

  1. RDS Database
  2. Subnet Groups

Others

  1. IAM Role
  2. S3 Bucket

We are going to break this down into steps.

STEP 1: Upload our static files and logic code to Amazon S3 Bucket

To do this, we create an S3 bucket with two folders named frontend and backend. In the frontend folder we upload all our static files as well as our nginx.conf file; in the backend folder we upload all our logic code files.
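
If you would rather keep this step in code as well, the same bucket and uploads can be expressed in Terraform once the provider from Step 3 is in place. A minimal sketch, assuming the bucket name and local folder paths below (bucket names are globally unique, so pick your own):

resource "aws_s3_bucket" "artifacts" {
  bucket = "three-tier-app-artifacts-demo" # assumed name; must be globally unique
}

# Upload every file under ./frontend and ./backend, keeping the folder prefix
resource "aws_s3_object" "frontend" {
  for_each = fileset("${path.module}/frontend", "**")
  bucket   = aws_s3_bucket.artifacts.id
  key      = "frontend/${each.value}"
  source   = "${path.module}/frontend/${each.value}"
  etag     = filemd5("${path.module}/frontend/${each.value}")
}

resource "aws_s3_object" "backend" {
  for_each = fileset("${path.module}/backend", "**")
  bucket   = aws_s3_bucket.artifacts.id
  key      = "backend/${each.value}"
  source   = "${path.module}/backend/${each.value}"
  etag     = filemd5("${path.module}/backend/${each.value}")
}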

STEP 2: Configure AWS auth

You can configure AWS authentication in various ways; I will walk you through using an IAM user.

  • Visit AWS Management Console
  • Navigate to IAM
  • Add User, give the user a name
  • Attach the AdministratorAccess policy to the user
  • Review and create user
  • After creating the user, select the user, navigate to Security credentials, scroll down to Access keys, and click Create access key
  • Select a use case and add a description; the Access Key and Secret Access Key will be created. Download the CSV and save it in a secure folder
  • Now set the environment variables
export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_REGION="your_region"

STEP 3: Set up your terraform

I won't assume that you already have Terraform installed. If you do, that's fine; if you don't, follow the steps below.

  • Visit the Terraform download page
  • Select the download configuration for your operating system
  • Test that the installation was successful using "terraform -version"
  • Create a folder for this Terraform project and call it whatever you want. I will call mine "three-tier-app-projects". Change directory into the folder with cd three-tier-app-projects and create a file named main.tf
  • Add the terraform block and the aws provider block below to the file and save it. On the CLI, run terraform init. This command takes a while, so be patient
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region  = "us-east-1"
}

Providers are plugins that Terraform uses to create and manage your resources; providers include aws, docker, nginx, etc.
N.B.: For a simple project such as this, we can initialize Terraform with only the provider block, without the terraform block.

STEP 4: Setup Network Aspect

  • Create a terraform.tfvars file. This file is used to store the values of all our input variables
  • Create the VPC, the network that houses all our resources. Store the vpc_cidr value in terraform.tfvars, declare it with the variable keyword, and reference it in the aws_vpc block. Your main.tf and terraform.tfvars files should look like this
provider "aws" {
  region  = "us-east-1"
}

variable "env_prefix" {}
variable "avail_zone" {}
variable "vpc_cidr" {}

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "${var.env_prefix}_vpc"
  }
}

And the terraform.tfvars:

env_prefix    = "three-tier-demo"
avail_zone    = ["us-east-1a", "us-east-1b"]
vpc_cidr      = "10.0.0.0/16"
  • Run the following commands: terraform fmt, terraform validate, terraform plan, terraform apply

terraform fmt formats the files to follow a consistent structure
terraform validate checks whether the configuration is syntactically valid and internally consistent
terraform plan outputs all the resources that will be added, changed, or destroyed by the configuration
terraform apply applies the configuration after you confirm the prompt

If everything goes well, a VPC named three-tier-demo_vpc will be provisioned in the us-east-1 region

STEP 5: Let's finish creating our Network resources

  • Create Subnets for Public and Private resources in two availability zones for redundancy
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone       = var.avail_zone[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.env_prefix}_public_subnet-${count.index + 1}"
  }
}

resource "aws_subnet" "private" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index + 2)
  availability_zone       = var.avail_zone[count.index]
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.env_prefix}_private_subnet-${count.index + 1}"
  }
}

resource "aws_subnet" "db_private" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index + 4)
  availability_zone       = var.avail_zone[count.index]
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.env_prefix}_db_private_subnet-${count.index + 1}"
  }
}
  • Create the Internet Gateway and NAT Gateway
resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "${var.env_prefix}_igw"
  }
}
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id

  tags = {
    Name = "${var.env_prefix}_nat_gateway"
  }
}

resource "aws_eip" "nat" {
  vpc = true
  tags = {
    Name = "${var.env_prefix}_eip_nat"
  }
}

N.B.: The NAT Gateway and Elastic IP (EIP) are paid resources; there is no free tier for them. The NAT Gateway costs $0.05/hr, and the EIP costs $0.005/hr while it is not attached to a running EC2 instance.

  • Create public and private route tables and associate them with the subnets
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }


  tags = {
    Name = "${var.env_prefix}_public_route_table"
  }
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public_route_table.id
}

resource "aws_route_table" "private_route_table" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }

  tags = {
    Name = "${var.env_prefix}_private_route_table"
  }
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private_route_table.id
}

resource "aws_route_table_association" "db_private" {
  count          = 2
  subnet_id      = aws_subnet.db_private[count.index].id
  route_table_id = aws_route_table.private_route_table.id
}


What did we just do?
We created a VPC, with a CIDR block of 10.0.0.0/16, for all our resources to live in.
We created subnets, which are like smaller houses inside the VPC that hold one or more resources.
The public subnets (internet-facing) have an internet gateway route configured in their route table.
The private subnets (internal-facing) have a NAT gateway route configured in their route table.
The Elastic IP is for controlled access: with a NAT Gateway, instances in a private subnet do not have direct public IPs, which enhances security. By using an EIP with the NAT Gateway, you maintain control over how and when outbound internet access is granted without exposing private instances directly to the internet.

STEP 6: Security

For security, we can work at the subnet level (NACLs) or the instance level (security groups). For this project, we will use security groups. We need to create security groups for the load balancers, the instances, and the RDS instance. To make our solution more secure, we allow the web-tier instances to receive traffic only from the external load balancer. We also allow the app-tier instances to receive HTTP traffic only from the internal load balancer. Since the web tier and app tier have to communicate, we allow the internal load balancer to accept HTTP traffic only from the web-tier instances.

  • Create an SG for the external load balancer that allows HTTP traffic on port 80 and HTTPS traffic on port 443 (restricted here to var.my_ip_address, which you should declare and set in terraform.tfvars like the other variables)
resource "aws_security_group" "externalLoadBalancerSG" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.my_ip_address]
  }

  ingress {
    from_port   = 443 // https
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [var.my_ip_address]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "01. External LoadBalancer Security Group"
  }

}
  • Create an SG for the web-tier instances that allows traffic on port 80 only from the external load balancer's security group. You can add an SSH rule too, but it's best to use SSM Session Manager to access the instance terminal
resource "aws_security_group" "webserverSG" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.externalLoadBalancerSG.id]

  }
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "02. Web Server Security Group"
  }
}
  • Create a security group for the internal load balancer that allows HTTP traffic on port 80 only from the web-tier security group
resource "aws_security_group" "internalLoadBalancerSG" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.webserverSG.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "03. Internal Load Balancer Security Group"
  }
}
  • Create a security group for the app-tier instances that allows traffic on port 9662 (our NodeJS server port) only from the internal load balancer's security group
resource "aws_security_group" "appserverSG" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 9662
    to_port         = 9662
    protocol        = "tcp"
    security_groups = [aws_security_group.internalLoadBalancerSG.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "04. App Server Security Group"
  }
}
  • Finally, we create a security group for our database instances that allows inbound traffic on port 3306 (the MySQL/Aurora port) only from the app-tier security group
resource "aws_security_group" "dbserverSG" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.appserverSG.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "04. Database Server Security Group"
  }
}

STEP 7: Provision Web Tier Instances

  • IAM Profile

    We will need to create an IAM instance profile whose role has two managed policies attached - AmazonS3ReadOnlyAccess and AmazonSSMManagedInstanceCore. This profile will be attached to the instances, enabling them to read the uploaded files on S3 and letting us connect to the instance terminal through SSM Session Manager rather than opening port 22 for SSH.
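
A minimal sketch of that profile in HCL, assuming the role and profile names below (the two AWS managed policies are AmazonS3ReadOnlyAccess and AmazonSSMManagedInstanceCore):

resource "aws_iam_role" "ec2_role" {
  name = "three-tier-ec2-role" # assumed name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Lets the instances read the artifacts uploaded to S3 in Step 1
resource "aws_iam_role_policy_attachment" "s3_read" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

# Lets us open a terminal on the instances through SSM Session Manager
resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ec2_profile" {
  name = "three-tier-ec2-profile" # assumed name
  role = aws_iam_role.ec2_role.name
}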

  • Launch Template

    Since we are trying to automate as much as possible and we want instances to auto-scale, we use an auto-scaling group. An auto-scaling group works with a launch template, so we create a launch template with a key pair for SSH and an entry script, and attach the IAM profile created earlier.
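
A rough sketch of a web-tier launch template; the AMI ID, key pair name, and entry-script filename are assumptions, so swap in your own:

resource "aws_launch_template" "web" {
  name_prefix   = "${var.env_prefix}-web-"
  image_id      = "ami-0c02fb55956c7d316" # assumed Amazon Linux 2 AMI; look up the current one for your region
  instance_type = "t2.micro"
  key_name      = "my-keypair" # assumed key pair created beforehand

  iam_instance_profile {
    name = aws_iam_instance_profile.ec2_profile.name
  }

  vpc_security_group_ids = [aws_security_group.webserverSG.id]

  # Entry script that installs nginx and copies the frontend files from S3
  user_data = base64encode(file("${path.module}/web_entry_script.sh"))

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "${var.env_prefix}_web_instance"
    }
  }
}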

  • Load Balancer

    Next, we create a load balancer. We want this load balancer to receive traffic from outside the VPC, so we make it internet-facing. The job of the load balancer is to distribute traffic evenly across all instances so that no single instance is overwhelmed. For the load balancer to know which instances to distribute traffic to, the instances must be registered in the load balancer's target group, and the load balancer checks the health of an instance before routing to it. The part of the ALB that listens for incoming requests, processes them, and routes them to the target group is called the listener. It listens on a port, typically 80 or 443, and routes requests based on the rules specified.
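
Put together, the internet-facing load balancer, its web-tier target group, and a port-80 listener could look like this sketch (the names are built from env_prefix; adjust the health check path to match your app):

resource "aws_lb" "external" {
  name               = "${var.env_prefix}-external-alb"
  internal           = false # internet-facing
  load_balancer_type = "application"
  security_groups    = [aws_security_group.externalLoadBalancerSG.id]
  subnets            = aws_subnet.public[*].id
}

resource "aws_lb_target_group" "web" {
  name     = "${var.env_prefix}-web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/" # assumed health check path
  }
}

# The listener: receives requests on port 80 and forwards them to the web target group
resource "aws_lb_listener" "external_http" {
  load_balancer_arn = aws_lb.external.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}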

  • Auto-Scaling Group

    The auto-scaling group scales instances up or down based on traffic demand. For this to work, we specify the launch template, the minimum and maximum number of instances, the subnets to launch the instances in, the target group to register the instances in for the ALB, and a health check. We also set an auto-scaling policy to scale the instances up or down.
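
A sketch of the web-tier auto-scaling group and a simple target-tracking policy, assuming the launch template and target group from the previous sketches (the sizes and the 50% CPU target are illustrative):

resource "aws_autoscaling_group" "web" {
  name                = "${var.env_prefix}-web-asg"
  min_size            = 1
  max_size            = 2
  desired_capacity    = 1
  vpc_zone_identifier = aws_subnet.public[*].id
  target_group_arns   = [aws_lb_target_group.web.arn]
  health_check_type   = "ELB"

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}

# Scale out/in to keep average CPU around 50%
resource "aws_autoscaling_policy" "web_cpu" {
  name                   = "${var.env_prefix}-web-cpu-policy"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50
  }
}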

STEP 8: Provision App Tier Instances

Now for the backend configuration, we do something similar, but with small yet important changes; a combined sketch of the internal load balancer and the app-tier auto-scaling group follows the list below.

  • IAM Profile

    We also attach the same IAM profile we created earlier to the backend instances

  • Launch Template

    The launch template configuration is the same as before, but the entry script should be different, as we want different commands to run on our backend instances.

  • Load Balancer

    This time, we create an internal load balancer (not internet-facing), as we don't want internet traffic to hit these instances directly.
    This load balancer also listens on port 80.

  • Auto-Scaling Group

    The auto-scaling group configuration is also similar to the previous one; the only differences are that the instances are placed in the private subnets and the target group is that of the internal ALB.
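
Here is the combined sketch, assuming an aws_launch_template.app defined like the web one but with the backend entry script; note internal = true on the load balancer, the target group pointing at port 9662, and the private subnets in the auto-scaling group:

resource "aws_lb" "internal" {
  name               = "${var.env_prefix}-internal-alb"
  internal           = true # not internet-facing
  load_balancer_type = "application"
  security_groups    = [aws_security_group.internalLoadBalancerSG.id]
  subnets            = aws_subnet.private[*].id
}

resource "aws_lb_target_group" "app" {
  name     = "${var.env_prefix}-app-tg"
  port     = 9662 # the NodeJS server port
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_listener" "internal_http" {
  load_balancer_arn = aws_lb.internal.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

resource "aws_autoscaling_group" "app" {
  name                = "${var.env_prefix}-app-asg"
  min_size            = 1
  max_size            = 2
  desired_capacity    = 1
  vpc_zone_identifier = aws_subnet.private[*].id # private subnets this time
  target_group_arns   = [aws_lb_target_group.app.arn]
  health_check_type   = "ELB"

  launch_template {
    id      = aws_launch_template.app.id # assumed app-tier launch template
    version = "$Latest"
  }
}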

STEP 9: Provision Database Tier Instances

  • Database Subnet Group

    First, we create a DB subnet group spanning the two database private subnets created earlier.

  • DB Instance

    Then we create our DB instance in that DB subnet group. We specify the name, the engine, the username and password of the master user, the security group, and the Availability Zone. Do not use Multi-AZ, as that will incur costs outside the free tier; instead, deploy the DB instance in one AZ and a read replica in another AZ, and that way we can still achieve redundancy.
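
A sketch of the database tier under those constraints; the database name, master username, and var.db_password are assumptions (declare the variable yourself and keep the real secret out of version control), and the read replica is optional:

resource "aws_db_subnet_group" "db" {
  name       = "${var.env_prefix}-db-subnet-group"
  subnet_ids = aws_subnet.db_private[*].id
}

resource "aws_db_instance" "main" {
  identifier             = "${var.env_prefix}-db"
  engine                 = "mysql"
  engine_version         = "8.0"
  instance_class         = "db.t3.micro" # free-tier eligible
  allocated_storage      = 20
  db_name                = "appdb"         # assumed database name
  username               = "admin"         # assumed master username
  password               = var.db_password # declare this variable; do not hard-code secrets
  db_subnet_group_name   = aws_db_subnet_group.db.name
  vpc_security_group_ids = [aws_security_group.dbserverSG.id]
  availability_zone      = var.avail_zone[0]
  multi_az               = false # Multi-AZ would incur costs outside the free tier
  skip_final_snapshot    = true
}

# Optional read replica in the second AZ for redundancy
resource "aws_db_instance" "replica" {
  identifier          = "${var.env_prefix}-db-replica"
  replicate_source_db = aws_db_instance.main.identifier
  instance_class      = "db.t3.micro"
  availability_zone   = var.avail_zone[1]
  skip_final_snapshot = true
}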

Step 10: Run Terraform Commands

We need to run the following commands to test and apply our configuration

  • terraform fmt
  • terraform validate
  • terraform plan -out=tf.plan
  • terraform apply tf.plan

If there are any errors, Terraform will report them so you can correct them.
After the infrastructure is provisioned successfully, visit the external load balancer's DNS name to view the hosted website.
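
To avoid hunting for that DNS name in the console, you can add an output for it, assuming the aws_lb.external resource from the earlier sketch; terraform apply will then print it when it finishes:

output "external_alb_dns" {
  description = "Public DNS name of the internet-facing load balancer"
  value       = aws_lb.external.dns_name
}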

The GitHub Repository for this Project

