GERALD IZUCHUKWU
Provisioning A Three-Tier Application on AWS using Infrastructure-As-Code (IaC)

This is a general architecture of what we will build. In case it isn't clear enough, please click here
Project HLD

IaC stands for Infrastructure as Code, and for a while I struggled to understand the concept; just when it seemed like I was starting to get it, I would confuse it with something else. IaC allows you to build, change, and manage your infrastructure in a safe, consistent, and repeatable way by defining resource configurations that you can version, reuse, and share. In other words, instead of creating resources one by one through the GUI or CLI, you provision them through code or configuration files, automating the whole process in one click, and Terraform is a tool that helps us do that. An AWS-native service that also achieves IaC is AWS CloudFormation, and you can learn more about it here

Terraform is an open-source tool that helps us provision infrastructure efficiently. It is owned by HashiCorp and uses its own language, HCL (HashiCorp Configuration Language), to describe that infrastructure. Terraform can be used for many things, but since it is primarily an IaC provisioning tool, it also has its limitations.

This article focuses on using Terraform to provision a three-tier application on AWS. There are plenty of three-tier apps out there, some more detailed than this one, but my aim here is to explain every concept used, both to deepen my own understanding (it takes me a while to grasp a concept fully) and to help people like me.

This writeup will use the following technologies:

Network

  1. VPC
  2. Subnet
  3. Route Table
  4. Internet Gateway
  5. NAT Gateway
  6. Security Groups

Compute

  1. Launch Template
  2. Key pair
  3. Elastic Load Balancer
  4. Target Groups
  5. Auto Scaling Groups

Database

  1. RDS Database
  2. Subnet Groups

Other AWS Resources

  1. IAM Role
  2. S3 Bucket
  3. AWS SNS
  4. AWS CloudWatch

Other Non-AWS Resources

  1. Nginx
  2. Docker
  3. Node.js

We are going to break this down into steps

STEP 1: Upload your static files and logic code to Amazon S3 Bucket

To do this, we create an S3 bucket and two folders inside it named frontend and backend (you can name them otherwise). In the frontend folder, upload all your static files as well as your nginx.conf file; in the backend folder, upload all the logic code files. A link to the repo can be found here

S3 Bucket folder

Don't forget to give your bucket a unique name, as S3 bucket names must be globally unique across all AWS accounts.

STEP 2: Set up IAM (User, Roles and Policies)

There are various ways to configure AWS authentication; I will walk you through using IAM.

An IAM user represents a specific person or application that interacts with resources. This is the "user" that allows Terraform to perform tasks in our AWS account, and its actions are defined by the policies attached to it. An IAM user is quite different from an IAM role: a user is mostly configured for long-term tasks, as it has permanent credentials, whereas a role is for short-term, immediate functions, as its credentials are temporary. To create an IAM user for CLI use, follow these steps:

  • Visit AWS Management Console
  • Navigate to IAM
  • Add User, give the user a name
  • Attach the AdministratorAccess policy to the user
  • Review and create user
  • After creating a user, select the user and navigate to security credentials, scroll down to access keys, and click create access keys
  • Select a use case, add a description and the Access key and Secret Access key will be created, download the CSV and save it in a secure folder
  • Now set the environment variables
export AWS_ACCESS_KEY_ID="your_access_key"
export AWS_SECRET_ACCESS_KEY="your_secret_key"
export AWS_REGION="your_region"

Note: I gave this user the AdministratorAccess policy because it will interact with many different resources. Best practice is to compile all the permissions the user actually needs into one policy and attach only that policy to the user.

  1. Set up IAM Role
We will need an IAM role to perform two basic functions, so we create one role and attach two policies to it. The first policy lets our instances read the uploaded code files from Amazon S3. The second is the SSM managed instance core policy, which lets us connect to an instance's terminal without opening an SSH port. Both are AWS-managed policies, so there is no need to create them ourselves. Follow these steps to create the IAM role with the two policies:
  • Navigate to IAM, on the sidebar, click on roles
  • Click on AWS Service (since we are using it for EC2)
  • Choose EC2 for the use case, click next
  • Add permissions, search for "AmazonS3ReadOnlyAccess" and AmazonSSMManagedInstanceCore, check them, and click next.
  • Give the role a name and click "create role"
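The console steps above can also be expressed in Terraform. Here is a minimal sketch; the role, attachment, and profile names are my own placeholders, not from the original project:

```hcl
# IAM role with a trust policy letting EC2 assume it
resource "aws_iam_role" "ec2_role" {
  name = "three-tier-ec2-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Attach the two AWS-managed policies mentioned above
resource "aws_iam_role_policy_attachment" "s3_read" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

# Instance profile, which is what actually gets attached to EC2 instances
resource "aws_iam_instance_profile" "ec2_profile" {
  name = "three-tier-ec2-profile"
  role = aws_iam_role.ec2_role.name
}
```

The instance profile is the piece you later reference from the launch template.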

STEP 3: Set up your terraform

I won't assume you already have Terraform installed. If you do, that's fine; if you don't, follow the steps below.

  • Visit the Terraform download page
  • Select the download configuration for your operating system
  • Test that the install was successful using terraform -version
  • Create a folder for this Terraform project and call it whatever you want; I will call mine "three-tier-app-projects". Change into the folder with cd three-tier-app-projects and create a file named main.tf
  • Add the terraform block and AWS provider block to the file and save it. On the CLI, run terraform init. This command can take a while, so be patient
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.8.0"
}

provider "aws" {
  region  = "us-east-1"
}

Providers are plugins that help create and manage your resources in IaC; providers include aws, docker, nginx, etc.
N/B: For a simple project such as this, we could initialize Terraform with only the provider block, without the terraform block

STEP 4: Setup VPC Network Aspect

  • Create a terraform.tfvars file. This file stores the values for our input variables
  • Create the VPC, the network that houses all our resources. Put the vpc_cidr value in terraform.tfvars, declare it with a variable block, and reference it as var.vpc_cidr in the aws_vpc resource block. Your main.tf and terraform.tfvars files should look like this
provider "aws" {
  region  = "us-east-1"
}

variable "env_prefix" {}
variable "avail_zone" {}
variable "vpc_cidr" {}

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "${var.env_prefix}_vpc"
  }
}

In the terraform.tfvars file, we have

env_prefix    = "three-tier-demo"
avail_zone    = ["us-east-1a", "us-east-1b"]
vpc_cidr      = "10.0.0.0/16"
  • Run the following commands:
  - terraform fmt
  - terraform validate
  - terraform plan
  - terraform apply

terraform fmt formats the Terraform files in the directory where it is run so they follow a consistent structure,
terraform validate checks that the configuration is syntactically valid and internally consistent,
terraform plan outputs all the resources the configuration will add, and
terraform apply applies the configuration after you confirm the prompt with "yes". These are some of the most used Terraform commands.

If everything goes well, a VPC named three-tier-demo_vpc will be provisioned in the us-east-1 region

STEP 5: Let's finish creating our Network resources

  • Create Subnets for Public and Private resources in two availability zones for redundancy
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone       = var.avail_zone[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.env_prefix}_public_subnet-${count.index + 1}"
  }
}

resource "aws_subnet" "private" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index + 2)
  availability_zone       = var.avail_zone[count.index]
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.env_prefix}_private_subnet-${count.index + 1}"
  }
}

resource "aws_subnet" "db_private" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index + 4)
  availability_zone       = var.avail_zone[count.index]
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.env_prefix}_db_private_subnet-${count.index + 1}"
  }
}
  • Create the Internet Gateway and NAT Gateway
resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.main.id
  tags = {
    Name = "${var.env_prefix}_igw"
  }
}
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id

  tags = {
    Name = "${var.env_prefix}_nat_gateway"
  }
}

resource "aws_eip" "nat" {
  vpc = true # AWS provider v4 syntax; in v5 use `domain = "vpc"` instead
  tags = {
    Name = "${var.env_prefix}_eip_nat"
  }
}

N/B: The NAT Gateway and Elastic IP (EIP) are not covered by the free tier. A NAT Gateway costs about $0.05/hr, and an EIP costs $0.005/hr when it is not attached to a running EC2 instance

  • Create a Public and Private RouteTable and associate them to a subnet
resource "aws_route_table" "public_route_table" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }


  tags = {
    Name = "${var.env_prefix}_public_route_table"
  }
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public_route_table.id
}

resource "aws_route_table" "private_route_table" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }

  tags = {
    Name = "${var.env_prefix}_private_route_table"
  }
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private_route_table.id
}

resource "aws_route_table_association" "db_private" {
  count          = 2
  subnet_id      = aws_subnet.db_private[count.index].id
  route_table_id = aws_route_table.private_route_table.id
}


What did we just do?
We created a VPC, with a CIDR block of 10.0.0.0/16, for all our resources to live in.
We created subnets, which are like smaller houses inside the VPC that hold one or more resources.
The public (internet-facing) subnets have an internet gateway route in their route table to allow internet access.
The private (internal-facing) subnets have a NAT gateway route in their route table to allow outbound internet access when needed.
The Elastic IP gives us controlled access: a fixed public IP address that doesn't change even if the resource it is attached to goes down. With a NAT Gateway, instances in a private subnet have no direct public IPs, which enhances security. By pairing an EIP with the NAT Gateway, you control how and when outbound internet access is granted without exposing private instances directly to the internet

STEP 6: Security

For security, we can work at the subnet level (NACLs) or the instance level (security groups). For this project, we will use security groups. We need security groups for the external load balancer, the instances, the internal load balancer, and the RDS instance. To make the solution highly secure, we allow the web tier instances to receive traffic only from the external load balancer, and the app tier instances to receive HTTP traffic only from the internal load balancer. Since the web tier and app tier have to communicate, we allow the internal load balancer to accept HTTP traffic only from the web tier instances.

  • Create a security group for the external load balancer that allows HTTP traffic on port 80 and HTTPS on port 443. Note that the source here is var.my_ip_address (declare it like the other variables and set it in terraform.tfvars), which restricts access to your own IP while testing; use 0.0.0.0/0 instead if the site should be publicly reachable
resource "aws_security_group" "externalLoadBalancerSG" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.my_ip_address]
  }

  ingress {
    from_port   = 443 // https
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [var.my_ip_address]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "01. External LoadBalancer Security Group"
  }

}
  • Create an SG for the web tier instances that allows traffic on port 80 only from the external load balancer security group. You can add an SSH rule too, but it's best to use SSM Session Manager to access the instance terminal. If you prefer SSH, uncomment the SSH ingress rule
resource "aws_security_group" "webserverSG" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.externalLoadBalancerSG.id]
  }

  # Uncomment to allow SSH (prefer SSM Session Manager instead)
  # ingress {
  #   from_port   = 22
  #   to_port     = 22
  #   protocol    = "tcp"
  #   cidr_blocks = ["0.0.0.0/0"]
  # }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "02. Web Server Security Group"
  }
}
  • Create a security group for the internal load balancer, that allows HTTP traffic on port 80 only from the web-tier security groups
resource "aws_security_group" "internalLoadBalancerSG" {
  vpc_id = aws_vpc.main.id
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.webserverSG.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "03. Internal Load Balancer Security Group"
  }
}
  • Create a security group for the app tier instances that allows HTTP traffic on port 9662 (our Node.js server port) only from the internal load balancer security group. If you need SSH access, add an SSH rule like the one in the web tier security group
resource "aws_security_group" "appserverSG" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 9662
    to_port         = 9662
    protocol        = "tcp"
    security_groups = [aws_security_group.internalLoadBalancerSG.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "04. App Server Security Group"
  }
}
  • Finally, we create a security group for our database instances that allows inbound traffic on port 3306 (the MySQL/Aurora port) only from the app server security group
resource "aws_security_group" "dbserverSG" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.appserverSG.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "05. Database Server Security Group"
  }
}

The created security groups as seen on AWS console
Security Groups

STEP 7: Provision Web Tier Instances

  • IAM Role

    We will need the IAM role we created earlier with its two policies, AmazonS3ReadOnlyAccess and AmazonSSMManagedInstanceCore. This role will be attached to the instances so they can read the uploaded files on S3, and so we can open a terminal on them through SSM rather than exposing port 22 for SSH. We already created this role via the console, but it can also be created using Terraform

  • Launch Template

    Since we want to automate as much as possible, and we want instances to scale automatically, we use an Auto Scaling group. Auto Scaling groups work with launch templates, so we create a launch template with a key pair for SSH, an entry-script (the list of commands each instance runs after launch), and the IAM instance profile created earlier. The frontend entry-script is here

  • Load Balancer

    Next, we create a load balancer. We want it to receive traffic from outside the VPC, so we make it internet-facing. The load balancer's job is to distribute traffic evenly across all instances so that no single instance is overwhelmed. For it to know which instances to route to, the instances must be registered in its target group, and it checks the health of an instance before routing traffic to it. The part of the ALB responsible for listening for incoming requests, processing them, and routing them to the target group is called the listener; it listens on port 80 or 443 and routes requests based on the rules specified.

  • Auto-Scaling Group

    The Auto Scaling group scales instances up or down based on traffic demand. For this to work, we specify the launch template, the minimum and maximum number of instances, the subnets to launch instances in, the target group to register them in for the ALB, and the health check. We also set an autoscaling policy to scale the instances up or down.
    The link to the nginx server configuration is here
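The web tier pieces described above can be sketched in Terraform roughly as follows. This is an illustrative sketch, not the exact repo code: the AMI ID, key name, entry-script filename, instance profile name, and sizing numbers are all placeholders you would substitute:

```hcl
# Launch template for the web tier (AMI, key, and user-data are placeholders)
resource "aws_launch_template" "web" {
  name_prefix   = "${var.env_prefix}-web-"
  image_id      = "ami-xxxxxxxx" # e.g. an Amazon Linux 2 AMI in us-east-1
  instance_type = "t2.micro"
  key_name      = "my-key-pair"
  user_data     = filebase64("frontend-entry-script.sh")

  iam_instance_profile {
    name = "three-tier-ec2-profile" # the instance profile from STEP 2
  }

  network_interfaces {
    security_groups = [aws_security_group.webserverSG.id]
  }
}

# Internet-facing ALB in the public subnets
resource "aws_lb" "external" {
  name               = "${var.env_prefix}-external-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.externalLoadBalancerSG.id]
  subnets            = aws_subnet.public[*].id
}

# Target group the ALB forwards to; health checks hit "/"
resource "aws_lb_target_group" "web" {
  name     = "${var.env_prefix}-web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/"
  }
}

# Listener on port 80 routing to the target group
resource "aws_lb_listener" "web_http" {
  load_balancer_arn = aws_lb.external.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}

# ASG launches instances from the template and registers them with the TG
resource "aws_autoscaling_group" "web" {
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = aws_subnet.public[*].id
  target_group_arns   = [aws_lb_target_group.web.arn]
  health_check_type   = "ELB"

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}

# Target-tracking policy: keep average CPU around 50%
resource "aws_autoscaling_policy" "web_cpu" {
  name                   = "${var.env_prefix}-web-cpu"
  autoscaling_group_name = aws_autoscaling_group.web.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50
  }
}
```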

STEP 8: Provision App Tier Instances

Now for the backend configuration, we do something similar but with small, extremely important changes.

  • IAM Role

    We also attach the same IAM role we created earlier to the backend instances

  • Launch Template

    The launch template configuration is the same as earlier but the entry-script should be different as we want different commands to run in our backend instances. The backend entry-script is here

  • Load Balancer

    This time, we create an internal load balancer(not internet facing) as we don't want internet traffic to hit our instances directly.
    This load balancer also listens on port 80

  • Auto-Scaling Group

    The Auto Scaling group configuration is also similar to the previous one; the only differences are that the instances are placed in the private subnets, and the target group is that of the internal ALB.
    The link to the backend express server configuration is here
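The app tier differs from the web tier mainly in the `internal` flag, the subnets, and the target port. A sketch of just those differing pieces (resource names are assumptions; the launch template and ASG mirror the web tier, but with the backend entry-script and the private subnets):

```hcl
# Internal ALB: lives in the private subnets, not reachable from the internet
resource "aws_lb" "internal" {
  name               = "${var.env_prefix}-internal-alb"
  internal           = true
  load_balancer_type = "application"
  security_groups    = [aws_security_group.internalLoadBalancerSG.id]
  subnets            = aws_subnet.private[*].id
}

# Target group forwards to the Node.js port (9662) on the app instances
resource "aws_lb_target_group" "app" {
  name     = "${var.env_prefix}-app-tg"
  port     = 9662
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

# The internal listener still accepts traffic on port 80
resource "aws_lb_listener" "app_http" {
  load_balancer_arn = aws_lb.internal.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```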

We can see the created web-tier and app-tier instances serving our frontend and backend files respectively

Created Instances

STEP 9: Provision Database Tier Instances

  • Database Subnet Group

    First, we create a DB subnet group spanning two of the private subnets created earlier

  • DB Instance

    Then we create our DB instance in that subnet group. We specify the name, the engine, the username and password of the default user, the security group, and the Availability Zone. (Do not use Multi-AZ, as that will incur costs outside the free tier. You can instead deploy the DB instance in one AZ and a read replica in another AZ; that way, we can still achieve redundancy.)
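The database tier can be sketched like this. The `db_username` and `db_password` variables, the database name, and the MySQL engine choice are my assumptions (MySQL's port 3306 matches the database security group); declare and supply the credential variables rather than hard-coding them:

```hcl
# Subnet group spanning the two DB private subnets
resource "aws_db_subnet_group" "db" {
  name       = "${var.env_prefix}-db-subnet-group"
  subnet_ids = aws_subnet.db_private[*].id
}

# Single-AZ MySQL instance kept within the free tier
resource "aws_db_instance" "main" {
  identifier             = "${var.env_prefix}-db"
  engine                 = "mysql"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  db_name                = "appdb"
  username               = var.db_username
  password               = var.db_password
  multi_az               = false
  availability_zone      = var.avail_zone[0]
  db_subnet_group_name   = aws_db_subnet_group.db.name
  vpc_security_group_ids = [aws_security_group.dbserverSG.id]
  skip_final_snapshot    = true
}

# Optional read replica in the second AZ for redundancy
resource "aws_db_instance" "replica" {
  identifier          = "${var.env_prefix}-db-replica"
  replicate_source_db = aws_db_instance.main.identifier
  instance_class      = "db.t3.micro"
  availability_zone   = var.avail_zone[1]
  skip_final_snapshot = true
}
```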

Step 10: Run Terraform Commands

We need to run the following command to test our configuration

  • terraform fmt
  • terraform validate
  • terraform plan -out=tf.plan
  • terraform apply tf.plan

If there are any errors, Terraform will report them so you can correct them.
After the infrastructure is provisioned successfully, visit the external load balancer's DNS name to view the hosted website.

Navigate to the EC2 page on AWS and scroll down the sidebar to Load Balancers.
Load Balancer

Select the external load balancer, copy its DNS name, and paste it into a browser to see the website and interact with the three-tier application. If you used my frontend code, it should look like this

Display Page

Up Next:

  • We will configure Amazon CloudWatch and SNS to notify us of any change in our project, and Amazon Route 53 for our DNS
  • We modularize the terraform configuration, which is a best practice.

The GitHub Repository for this Project
