Deploying the full three-tier application

In the previous post, we deployed the public side of our infrastructure. In this part, we will deploy the whole stack, which consists of the following (a rough Terraform sketch of the networking pieces follows below):

  • 1 VPC
  • 2 Public subnets
  • 2 Private subnets
  • 2 Autoscaling groups
  • 5 Security Groups
  • 2 Load Balancers (one private, one public)
  • 2 Private EC2 instances (representing our application tier)
  • 2 Public EC2 instances (representing our presentation tier)
  • 2 NAT Gateways (so the private instances can reach the internet)
  • 2 Elastic IP addresses, one for each NAT Gateway
  • 1 RDS instance

You can check out the source code for this part here
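
To give a feel for how the networking part of that list translates into Terraform, here is a minimal sketch. The resource names, CIDR ranges and availability zones below are illustrative assumptions, not necessarily the repository's exact values:

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# Two public and two private subnets, spread across two availability zones
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2)
  availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
}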

The following is the diagram of our infrastructure by the end of this part:

[Diagram: the final three-tier infrastructure]

Our Application

Our application consists of 2 tiers that will be deployed to the EC2 instances as docker containers:

  • presentation tier (the customer-facing part of the application, i.e. what the customer interacts with)
  • application tier (where the business logic lives)

To keep things simple, our presentation tier simply forwards requests to the application tier, which in turn runs SQL queries on the RDS instance.

There is a setup-ecrs.sh script that builds these application images and pushes them to separate ECR repositories. You can inspect the script for more details.

To run the script, first run chmod +x setup-ecrs.sh to make it executable. Then make sure the AWS CLI is installed and configured and that Docker is running, and simply type: ./setup-ecrs.sh

Our new Infrastructure

Building on what we did in part 1, we will add another Load Balancer. This one is an internal LB that forwards the requests coming from the public presentation instances to the private application instances.
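
In Terraform terms, an internal load balancer is simply an aws_lb with internal = true. A minimal sketch is below; the resource name aws_lb.application_tier matches what the launch template references later, but the target group, listener, ports and security group name are assumptions:

# Internal ALB sitting in the private subnets, fronting the application tier
resource "aws_lb" "application_tier" {
  name               = "application-tier-lb"
  internal           = true
  load_balancer_type = "application"
  subnets            = aws_subnet.private[*].id
  security_groups    = [aws_security_group.internal_lb.id]
}

resource "aws_lb_target_group" "application_tier" {
  name     = "application-tier-tg"
  port     = 3000
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_listener" "application_tier" {
  load_balancer_arn = aws_lb.application_tier.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.application_tier.arn
  }
}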

It is also worth mentioning that we created 2 Launch Templates and 2 autoscaling groups (one for the presentation tier, and one for the application tier).
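
Both tiers follow the same shape: a launch template holding the AMI, instance type, instance profile and user data, and an autoscaling group that spans the tier's two subnets. A rough sketch for the presentation tier (the AMI data source, target group name and sizes are assumptions):

resource "aws_launch_template" "presentation_tier" {
  name_prefix   = "presentation-tier-"
  image_id      = data.aws_ami.amazon_linux.id   # assumed AMI lookup
  instance_type = "t2.micro"
  # user_data and iam_instance_profile omitted here; they are covered further down
}

resource "aws_autoscaling_group" "presentation_tier" {
  name                = "presentation-tier-asg"
  min_size            = 2
  desired_capacity    = 2
  max_size            = 2
  vpc_zone_identifier = aws_subnet.public[*].id
  target_group_arns   = [aws_lb_target_group.front_end.arn]   # assumed front-end target group

  launch_template {
    id      = aws_launch_template.presentation_tier.id
    version = "$Latest"
  }
}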

Security Groups

To allow traffic between the load balancers and the public and private instances, we added a security group for each component; these can be viewed here.

Basically, the front-facing load balancer's security group allows HTTP connections from everywhere.
The presentation tier security group only allows connections from the front-facing LB, and so on.
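
The chaining works by referencing the upstream security group's ID instead of a CIDR range. A hedged sketch of the first two links in that chain (the group names are assumptions; port 3000 matches the docker run command shown later):

# Public-facing LB: accepts HTTP from anywhere
resource "aws_security_group" "front_end_lb" {
  name   = "front-end-lb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Presentation instances: only reachable from the public LB's security group
resource "aws_security_group" "presentation_tier" {
  name   = "presentation-tier-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 3000
    to_port         = 3000
    protocol        = "tcp"
    security_groups = [aws_security_group.front_end_lb.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}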

RDS

We added a module for RDS, which provisions an RDS instance that the application EC2 instances will query for data.

This module will create 3 resources:

  • A DB subnet group
  • A security group
  • An RDS instance

The module exports the RDS instance's address, which is needed by the application tier.
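
Condensed, the module looks roughly like this (the variable names inside the module, the engine settings and the instance class are assumptions; only the rds_address output name is taken from the launch template shown later):

resource "aws_db_subnet_group" "this" {
  name       = "rds-subnet-group"
  subnet_ids = var.private_subnet_ids
}

# Only the application tier's security group may reach MySQL on 3306
resource "aws_security_group" "rds" {
  name   = "rds-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = var.application_sg_ids
  }
}

resource "aws_db_instance" "this" {
  identifier             = "three-tier-db"
  engine                 = "mysql"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  db_name                = var.db_name
  username               = var.db_username
  password               = var.db_password
  db_subnet_group_name   = aws_db_subnet_group.this.name
  vpc_security_group_ids = [aws_security_group.rds.id]
  skip_final_snapshot    = true
}

# Exported so the application tier user data can reach the database
output "rds_address" {
  value = aws_db_instance.this.address
}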

Gateways

We added 2 NAT Gateways that allow the EC2 instances in the private subnets (the application tier) to access the internet and download the packages needed to run the Docker image.

The presentation tier EC2 instances are public and already have access to the internet, so they can download the needed packages without extra steps.

We assigned an Elastic IP address to each of the NAT Gateways.
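
In Terraform, that means one EIP and one NAT Gateway per public subnet, plus a private route table per AZ that sends outbound traffic through its NAT Gateway. A minimal sketch (resource names assumed):

resource "aws_eip" "nat" {
  count = 2
  vpc   = true
}

resource "aws_nat_gateway" "this" {
  count         = 2
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
}

# Route the private subnets' internet-bound traffic through the NAT Gateway in the same AZ
resource "aws_route_table" "private" {
  count  = 2
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.this[count.index].id
  }
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}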

EC2

Our EC2 instances need to access ECR to pull the Docker images of our applications. To allow this, we created an instance role that gives EC2 access to ECR.
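
A hedged sketch of such a role, using the AWS-managed AmazonEC2ContainerRegistryReadOnly policy (the role and profile names are assumptions):

# Allow EC2 to assume the role
data "aws_iam_policy_document" "ec2_assume" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ec2_ecr" {
  name               = "ec2-ecr-pull-role"
  assume_role_policy = data.aws_iam_policy_document.ec2_assume.json
}

# Read-only pull access to ECR
resource "aws_iam_role_policy_attachment" "ecr_read_only" {
  role       = aws_iam_role.ec2_ecr.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

# Instance profile that the launch templates attach to the instances
resource "aws_iam_instance_profile" "ec2_ecr" {
  name = "ec2-ecr-pull-profile"
  role = aws_iam_role.ec2_ecr.name
}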

As we said already, we have 2 launch templates: one for the application tier and another for the presentation tier.

We created 2 user data files that the launch templates will reference.

Presentation user data script:

#!/bin/bash
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user
aws ecr get-login-password --region ${region}  | docker login --username AWS --password-stdin ${ecr_url}
docker run --restart always -e APPLICATION_LOAD_BALANCER=${application_load_balancer} -p 3000:3000 -d ${ecr_url}/${ecr_repo_name}:latest

This script installs Docker and starts it. It also logs in to the ECR registry so we can pull and run our presentation tier Docker container.

As you can see, we are passing the application load balancer's DNS name as the APPLICATION_LOAD_BALANCER environment variable to the Docker container, since the presentation tier needs it to forward client requests to the application tier.

In the Launch template we are referencing this script:

user_data = base64encode(templatefile("./../user-data/user-data-presentation-tier.sh", {
    application_load_balancer = aws_lb.application_tier.dns_name,
    ecr_url                   = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    ecr_repo_name             = var.ecr_presentation_tier,
    region                    = var.region
  }))

The ecr_url is the URL of the ECR registry; combined with ecr_repo_name, it points to the presentation repository that was created when running the setup-ecrs.sh script.

The application launch template passes additional variables to the application tier user data script:

user_data = base64encode(templatefile("./../user-data/user-data-application-tier.sh", {
    rds_hostname  = module.rds.rds_address,
    rds_username  = var.rds_db_admin,
    rds_password  = var.rds_db_password,
    rds_port      = 3306,
    rds_db_name   = var.db_name
    ecr_url       = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    ecr_repo_name = var.ecr_application_tier,
    region        = var.region
  }))

We are passing the RDS instance details, such as the hostname, username, password, and so on.

Deploying the stack

As mentioned earlier, make sure to run ./setup-ecrs.sh prior to deploying the infrastructure, since our EC2 instances will download the Docker images from the ECR repositories.

If you haven't initialized the terraform project yet, simply navigate to the terraform folder and run terraform init.

We have a terraform.example.tfvars file that holds all our variable values. Rename it to terraform.tfvars so that it is picked up automatically when we run terraform apply.
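
Based on the variables referenced in the launch template snippets above, terraform.tfvars looks roughly like this (the values are examples only; check variables.tf in the repository for the authoritative list, and keep the password out of version control):

region                = "us-east-1"
ecr_presentation_tier = "presentation-tier"
ecr_application_tier  = "application-tier"
db_name               = "users"
rds_db_admin          = "admin"
rds_db_password       = "change-me-please"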

Once complete, go ahead and run terraform apply, and type yes to approve the changes.

It might take a while, since we are provisioning quite a few resources, and the RDS instance in particular takes some time.

If all goes as planned, the deployment succeeds and you get the DNS name of the front-facing load balancer:

.........
Apply complete! Resources: 39 added, 0 changed, 0 destroyed.

Outputs:

lb_dns_url = "front-end-lb-**********.us-east-1.elb.amazonaws.com"

Testing the Application

Now that we have the front-end Load Balancer's DNS name, we can test the application by simply visiting front-end-lb-**********.us-east-1.elb.amazonaws.com/, which prints out a hello message with the server's hostname.


We can call the front-end-lb-**********.us-east-1.elb.amazonaws.com/init endpoint. The request reaches the presentation tier, which forwards it (via the internal Load Balancer) to the application tier, which finally creates a table called users and adds 2 users to it. You can view the code here:

app.get('/init', async (req, res) => {
  connection.query('CREATE TABLE IF NOT EXISTS users (id INT(5) NOT NULL AUTO_INCREMENT PRIMARY KEY, lastname VARCHAR(40), firstname VARCHAR(40), email VARCHAR(30));');
  connection.query('INSERT INTO users (lastname, firstname, email) VALUES ( "Tony", "Sam", "tonysam@whatever.com"), ( "Doe", "John", "john.doe@whatever.com" );');
  res.send({ message: "init step done" })
})

To view the users table, you can call front-end-lb-**********.us-east-1.elb.amazonaws.com/users


Destroying the infrastructure

Keeping in mind that some of these services incur charges, don't forget to clean up the environment. You can do so by running terraform apply -destroy -auto-approve, followed by ./destroy-ecrs.sh, which deletes the ECR repositories that store our Docker images.

Summary

In this post, we completed the infrastructure. We created a main VPC with public and private subnets, an Internet Gateway, NAT Gateways, Load Balancers, Autoscaling Groups and so on.

We saw how to provision and destroy our application using Terraform.

However, you might have noticed that the application's lifecycle is tightly coupled with the infrastructure's lifecycle. In the next post, we will see how we can split them into their own lifecycles and do some refactoring of our scripts.

Feel free to comment and leave your thoughts 🙏🏻!
