DEV Community

Revathi Joshi for AWS Community Builders

Posted on • Originally published at Medium

Deploying a Two-Tier Architecture in AWS Using Terraform Modules


In this article, I am going to show you how to create and deploy a two-tier architecture, using Terraform modules: highly available web servers placed in private subnets, reached over SSH through a bastion host placed in the public subnets.

Objectives:

  1. Create a highly available two-tier AWS architecture containing the following:
  • 3 Public Subnets

  • 3 Private Subnets

  • Auto Scaling Group for Bastion Host and Web Server

  • Internet-facing Application Load Balancer for Web Server

  2. Use module blocks for ease of use and re-usability.

In my next article, I am going to show you how to deploy this using Terraform Cloud as a CI/CD tool.

Pre-requisites:

  • AWS user account with admin access, not a root account.

  • Cloud9 IDE, which comes with Terraform installed.

  • GitHub Account

Resources Used:

For this article, I used the Terraform documentation (use the navigation to the left to read about the available resources) and Derek Morgan’s course.

How I will accomplish the Objectives:

Public subnets will have

  • A bastion host that provides SSH connectivity to the EC2 instances placed in the private subnets

  • An Auto Scaling group with a desired capacity of 1, so the bastion host is replaced automatically if it fails

  • A NAT Gateway, with an Elastic IP address, so the private instances can pull updates from the internet

Private subnets will have

  • EC2 instances serving as web servers (named as database servers), which reach the internet through the NAT Gateway and are reached over SSH through the bastion host

  • An internet-facing Application Load Balancer (attached to the public subnets) which directs traffic to our web servers (named as database servers)

  • An Auto Scaling group with a desired capacity of 2 for high availability

For my understanding of how to get this setup working correctly, I used this article as a reference.


In my previous article, I created a two-tier architecture in AWS using Terraform with all the code in ONLY one parent main.tf file.

This article uses Terraform modules for readability and re-usability.

This infrastructure has

  • a parent main.tf file (root module)

  • child modules for each of the AWS components: compute, load-balancing, and networking

The root main.tf file calls these child modules to create our infrastructure.
For this project, in the Cloud9 environment, we will create a directory structure like this.

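Assembled from the file path comments used throughout this article, the layout is:

```
2_tier_architecture_Terraform_modules/
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.tfvars
├── install_apache.sh
├── .gitignore
├── compute/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── loadbalancing/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── networking/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
```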

You can see my complete code for this project in my GitHub Repository.

Copy and paste the code into the corresponding files for all the modules.

Note:

  • Be sure to update the key_name (NVirKey) in the root main.tf to your own EC2 Key Pair name; without it you cannot test SSH connectivity to your EC2 instances.

  • Create a .gitignore file. As the name indicates, it makes Git ignore the files created automatically when running and testing the code, like the terraform.tfstate and terraform.tfstate.backup files, when pushing your code to GitHub.

  • Create a terraform.tfvars file to set the access_ip variable (your IP address), which is used in your root main.tf and determines the CIDR block that can SSH into our bastion host. Ensure that you mark this variable with sensitive = true.
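One quick way to fill in terraform.tfvars, assuming you are fine looking up your public IP with a service such as checkip.amazonaws.com (the commented lookup command and the example address are illustrative):

```shell
# Look up your public IP, e.g.:
#   MY_IP="$(curl -s https://checkip.amazonaws.com)"
MY_IP="203.0.113.7"                          # example address only

# Append the /32 mask so only this one address may SSH to the bastion.
echo "access_ip = \"${MY_IP}/32\"" > terraform.tfvars
cat terraform.tfvars
```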

I am not going to explain what each .tf file does. For that, please check out my article on how to create a basic EC2 Terraform module.

I have separated small snippets of code for all the root and child modules into gist files in the GitHub directory, purely for the sake of reading and understanding them clearly. I am posting the same files here…


Root (Parent) module files:

  • main.tf
# --- root/2_tier_architecture_Terraform_modules/main.tf ---

module "networking" {
  source        = "./networking"
  vpc_cidr      = "10.0.0.0/16"
  access_ip     = var.access_ip
  public_cidrs  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_cidrs = ["10.0.11.0/24", "10.0.12.0/24", "10.0.13.0/24"]
}

module "compute" {
  source         = "./compute"
  public_sg      = module.networking.public_sg
  private_sg     = module.networking.private_sg
  private_subnet = module.networking.private_subnet
  public_subnet  = module.networking.public_subnet
  elb            = module.loadbalancing.elb
  alb_tg         = module.loadbalancing.alb_tg
  key_name       = "NVirKey"
}

module "loadbalancing" {
  source        = "./loadbalancing"
  public_subnet = module.networking.public_subnet
  vpc_id        = module.networking.vpc_id
  web_sg        = module.networking.web_sg
  database_asg  = module.compute.database_asg
}


  • variables.tf
# --- root/2_tier_architecture_Terraform_modules/variables.tf ---


variable "access_ip" {
  type      = string
  sensitive = true
}
  • outputs.tf
# --- root/2_tier_architecture_Terraform_modules/outputs.tf ---

output "alb_dns" {
  value = module.loadbalancing.alb_dns
}
  • install_apache.sh
#!/bin/bash
# --- root/2_tier_architecture_Terraform_modules/install_apache.sh ---
# The shebang must be the first line so cloud-init executes this as a script.

yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello World from $(hostname -f)" > /var/www/html/index.html
  • terraform.tfvars
# --- root/2_tier_architecture_Terraform_modules/terraform.tfvars ---


access_ip = "<your computer IP address>/32"
  • .gitignore
# Local .terraform directories

**/.terraform/*
**/.terraform.*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
#
*.tfvars

# Ignore override files as they are usually used to override resources locally and so
# are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using negated pattern
#
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc

compute module

  • main.tf
# --- root/2_tier_architecture_Terraform_modules/compute/main.tf ---

data "aws_ami" "linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["amazon"]
}

resource "aws_launch_template" "my_bastion" {
  name_prefix            = "my_bastion"
  image_id               = data.aws_ami.linux.id
  instance_type          = var.bastion_instance_type
  vpc_security_group_ids = [var.public_sg]
  key_name               = var.key_name

  tags = {
    Name = "my_bastion"
  }
}

resource "aws_autoscaling_group" "my_bastion" {
  name                = "my_bastion"
  vpc_zone_identifier = tolist(var.public_subnet)
  min_size            = 1
  max_size            = 1
  desired_capacity    = 1

  launch_template {
    id      = aws_launch_template.my_bastion.id
    version = "$Latest"
  }
}

resource "aws_launch_template" "my_database" {
  name_prefix            = "my_database"
  image_id               = data.aws_ami.linux.id
  instance_type          = var.database_instance_type
  vpc_security_group_ids = [var.private_sg]
  key_name               = var.key_name
  # install_apache.sh sits in the root module directory, so anchor the path there
  user_data              = filebase64("${path.root}/install_apache.sh")

  tags = {
    Name = "my_database"
  }
}

resource "aws_autoscaling_group" "my_database" {
  name                = "my_database"
  vpc_zone_identifier = tolist(var.private_subnet) # web (database) servers belong in the private subnets
  min_size            = 2
  max_size            = 3
  desired_capacity    = 2

  launch_template {
    id      = aws_launch_template.my_database.id
    version = "$Latest"
  }
}

resource "aws_autoscaling_attachment" "my_database" {
  autoscaling_group_name = aws_autoscaling_group.my_database.id
  # elb                    = var.elb
  lb_target_group_arn = var.alb_tg
}
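A small optional addition (my own tweak, not part of the original code): the top-level tags argument on aws_launch_template tags the launch template resource itself, so the launched instances show no Name in the console. A tag_specifications block propagates the tag to the instances:

```hcl
resource "aws_launch_template" "my_bastion" {
  # ... same arguments as above ...

  # Tag the launched EC2 instances (the top-level `tags` argument
  # only tags the launch template resource itself).
  tag_specifications {
    resource_type = "instance"

    tags = {
      Name = "my_bastion"
    }
  }
}
```

The same block can be added to the my_database launch template.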
  • outputs.tf
# --- root/2_tier_architecture_Terraform_modules/compute/outputs.tf ---

output "database_asg" {
  value = aws_autoscaling_group.my_database
}
  • variables.tf
# --- root/2_tier_architecture_Terraform_modules/compute/variables.tf ---

variable "public_sg" {}
variable "private_sg" {}
variable "private_subnet" {}
variable "public_subnet" {}
variable "key_name" {}
variable "elb" {}
variable "alb_tg" {}

variable "bastion_instance_type" {
  type    = string
  default = "t2.micro"
}

variable "database_instance_type" {
  type    = string
  default = "t2.micro"
}

loadbalancing module

  • main.tf
# --- root/2_tier_architecture_Terraform_modules/loadbalancing/main.tf ---

resource "aws_lb" "my_lb" {
  name               = "my-loadbalancer"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [var.web_sg]
  subnets            = tolist(var.public_subnet)

  depends_on = [
    var.database_asg
  ]
}

resource "aws_lb_target_group" "my_tg" {
  name     = "my-lb-tg-${substr(uuid(), 0, 3)}"
  protocol = var.tg_protocol
  port     = var.tg_port
  vpc_id   = var.vpc_id
  lifecycle {
    create_before_destroy = true
    ignore_changes        = [name]
  }
}

resource "aws_lb_listener" "my_lb_listener" {
  load_balancer_arn = aws_lb.my_lb.arn
  port              = var.listener_port
  protocol          = var.listener_protocol
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.my_tg.arn
  }
}
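The target group above relies on the default health check settings. If you prefer to make them explicit, the aws_lb_target_group resource accepts a health_check block (the values below are illustrative, not from the original code):

```hcl
resource "aws_lb_target_group" "my_tg" {
  # ... same arguments as above ...

  # Check the Apache index page directly instead of relying on defaults.
  health_check {
    path                = "/"
    protocol            = "HTTP"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}
```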
  • outputs.tf
# --- root/2_tier_architecture_Terraform_modules/loadbalancing/outputs.tf ---

output "elb" {
  value = aws_lb.my_lb.id
}

output "alb_tg" {
  value = aws_lb_target_group.my_tg.arn
}

output "alb_dns" {
  value = aws_lb.my_lb.dns_name
}
  • variables.tf
# --- root/2_tier_architecture_Terraform_modules/loadbalancing/variables.tf ---

variable "public_subnet" {}
variable "vpc_id" {}
variable "web_sg" {}
variable "database_asg" {}



variable "tg_protocol" {
  default = "HTTP"
}

variable "tg_port" {
  default = 80
}

variable "listener_protocol" {
  default = "HTTP"
}

variable "listener_port" {
  default = 80
}

networking module

  • main.tf
# --- root/2_tier_architecture_Terraform_modules/networking/main.tf ---

resource "random_integer" "random" {
  min = 1
  max = 100
}

resource "aws_vpc" "my_vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "my_vpc-${random_integer.random.id}"
  }
}

resource "aws_subnet" "my_public_subnet" {
  count                   = length(var.public_cidrs)
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = var.public_cidrs[count.index]
  map_public_ip_on_launch = true
  availability_zone       = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1e", "us-east-1f"][count.index]

  tags = {
    Name = "my_public_${count.index + 1}"
  }
}

resource "aws_route_table_association" "my_public_assoc" {
  count          = length(var.public_cidrs)
  subnet_id      = aws_subnet.my_public_subnet.*.id[count.index]
  route_table_id = aws_route_table.my_public_rt.id
}

resource "aws_subnet" "my_private_subnet" {
  count             = length(var.private_cidrs)
  vpc_id            = aws_vpc.my_vpc.id
  cidr_block        = var.private_cidrs[count.index]
  availability_zone = ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1d", "us-east-1e", "us-east-1f"][count.index]

  tags = {
    Name = "my_private_${count.index + 1}"
  }
}

resource "aws_route_table_association" "my_private_assoc" {
  count          = length(var.private_cidrs)
  subnet_id      = aws_subnet.my_private_subnet.*.id[count.index]
  route_table_id = aws_route_table.my_private_rt.id
}

resource "aws_internet_gateway" "my_internet_gateway" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name = "my_igw"
  }
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_eip" "my_eip" {
  domain = "vpc" # on AWS provider v5+, this replaces the older `vpc = true`
}

resource "aws_nat_gateway" "my_natgateway" {
  allocation_id = aws_eip.my_eip.id
  subnet_id     = aws_subnet.my_public_subnet[1].id

  # Recommended by the provider docs: do not create the NAT gateway
  # before the VPC has an internet gateway.
  depends_on = [aws_internet_gateway.my_internet_gateway]
}

resource "aws_route_table" "my_public_rt" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name = "my_public"
  }
}

resource "aws_route" "default_public_route" {
  route_table_id         = aws_route_table.my_public_rt.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.my_internet_gateway.id
}

resource "aws_route_table" "my_private_rt" {
  vpc_id = aws_vpc.my_vpc.id

  tags = {
    Name = "my_private"
  }
}

resource "aws_route" "default_private_route" {
  route_table_id         = aws_route_table.my_private_rt.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.my_natgateway.id
}

resource "aws_default_route_table" "my_private_rt" {
  default_route_table_id = aws_vpc.my_vpc.default_route_table_id

  tags = {
    Name = "my_private"
  }
}

resource "aws_security_group" "my_public_sg" {
  name        = "my_bastion_sg"
  description = "Allow SSH inbound traffic"
  vpc_id      = aws_vpc.my_vpc.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.access_ip]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "my_private_sg" {
  name        = "my_database_sg"
  description = "Allow SSH inbound traffic from Bastion Host"
  vpc_id      = aws_vpc.my_vpc.id

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.my_public_sg.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.my_web_sg.id]
  }

}

resource "aws_security_group" "my_web_sg" {
  name        = "my_web_sg"
  description = "Allow all inbound HTTP traffic"
  vpc_id      = aws_vpc.my_vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

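One portability note: the hard-coded us-east-1 AZ lists above tie the module to a single region. A sketch of a region-agnostic variant (my variation, not part of the original code) uses the aws_availability_zones data source:

```hcl
# Discover the AZs available in whatever region the provider is configured for.
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "my_public_subnet" {
  count                   = length(var.public_cidrs)
  vpc_id                  = aws_vpc.my_vpc.id
  cidr_block              = var.public_cidrs[count.index]
  map_public_ip_on_launch = true
  availability_zone       = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "my_public_${count.index + 1}"
  }
}
```

The private subnets can index the same list.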
  • outputs.tf
# --- root/2_tier_architecture_Terraform_modules/networking/outputs.tf ---

output "vpc_id" {
  value = aws_vpc.my_vpc.id
}

output "public_sg" {
  value = aws_security_group.my_public_sg.id
}

output "private_sg" {
  value = aws_security_group.my_private_sg.id
}

output "web_sg" {
  value = aws_security_group.my_web_sg.id
}

output "private_subnet" {
  value = aws_subnet.my_private_subnet[*].id
}

output "public_subnet" {
  value = aws_subnet.my_public_subnet[*].id
}
  • variables.tf
# --- root/2_tier_architecture_Terraform_modules/networking/variables.tf ---

variable "vpc_cidr" {
  type = string
}

variable "public_cidrs" {
  type = list(string)
}

variable "private_cidrs" {
  type = list(string)
}

variable "access_ip" {
  type      = string
  sensitive = true
}

Now run these Terraform commands in the following order:
Run terraform init to initialize Terraform.
Run terraform fmt to format the code.
Run terraform validate to check for any syntax errors.
Run terraform plan to see what resources will be created.
Run terraform apply and type yes when prompted.

At the end of the apply, the alb_dns output is printed, and our infrastructure is complete.



You can copy the DNS name from the Load balancing section of the EC2 console, or take it from the alb_dns Terraform output.
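Since the root module already exposes alb_dns as an output, you can also read it from the terminal and test it with curl:

```
$ terraform output alb_dns
$ curl http://<alb dns name>
```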


Verify the resources in the AWS console.


No Elastic IP


No NAT Gateway


Test ALB

  • Copy the alb_dns output and paste it into a new browser tab to test. It will display the “Hello World” text with the private IP of one of the web (named as database) servers.


  • Now refresh the page; it displays the private IP address of the 2nd web (named as database) server. If you continue to refresh, you will see it switch back and forth.


Bastion host & web (named as database) servers

The console shows 1 bastion host and 2 web (named as database) servers.


Test Bastion host

The instances show no value under the Name column, because the launch templates’ tags argument tags the template resource itself, not the instances it launches.

  • Click the box under the Name column and look at the security group (my_bastion_sg) associated with the instance.


  • Then click on the box to find out the Public IP of our bastion host


Using PuTTY with agent forwarding (use Pageant, the PuTTY authentication agent), log into an SSH session on the bastion host with the key pair named in the root main.tf (NVirKey).
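On Linux or macOS, the equivalent of Pageant with agent forwarding looks like this (a sketch; the NVirKey.pem filename is an assumption matching the key pair name):

```
$ eval "$(ssh-agent -s)"               # start an SSH agent
$ chmod 400 NVirKey.pem
$ ssh-add NVirKey.pem                  # load the private key into the agent
$ ssh -A ec2-user@<bastion public ip>  # -A forwards the agent to the bastion
```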


  • ping google.com to test the connection to the internet.


Test 1st Web (named as database) server

  • Click the box under Name section and look at the security group — my_database_sg associated with the 1st Web instance.


  • Then click on the box to find out the Private IP of our Web (database) server


  • From the bastion host, SSH into the web (database) server using its private IP.
  • ping google.com to test the connection to the internet.

$ ssh ec2-user@<private ip>


You can also run cat /var/www/html/index.html to see the contents of our web page.


Test 2nd Web (named as database) server

  • Click the box under Name section and look at the security group
    my_database_sg associated with the 2nd Web instance.

  • Then click on the box to find out the Private IP of our 2nd Web (database) server


  • From the bastion host, SSH into the 2nd web (database) server using its private IP.

$ ssh ec2-user@<private ip>


ping google.com to test the connection to the internet.


  • You can also run cat /var/www/html/index.html to see the contents of our web page.


Cleanup

Run terraform destroy from the Cloud9 IDE terminal to remove our infrastructure, and type yes when prompted.


What we have done so far

In the Cloud9 environment, we created two web servers placed in private subnets that are reached over SSH through a bastion host placed in the public subnets, with high availability and load balancing.
