Kuljot Biring

Cybr - [LAB] [Challenge] Create a VPC with public and private subnets

AWS VPC

Today, we are going to tackle the lab: Cybr - [LAB] [Challenge] Create a VPC with public and private subnets.

However, there's a slight twist! We are going to do the lab using Terraform. Why? Because clicking around in a console is great when you're initially learning, but it isn't realistic when you're working in a more professional environment. Therefore, we are going to use IaC via Terraform to complete the lab.

Before we get started, this walk-through assumes you know how to set up Terraform and have a very basic understanding of it.

Let's get started.

The lab gives the following prompt:

Lab Details 👨‍🔬
Length of time: 20 minutes

Difficulty: Easy

We did something very similar in the demo lesson titled “Creating VPCs and Subnets” but I want you to try and complete this scenario as much as possible without looking back at that lesson. Of course, if you’re stuck and you can’t find answers by searching online, I do recommend using the course lesson material to break through. Pretend like you’ve been asked to do this on the job and troubleshoot to the best of your ability. That will help you build practical skills.

Scenario 🧪
Create a VPC named cybr-vpc-lab that contains 2 public subnets and 2 private subnets. Each of the public subnets should reside in different availability zones, with a private subnet in each of those zones as well.
Use a CIDR block of /16 for the VPC and CIDRs of /24 for the subnets.
Create an S3 Gateway VPC Endpoint that is connected to both of the private subnets.
While you can use the “VPC and more” option to automate a lot of this, I challenge you to manually create these resources instead to really apply what you’ve learned so far.

Tips:

Remember what makes a public subnet versus a private one
Before you launch a resource, it’s a great idea to verify its pricing first. For example, you should not be launching a NAT Gateway if you want to keep the cost at $0.00 since NAT Gateways cost money — all resources needed for this lab don’t cost anything so that’s a hint you don’t need a NAT Gateway

Before we start coding, keep in mind we want our code to be modularized and follow DRY principles. We are going to create a few files which will look like the directory structure below:

cybr-vpc-lab/
├── main.tf             
├── variables.tf       
├── outputs.tf         
└── terraform.tf 

Let's start by creating a terraform.tf file. This file is going to contain our Terraform provider and version constraints.

Pinning versions is a good practice when writing Terraform, as it protects you from breaking changes introduced by newer Terraform or provider releases and keeps your configuration stable and compatible.

terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

In this file we are requiring Terraform version 1.3.0 or greater.

We are also sourcing the AWS provider from HashiCorp and requiring it to be version 5.0 or greater.
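As a quick aside, if you'd rather stay within the 5.x major release instead of allowing any future major version, the pessimistic constraint operator (~>) is a common alternative. A minimal sketch of just the provider entry:

    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # allows any 5.x release, but not 6.0 or higher
    }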

Lastly, we are configuring the provider to use the AWS Region specified in the variables file (which we will create next).

Now that we have our providers taken care of, we are going to create variables.tf. This file will house all of our input variables we will use throughout our code.

Our first variable block will be for the AWS Region we are going to deploy our infrastructure into. Typically, we like to use us-east-1 for this.

variable "aws_region" {
  description = "The AWS region to deploy resources"
  type        = string
  default     = "us-east-1"
}

Next, we were told that our VPC should be created with a CIDR block of /16. So let's create a variable for that VPC accordingly:

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

We also need to create 2 public subnets in different Availability Zones, as well as 2 private subnets in those same zones. All of them are expected to have /24 CIDRs. We create variables for these as follows:

variable "public_subnet_cidrs" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.101.0/24", "10.0.102.0/24"]
}

Note that the default values for both the public and private subnets are defined as a list of strings.
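Individual elements of these lists are referenced by index, which is exactly what we'll do in the subnet resources later. A minimal sketch (the local value names are purely for illustration and not needed for the lab):

locals {
  # Hypothetical locals just to show list indexing against the variables above.
  first_public_cidr   = var.public_subnet_cidrs[0]  # "10.0.1.0/24"
  second_private_cidr = var.private_subnet_cidrs[1] # "10.0.102.0/24"
}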

At this point we are ready to start creating the resource blocks to build the infrastructure required.

Let's write our main.tf file.

As a quick background, resource blocks in Terraform are generally configured as follows:

resource "<PROVIDER>_<RESOURCE_TYPE>" "<RESOURCE_NAME>" {
  argument = some_value

  tags = {
    Name = "some_name"
    Environment = "some_environment"
  }
}

The block is labelled resource, letting Terraform know we are creating a resource.

Next, the provider and resource type signify which provider we are using and what type of resource to create.

This is followed by the managed resource name, which is how we refer to the resource within our configuration (for example, aws_vpc.main).

Within the body of the block we have required arguments, as well as optional tags which help us identify and manage resources.

We begin by first creating the resource block for our VPC.

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "cybr-vpc-labs"
  }
}

Notice that our resource type is aws_vpc indicating to Terraform what type of resource we want to build.

We have the managed resource name of main.

We also have the cidr_block value set to the value of the variable var.vpc_cidr which is being referenced from our variable named vpc_cidr in the variables.tf file.

Both enable_dns_support and enable_dns_hostnames are set to true, so resources in the VPC can resolve DNS names and receive DNS hostnames.

Lastly, we give our resource a Name tag with the value cybr-vpc-lab, as specified in the lab prompt.

Next, to ensure that our subnets are spread across multiple Availability Zones (AZs) in the selected region, we use a data source to dynamically fetch the available zones.

data "aws_availability_zones" "available" {}
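Optionally, this data source accepts a state argument so that only zones currently reported as available are returned; a small variation you could use instead:

data "aws_availability_zones" "available" {
  # Only return zones that are currently available.
  state = "available"
}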

Now, we create the Internet Gateway, which will let resources in our VPC reach the public internet.

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "cybr-igw"
  }
}

Now we will create the public subnets like so:

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "cybr-public-subnet-${count.index + 1}"
  }
}

Looking at our code, we can see we set count to 2, meaning that two subnets will be created. The vpc_id is set to the ID of our main VPC.

You can also note that the cidr_block argument is assigned from the public_subnet_cidrs variable, using the current index from count to select the appropriate CIDR block for each subnet.

The availability_zone is determined from the available zones in the region (us-east-1) that we selected, using the current index to assign each subnet to a different zone.

Furthermore, we can see that map_public_ip_on_launch is set to true which would allow any instances that are launched in this subnet to receive public IP addresses.


Lastly, each subnet is tagged with a name utilizing its index: cybr-public-subnet-1 and cybr-public-subnet-2
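As a side note on design choice: the same subnets could also be created with for_each instead of count, which gives each subnet a stable key rather than a positional index. A rough sketch, not part of this lab's solution (the resource name public_alt is made up):

resource "aws_subnet" "public_alt" {
  # Alternative sketch: iterate over a map keyed by list index.
  for_each = { for idx, cidr in var.public_subnet_cidrs : idx => cidr }

  vpc_id                  = aws_vpc.main.id
  cidr_block              = each.value
  availability_zone       = data.aws_availability_zones.available.names[tonumber(each.key)]
  map_public_ip_on_launch = true

  tags = {
    Name = "cybr-public-subnet-${tonumber(each.key) + 1}"
  }
}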

Now that we have code for our public subnets, we need to create a public route table for them. We want to associate the route table with our VPC using vpc_id = aws_vpc.main.id. Next, we create the route definition that directs all outbound traffic (0.0.0.0/0) to the internet gateway we created earlier using aws_internet_gateway.gw.id. This will enable internet access for resources using this route table. Lastly, we add a tag to more easily track and manage the resource. Here is what our resource block looks like:

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "cybr-public-rt"
  }
}

Great, we have created our public route table and now we just need to associate our public subnets with it. We do this by using aws_route_table_association as the resource type. We also set the count parameter to 2 so that two associations are created. The subnet_id is set to the ID of the public subnets defined earlier using aws_subnet.public[count.index].id, which lets us reference each subnet by its index. Finally, we link each association to the public route table. Our resource block will be as follows:

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

Now that we have coded our public subnets, public route table, and associated them, we need to do the same for the private subnets.

Let's begin by creating the private subnets, which is very similar to how we created the public subnets, except we do not map public IPs onto instances launched in them (map_public_ip_on_launch is simply omitted and defaults to false):

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = data.aws_availability_zones.available.names[count.index]

  tags = {
    Name = "cybr-private-subnet-${count.index + 1}"
  }
}

Similar to before, we will now create the route tables for our private subnets, one per subnet (hence count = 2). The key differences are that there is no route to the internet gateway, and these tables will later be associated with the S3 Gateway Endpoint, which automatically adds a route for the S3 prefix list to each associated table:

resource "aws_route_table" "private" {
  count  = 2
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "cybr-private-rt-${count.index + 1}"
  }
}

The next step is to now associate these private subnets with the private route table we just created:

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}

Our last resource block creates the S3 Gateway Endpoint, which will allow resources in our private subnets to reach S3 without traversing the internet. We will need to do the following: indicate the appropriate resource type, set the VPC ID to that of the main VPC, dynamically construct the service name to reference the S3 service in the region we have chosen, specify the endpoint type, associate the resource with the correct route tables, and finally, tag our resource for identification. The configuration will look something like this:

resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.${var.aws_region}.s3"
  vpc_endpoint_type = "Gateway"

  route_table_ids = [for rt in aws_route_table.private : rt.id]

  tags = {
    Name = "cybr-s3-endpoint"
  }
}

Note that for route_table_ids, we are using a for expression to gather the IDs of the private route tables from aws_route_table.private.
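The same list of IDs can also be written with Terraform's splat syntax, a common shorthand for this pattern:

  # Equivalent to the for expression above:
  route_table_ids = aws_route_table.private[*].id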

For the final stretch of this lab we are going to create an outputs.tf file. The purpose of this file is to define several output values for our Terraform configuration which can be useful for retrieving information about the resources that were created.

We are going to create outputs for the VPC ID, the public subnet IDs, the private subnet IDs, and the S3 VPC Endpoint ID. The code will look like this:

output "vpc_id" {
  description = "The ID of the VPC"
  value       = aws_vpc.main.id
}

output "public_subnet_ids" {
  description = "IDs of the public subnets"
  value       = [for subnet in aws_subnet.public : subnet.id]
}

output "private_subnet_ids" {
  description = "IDs of the private subnets"
  value       = [for subnet in aws_subnet.private : subnet.id]
}

output "s3_vpc_endpoint_id" {
  description = "ID of the S3 Gateway VPC Endpoint"
  value       = aws_vpc_endpoint.s3.id
}
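As a hypothetical illustration of why outputs matter: if this configuration were later consumed as a child module (the module name network and the source path below are made up), a parent configuration could read these values like so:

# Hypothetical parent configuration; "network" and the source path are illustrative only.
module "network" {
  source = "./cybr-vpc-lab"
}

output "network_vpc_id" {
  value = module.network.vpc_id
}

output "network_private_subnet_ids" {
  value = module.network.private_subnet_ids
}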

Now that we have written all our code, we should format it properly. Let's run terraform fmt -recursive to make our code more readable and consistently styled.

We also need to initialize the working directory and download the AWS provider with the command terraform init.

Next, validate the syntax using terraform validate.

We can do a dry run of our code and infrastructure changes using terraform plan.

If we are satisfied with the resources that will be created and how everything looks, we can deploy the resources using terraform apply -auto-approve.

Although the resources we have created should not incur any charges, it's good hygiene to remove them with terraform destroy, confirming the prompt once we have verified that only the resources we created are being destroyed.

Recap:

We have created the following resources:

| Resource Type | Count | Description |
| --- | --- | --- |
| VPC | 1 | Main container for networking |
| Public Subnets | 2 | Spread across AZs |
| Private Subnets | 2 | Spread across AZs |
| Route Tables | 3 | 1 public, 2 private |
| Subnet Associations | 4 | 2 public, 2 private |
| S3 VPC Endpoint | 1 | Connects private subnets to S3 |
| Outputs | 4 | VPC ID, subnet IDs, endpoint ID |

We used Terraform instead of clicking around in the console because managing infrastructure as code allows for repeatability and automation. With Terraform, we can define our infrastructure as code, making it easy to version, share, and reproduce environments consistently, whereas manual configuration can lead to errors and inconsistencies.

For the complete version of all the files, see the links listed below:
