Cybr - [LAB] [Challenge] Configure security groups and NACLs to specific requirements

In this walk-through we are going to solve the lab Configure security groups and NACLs to specific requirements created by Cybr.

However, the twist is we are going to solve this using Terraform!

Let's have a look at the requirements:


Lab Details 👨‍🔬
Length of time: ~30 minutes
Cost: $0
Difficulty: Moderate

Scenario 🧪

Create four separate security groups:

Security Group #1
Name it Web Servers
Provide open access to two commonly used ports for application servers: 80 and 443
This open access should work for both IPv4 and IPv6

Security Group #2
Name it App Servers
Provide open access for instances in the Web Servers SG to be able to communicate with your app servers

Security Group #3
Name it IT Administration
Provide open access for your organization’s IT admins to be able to SSH and/or RDP into the cloud instances

Your IT admins should only ever have access to those instances from the following two IP addresses:
172.16.0.0
192.168.0.0

Security Group #4
Name it Database
Provide open access for application servers to be able to communicate with your MySQL database

Create two separate NACLs:

NACL #1
Name it Public Subnets
Provide open access to allow all traffic that would be allowed by the security groups for resources that would be launched in the public subnets

NACL #2
Name it Private Subnets
Provide open access to allow all traffic that would be allowed by the security groups for resources that would be launched in the private subnets


Let's get started. We are going to create four files:

  • terraform.tf
  • variables.tf
  • main.tf
  • outputs.tf

We start off with our terraform.tf file.

First, we are going to add our Terraform block like so:

terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

This block performs two functions:

  • Requires that the Terraform version be 1.3.0 or higher.
  • Declares the AWS provider by specifying its source (HashiCorp) and requires that its version be 5.0 or higher.

Pinning versions this way ensures our configuration runs against known-compatible Terraform and provider releases.

Next, in this same file we add the provider block:

provider "aws" {
  region = var.aws_region
}

This block configures the AWS provider. It sets the region via a variable, which we will define shortly in a variables.tf file. By using a variable, we can re-configure our region without hardcoding it, allowing for easy updates.
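Because the region comes from a variable, it can also be changed at plan time without editing any code. As a quick illustration (this file is optional and not part of the lab), a terraform.tfvars file would override the default like so:

# terraform.tfvars -- optional override, not required for the lab
aws_region = "us-west-2"

The same override can be passed inline with terraform plan -var="aws_region=us-west-2".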

Let's now create our variables.tf file. The purpose of this file is to define input variables in one place, allow easy modification of infrastructure settings, keep our code flexible and reusable, and support dynamic configuration.

variable "aws_region" {
  description = "The AWS region to deploy resources into"
  type        = string
  default     = "us-east-1"
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "it_admin_ips" {
  description = "List of IPs for IT admin access"
  type        = list(string)
  default     = ["172.16.0.0/16", "192.168.0.0/16"]
}

variable "web_sg_name" {
  type    = string
  default = "web-server"
}

variable "app_sg_name" {
  type    = string
  default = "app-servers"
}

variable "it_sg_name" {
  type    = string
  default = "it-administrator"
}

variable "db_sg_name" {
  type    = string
  default = "database"
}

variable "public_nacl_name" {
  type    = string
  default = "public-subnets"
}

variable "private_nacl_name" {
  type    = string
  default = "private-subnets"
}

In this code we are primarily setting the names of the various resources as variables, per the lab instructions.

For aws_region, we are setting the region to us-east-1. For vpc_cidr, we set the VPC CIDR block. We also set the list of IPs for IT admin access (it_admin_ips) as specified by the lab. Lastly, the rest of the variables hold the names the lab requires us to use for our resources.

Now, we will begin writing our main.tf file and creating the resource blocks.

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "cybr-lab"
  }
}

We begin by creating our VPC resource block, giving it the name 'main'. We set the CIDR block to the variable from variables.tf, which defines the CIDR range of the VPC for our lab. The DNS settings enable DNS resolution and public DNS hostnames. Lastly, we tag the resource so it is easy to track and identify.

Our next resource we will build is the Webserver Security Group. Let's have a look at what this resource block will look like.

resource "aws_security_group" "web_servers" {
  name        = var.web_sg_name
  description = "Allow HTTP and HTTPs from anywhere (IPv4 and IPv6)"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = var.web_sg_name
  }
}

Looking at our resource, we'll start from the top. In the first part, we are assigning the name of the Web Server security group from our variables file which we created earlier. We add a helpful description and associate the security group with the previously created VPC.

In the next two blocks we add ingress on port 80 (HTTP) for IPv4 and IPv6 from any address. We do the same for port 443 (HTTPS) in the following two blocks.
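As a side note, each port's IPv4 and IPv6 rules could also be combined into a single ingress block, since an ingress block accepts both cidr_blocks and ipv6_cidr_blocks. This is just an alternative style, not what the lab file above uses:

  # Alternative style (not used above): one block covering IPv4 and IPv6 for port 443
  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

Both approaches result in the same rules; separate blocks just make each rule a little easier to read on its own.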

Now we have two egress blocks, which technically are not required for return traffic since security groups are stateful (responses to allowed inbound connections are permitted automatically). So why add these blocks? We are adding them for two reasons:

  • We need to allow IPv6 egress (::/0), which the default rule does not include.
  • We want to document the outbound rules explicitly, for clarity and to prepare for future restrictions.

Also worth knowing: when Terraform manages an aws_security_group, it removes AWS's default allow-all egress rule, so any outbound access for new connections has to be declared explicitly.

Note: the -1 for protocol refers to all protocols/network traffic.

Lastly, we tag our resource for identification and tracking purposes.

Let's now build the app servers security group.

resource "aws_security_group" "app_servers" {
  name        = var.app_sg_name
  description = "Allow traffic from web servers"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 0
    to_port         = 65535
    protocol        = "tcp"
    security_groups = [aws_security_group.web_servers.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.app_sg_name
  }
}

We start by assigning the name for the resource from our variables file. We also associate the resource with our VPC.

In the ingress block we allow TCP across the full port range (0-65535). We also reference the web servers security group, since we want to allow access from the web servers per the lab requirements. For the egress block, we allow all outbound access.

Finally we tag the resource for tracking and identification.
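The lab doesn't have us launch any instances, but for context, this is roughly how an app server would pick up this security group. Everything below is a hypothetical sketch: the AMI ID and the aws_subnet.private reference are placeholders that don't exist in this lab's code.

# Hypothetical sketch only -- the lab does not launch instances.
# "ami-xxxxxxxx" and aws_subnet.private are placeholders not defined in this lab.
resource "aws_instance" "app_server" {
  ami                    = "ami-xxxxxxxx"
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.private.id
  vpc_security_group_ids = [aws_security_group.app_servers.id]
}

Any instance launched with this security group attached would then accept TCP traffic from instances in the Web Servers group.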

Moving on to the next resource, we build our IT administration security group.

resource "aws_security_group" "it_admin" {
  name        = var.it_sg_name
  description = "Allow SSH and RDP from IT admin IPs"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = var.it_admin_ips
  }

  ingress {
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = var.it_admin_ips
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.it_sg_name
  }
}

Similar to our other resources we have created thus far, we assign the name from the relevant variable block in our variables.tf file and associate it with our VPC.

We create the ingress blocks for both port 22 (SSH) and port 3389 (RDP). For the CIDR blocks, we reference the allowed IPs [172.16.0.0, 192.168.0.0] from the lab specs using the variable we set in variables.tf.

Again, we add an egress block for this resource as explained earlier.

As usual we tag our resource at the end.
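One thing worth calling out: the lab lists the admin addresses as bare IPs, while our variable default widens them to /16 networks. If the admins really only connect from those two exact hosts, a stricter alternative (my assumption, not something the lab asks for) would be /32 entries:

# Stricter alternative default -- single-host /32 entries instead of /16 networks
variable "it_admin_ips" {
  description = "List of IPs for IT admin access"
  type        = list(string)
  default     = ["172.16.0.0/32", "192.168.0.0/32"]
}

Either way, the security group code stays the same, since it only references the variable.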

Now, we create the Database security group.

resource "aws_security_group" "database" {
  name        = var.db_sg_name
  description = "Allow MySQL access from app servers"
  vpc_id      = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.app_servers.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = var.db_sg_name
  }
}

We set the name as usual and associate it with our VPC. Our ingress limits access via port 3306 (MySQL) to resources that have the app servers security group attached.

Similar to our other resources, we have our egress and tags blocks.

Next we are going to create our public NACL and the associated rules. Keep in mind that since NACLs are stateless, we need matching rules for inbound and outbound traffic. We also space the rule numbers 10 apart to leave room to add further rules in the future if needed.

resource "aws_network_acl" "public" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = var.public_nacl_name
  }
}

In this block, we create the NACL, associate it with our VPC, and tag it with the relevant variable name.

Now we set up the public ingress rules.

resource "aws_network_acl_rule" "public_ingress" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

We set up our IPv4 ingress rule by associating it with our public NACL, allowing all inbound IPv4 traffic, and assigning it a rule number.

resource "aws_network_acl_rule" "public_ingress_ipv6" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 110
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "::/0"
  from_port      = 0
  to_port        = 0
}

We do the same for IPv6, using ipv6_cidr_block instead of cidr_block. Note that egress = false means these are ingress rules.

Since NACLs are stateless, we have to add matching egress rules. Similar to how we set up the ingress rules, we do the following.

resource "aws_network_acl_rule" "public_egress" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 120
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

For IPv4 egress.

resource "aws_network_acl_rule" "public_egress_ipv6" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 130
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  ipv6_cidr_block = "::/0"
  from_port      = 0
  to_port        = 0
}

For IPv6 egress.
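The gaps in rule numbering matter because NACL rules are evaluated in ascending order and the first match wins. If we ever needed to block a specific range before the allow-all at rule 100, a new rule can simply slot in at a lower number without renumbering anything. A hypothetical example (not part of this lab, and the CIDR is just a placeholder from the documentation ranges):

# Hypothetical future rule -- not part of this lab.
# Evaluated before the allow-all at rule 100, so this range is denied
# while all other inbound IPv4 traffic still matches rule 100.
resource "aws_network_acl_rule" "public_ingress_block_example" {
  network_acl_id = aws_network_acl.public.id
  rule_number    = 90
  egress         = false
  protocol       = "-1"
  rule_action    = "deny"
  cidr_block     = "203.0.113.0/24"
  from_port      = 0
  to_port        = 0
}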

If you're following along, you'll note that the lab requires a private NACL as well. We are going to do this very similarly to the public NACL, with the obvious exceptions of creating a private NACL and associating the rules we create with it.

resource "aws_network_acl" "private" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = var.private_nacl_name
  }
}

resource "aws_network_acl_rule" "private_ingress" {
  network_acl_id = aws_network_acl.private.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

resource "aws_network_acl_rule" "private_egress_ipv6" {
  network_acl_id = aws_network_acl.private.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}
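One last point before we wrap up main.tf: an NACL only actually filters traffic once it is associated with one or more subnets. The lab doesn't have us create subnets, but if you extend this setup later, the association would look roughly like this (aws_subnet.public below is a placeholder that doesn't exist in this lab's code):

# Hypothetical sketch only -- the lab does not create subnets.
# aws_subnet.public is a placeholder resource.
resource "aws_network_acl_association" "public" {
  network_acl_id = aws_network_acl.public.id
  subnet_id      = aws_subnet.public.id
}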

That's it. That is our main.tf file. We are now ready to create our outputs.tf file. In this file we are simply outputting the ID of each resource for our reference.

The file will be set up like this.

output "vpc_id" {
  value = aws_vpc.main.id
}

output "web_sg_id" {
  value = aws_security_group.web_servers.id
}

output "app_sg_id" {
  value = aws_security_group.app_servers.id
}

output "it_admin_sg_id" {
  value = aws_security_group.it_admin.id
}

output "database_sg_id" {
  value = aws_security_group.database.id
}

output "public_nacl_id" {
  value = aws_network_acl.public.id
}

output "private_nacl_id" {
  value = aws_network_acl.private.id
}

We have all the files we need. We run a terraform fmt -recursive to format our code properly. Next we run a terraform init to initialize our providers, followed by a terraform validate to ensure there are no errors.

Let's run a terraform plan. Review the resources that will be created. If you are satisfied feel free to run terraform apply -auto-approve to build our resources.

Congratulations, we have successfully completed the lab. Although these resources will cost us no money (at least at the time of this write-up), you may want to run a terraform destroy at some point to clean them up.

You can find the complete files here:
