Chabane R. for Stack Labs

Configuring an isolated network in AWS

In the first part, we introduced the security patterns that can be implemented to secure the connectivity between Amazon EKS and Amazon RDS. In this part, we will implement the network isolation by deploying the following AWS resources:

  • A VPC with eight subnets:
    • 2 public and 2 private subnets for Amazon EKS.
    • 2 public and 2 private subnets for Amazon RDS.
  • An Internet Gateway attached to the VPC.
  • NAT gateways attached to the EKS public subnets.
  • A network ACL for each pair of subnets.

[Architecture diagram: VPC with EKS and RDS public/private subnets, internet gateway, NAT gateways, and network ACLs]

VPC

Let's start with the Virtual Private Cloud.

Create a Terraform file infra/plan/vpc.tf. A simple VPC resource is created:

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr_block
  instance_tenancy     = "default"
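  # DNS support and hostnames let resources inside the VPC resolve DNS names (e.g. the RDS endpoint)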
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "main-${var.env}"
    Environment = var.env
  }
}

Subnets

Now we create our eight subnets:

  • Two public subnets for high availability. They will host the external Application Load Balancers created by Amazon EKS and all internet-facing Kubernetes workloads.
  • Two private subnets for high availability. They will host the internal Application Load Balancers created by Amazon EKS and all internal Kubernetes workloads.
  • (Optional) Two more public subnets for high availability. They will host the external Network Load Balancers used to expose our private RDS PostgreSQL instance.
  • Two more private subnets for high availability. They will host our Amazon RDS PostgreSQL instance.

Create a Terraform file infra/plan/subnet.tf:

resource "aws_subnet" "private" {
  for_each = {
    for subnet in local.private_nested_config : "${subnet.name}" => subnet
  }

  vpc_id                  = aws_vpc.main.id
  cidr_block              = each.value.cidr_block
  availability_zone       = each.value.az
  map_public_ip_on_launch = false

  tags = {
    Environment = var.env
    Name        = "${each.value.name}-${var.env}"
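    # Tag used by Kubernetes/EKS to discover subnets for internal load balancers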
    "kubernetes.io/role/internal-elb" = each.value.eks ? "1" : ""
  }

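  # EKS adds its own tags to these subnets (e.g. kubernetes.io/cluster/<cluster-name>),
  # so we ignore out-of-band tag changes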
  lifecycle {
    ignore_changes = [tags]
  }
}

resource "aws_subnet" "public" {
  for_each = {
    for subnet in local.public_nested_config : "${subnet.name}" => subnet
  }

  vpc_id                  = aws_vpc.main.id
  cidr_block              = each.value.cidr_block
  availability_zone       = each.value.az
  map_public_ip_on_launch = true

  tags = {
    Environment = var.env
    Name        = "${each.value.name}-${var.env}"
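    # Tag used by Kubernetes/EKS to discover subnets for internet-facing load balancers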
    "kubernetes.io/role/elb" = each.value.eks ? "1" : ""
  }

  lifecycle {
    ignore_changes = [tags]
  }
}

I used local values to differentiate between the subnet types. Note that the EKS subnets use larger /23 blocks (512 addresses each); with the AWS VPC CNI, every pod consumes a VPC IP address, so the EKS subnets need more room than the RDS ones.

Create a Terraform file infra/plan/variable.tf:


variable "private_network_config" {
  type = map(object({
      cidr_block               = string
      az                       = string
      associated_public_subnet = string
      eks                      = bool
  }))

  default = {
    "private-eks-1" = {
        cidr_block               = "10.0.0.0/23"
        az                       = "eu-west-1a"
        associated_public_subnet = "public-eks-1"
        eks                      = true
    },
    "private-eks-2" = {
        cidr_block               = "10.0.2.0/23"
        az                       = "eu-west-1b"
        associated_public_subnet = "public-eks-2"
        eks                      = true
    },
    "private-rds-1" = {
        cidr_block               = "10.0.4.0/24"
        az                       = "eu-west-1a"
        associated_public_subnet = ""
        eks                      = false
    },
    "private-rds-2" = {
        cidr_block               = "10.0.5.0/24"
        az                       = "eu-west-1b"
        associated_public_subnet = ""
        eks                      = false
    }
  }
}

locals {
    private_nested_config = flatten([
        for name, config in var.private_network_config : [
            {
                name                     = name
                cidr_block               = config.cidr_block
                az                       = config.az
                associated_public_subnet = config.associated_public_subnet
                eks                      = config.eks
            }
        ]
    ])
}

variable "public_network_config" {
  type = map(object({
      cidr_block              = string
      az                      = string
      nat_gw                  = bool
      eks                     = bool
  }))

  default = {
    "public-eks-1" = {
        cidr_block = "10.0.6.0/23"
        az = "eu-west-1a"
        nat_gw = true
        eks = true
    },
    "public-eks-2" = {
        cidr_block = "10.0.8.0/23"
        az = "eu-west-1b"
        nat_gw = true
        eks = true
    },
    "public-rds-1" = {
        cidr_block = "10.0.10.0/24"
        az = "eu-west-1a"
        nat_gw = false
        eks = false
    },
    "public-rds-2" = {
        cidr_block = "10.0.11.0/24"
        az = "eu-west-1b"
        nat_gw = false
        eks = false
    }
  }
}

locals {
    public_nested_config = flatten([
        for name, config in var.public_network_config : [
            {
                name                    = name
                cidr_block              = config.cidr_block
                az                      = config.az
                nat_gw                  = config.nat_gw
                eks                     = config.eks
            }
        ]
    ])
}

Internet Gateway

To allow our public subnets to communicate with the internet, we need to create an internet gateway and associate it with the public subnets using a route table.

Create a Terraform file infra/plan/igw.tf:

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Environment = var.env
    Name        = "igw-${var.env}"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = {
    Environment = var.env
    Name        = "rt-public-${var.env}"
  }
}

resource "aws_route_table_association" "public" {
  for_each = {
    for subnet in local.public_nested_config : "${subnet.name}" => subnet
  }

  subnet_id      = aws_subnet.public[each.value.name].id
  route_table_id = aws_route_table.public.id
}

NAT gateways

To allow the private subnets used by Amazon EKS to access the internet, we need to create a NAT gateway in each public subnet used by Amazon EKS. We then associate the NAT gateways with the private subnets using route tables.

Create a Terraform file infra/plan/nat.tf:

resource "aws_eip" "nat" {
  for_each = {
    for subnet in local.public_nested_config : "${subnet.name}" => subnet
    if subnet.nat_gw == true
  }

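  # NOTE: "vpc = true" works with AWS provider v4 and earlier; newer provider
  # versions (v5+) deprecate it in favor of domain = "vpc"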
  vpc      = true

  tags = {
    Environment = var.env
    Name        = "eip-${each.value.name}-${var.env}"
  }
}

resource "aws_nat_gateway" "nat-gw" {
  for_each = {
    for subnet in local.public_nested_config : "${subnet.name}" => subnet
    if subnet.nat_gw == true
  }

  allocation_id = aws_eip.nat[each.value.name].id
  subnet_id     = aws_subnet.public[each.value.name].id

  tags = {
    Environment = var.env
    Name        = "nat-${each.value.name}-${var.env}"
  }
}

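# One route table per NAT gateway, so each private EKS subnet egresses
# through the NAT gateway in its own availability zone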
resource "aws_route_table" "private" {
  for_each = {
    for subnet in local.public_nested_config : "${subnet.name}" => subnet
    if subnet.nat_gw == true
  }

  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat-gw[each.value.name].id
  }

  tags = {
    Environment = var.env
    Name        = "rt-${each.value.name}-${var.env}"
  }
}

resource "aws_route_table_association" "private" {

  for_each = {
    for subnet in local.private_nested_config : "${subnet.name}" => subnet
    if subnet.associated_public_subnet != ""
  }

  subnet_id      = aws_subnet.private[each.value.name].id
  route_table_id = aws_route_table.private[each.value.associated_public_subnet].id
}

Network Access Control List

A network ACL allows us to restrict the inbound and outbound network traffic to and from a subnet. In our case, we implement the following rules:

  • EKS private and public subnets: allow all inbound and outbound network traffic. These rules are needed so that the Amazon EKS control plane can communicate with the worker nodes.
  • RDS public subnets:
    • allow all inbound/outbound TCP traffic from and to the RDS private subnets;
    • allow inbound/outbound TCP traffic from and to a specific range of IP addresses, on the RDS port only.
  • RDS private subnets:
    • allow inbound traffic on the RDS port from the EKS private subnets, and all TCP traffic from the RDS subnets;
    • allow all outbound TCP traffic to the EKS private subnets and the RDS public subnets.

Create a Terraform file infra/plan/nacl.tf:

resource "aws_network_acl" "eks-external-zone" {
  vpc_id = aws_vpc.main.id

  subnet_ids = [aws_subnet.public["public-eks-1"].id, aws_subnet.public["public-eks-2"].id]

  tags = {
    Name        = "eks-external-zone-${var.env}"
    Environment = var.env
  }
}

resource "aws_network_acl_rule" "eks-ingress-external-zone-rules" {
  network_acl_id = aws_network_acl.eks-external-zone.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

resource "aws_network_acl_rule" "eks-egress-external-zone-rules" {
  network_acl_id = aws_network_acl.eks-external-zone.id
  rule_number    = 100
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

resource "aws_network_acl" "eks-internal-zone" {
  vpc_id = aws_vpc.main.id

  subnet_ids = [aws_subnet.private["private-eks-1"].id, aws_subnet.private["private-eks-2"].id]

  tags = {
    Name        = "eks-internal-zone-${var.env}"
    Environment = var.env
  }
}

resource "aws_network_acl_rule" "ingress-internal-zone-rules" {
  network_acl_id = aws_network_acl.eks-internal-zone.id
  rule_number    = 100
  egress         = false
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

resource "aws_network_acl_rule" "egress-internal-zone-rules" {
  network_acl_id = aws_network_acl.eks-internal-zone.id
  rule_number    = 100
  egress         = true
  protocol       = "-1"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 0
}

resource "aws_network_acl" "rds-external-zone" {
  vpc_id = aws_vpc.main.id

  subnet_ids = [aws_subnet.public["public-rds-1"].id, aws_subnet.public["public-rds-2"].id]

  tags = {
    Name        = "rds-external-zone-${var.env}"
    Environment = var.env
  }
}

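# NACL rules are evaluated in ascending rule_number order; the first matching rule applies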
locals {
  nacl_ingress_rds_external_zone_infos = flatten([{
      cidr_block = var.internal_ip_range
      priority   = 100
      from_port  = var.rds_port
      to_port    = var.rds_port
  }, {
      cidr_block = aws_subnet.private["private-rds-1"].cidr_block
      priority   = 101
      from_port  = 0
      to_port    = 65535
  },{
      cidr_block = aws_subnet.private["private-rds-2"].cidr_block
      priority   = 102
      from_port  = 0
      to_port    = 65535
  },{
      cidr_block = aws_subnet.public["public-rds-1"].cidr_block
      priority   = 103
      from_port  = 0
      to_port    = 65535
  },{
      cidr_block = aws_subnet.public["public-rds-2"].cidr_block
      priority   = 104
      from_port  = 0
      to_port    = 65535
  }]) 
}

resource "aws_network_acl_rule" "rds-ingress-external-zone-rules" {
  for_each  = {
    for subnet in local.nacl_ingress_rds_external_zone_infos : "${subnet.priority}" => subnet
  }

  network_acl_id = aws_network_acl.rds-external-zone.id
  rule_number    = each.value.priority
  egress         = false
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = each.value.cidr_block
  from_port      = each.value.from_port
  to_port        = each.value.to_port
}

resource "aws_network_acl_rule" "rds-egress-external-zone-rules" {
  network_acl_id = aws_network_acl.rds-external-zone.id
  rule_number    = 100
  egress         = true
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 0
  to_port        = 65535
}

resource "aws_network_acl" "rds-secure-zone" {
  vpc_id = aws_vpc.main.id

  subnet_ids = [aws_subnet.private["private-rds-1"].id, aws_subnet.private["private-rds-2"].id]

  tags = {
    Name        = "rds-secure-zone-${var.env}"
    Environment = var.env
  }
}

locals {
  nacl_secure_ingress_egress_infos = flatten([{
      cidr_block = aws_subnet.private["private-eks-1"].cidr_block
      priority   = 101
      from_port  = var.rds_port
      to_port    = var.rds_port
  },{
      cidr_block = aws_subnet.private["private-eks-2"].cidr_block
      priority   = 102
      from_port  = var.rds_port
      to_port    = var.rds_port
  },{
      cidr_block = aws_subnet.private["private-rds-1"].cidr_block
      priority   = 103
      from_port  = 0
      to_port    = 65535
  },{
      cidr_block = aws_subnet.private["private-rds-2"].cidr_block
      priority   = 104
      from_port  = 0
      to_port    = 65535
  },{
      cidr_block = aws_subnet.public["public-rds-1"].cidr_block
      priority   = 105
      from_port  = 0
      to_port    = 65535
  },{
      cidr_block = aws_subnet.public["public-rds-2"].cidr_block
      priority   = 106
      from_port  = 0
      to_port    = 65535
  }]) 
}

resource "aws_network_acl_rule" "ingress-secure-zone-rules" {
  for_each  = {
    for subnet in local.nacl_secure_ingress_egress_infos : "${subnet.priority}" => subnet
  }

  network_acl_id = aws_network_acl.rds-secure-zone.id
  rule_number    = each.value.priority
  egress         = false
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = each.value.cidr_block
  from_port      = each.value.from_port
  to_port        = each.value.to_port
}

resource "aws_network_acl_rule" "egress-secure-zone-rules" {
  for_each  = {
    for subnet in local.nacl_secure_ingress_egress_infos : "${subnet.priority}" => subnet
  }
  network_acl_id = aws_network_acl.rds-secure-zone.id
  rule_number    = each.value.priority
  egress         = true
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = each.value.cidr_block
  from_port      = 0
  to_port        = 65535
}

Let's finish configuring our Terraform project.

Complete the infra/plan/variable.tf file:

variable "region" {
  type    = string
  default = "eu-west-1"
}

variable "az" {
  type    = list(string)
  default = ["eu-west-1a", "eu-west-1b"]
}

variable "env" {
  type = string
}

variable "vpc_cidr_block" {
  type = string
}

variable "internal_ip_range" {
    type = string
}

Add an infra/plan/main.tf file:

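# Exposes the account ID, user ID, and ARN of the calling identity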
data "aws_caller_identity" "current" {}

Add an infra/plan/version.tf file:

terraform {
  required_version = ">= 0.12"
}
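(Optional) If you are on Terraform 0.13 or later, you can also pin the AWS provider here so upgrades don't surprise you. A minimal sketch — the version constraint is an assumption; adjust it to the provider series you have tested against:

terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Assumed constraint; pick the series you actually use
      version = "~> 3.0"
    }
  }
}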

Add an infra/plan/provider.tf file:

provider "aws" {
  region = var.region
}

And an infra/plan/backend.tf file. The backend block is left empty on purpose; the bucket, key, and region are supplied at terraform init time via -backend-config flags:

terraform {
  backend "s3" {
  }
}

Now, export the following variables and create an S3 bucket to store your Terraform state.

export ENV=<ENV>
export REGION=eu-west-1
export EKS_CLUSTER_NAME=eks-cluster-$ENV
export AWS_PROFILE=<AWS_PROFILE>
export INTERNAL_IP_RANGE=<LOCAL_OR_INTERNAL_IP_RANGES>
export TERRAFORM_BUCKET_NAME=<BUCKET_NAME>

# Create bucket
aws s3api create-bucket \
     --bucket $TERRAFORM_BUCKET_NAME \
     --region $REGION \
     --create-bucket-configuration LocationConstraint=$REGION

# Block all public access
aws s3api put-public-access-block \
    --bucket $TERRAFORM_BUCKET_NAME \
    --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Enable versioning
aws s3api put-bucket-versioning \
    --bucket $TERRAFORM_BUCKET_NAME \
    --versioning-configuration Status=Enabled

Create an infra/envs/$ENV/terraform.tfvars file and deploy the infrastructure:

env               = "<ENV>"
vpc_cidr_block    = "10.0.0.0/16"
internal_ip_range = "<INTERNAL_IP_RANGE>"
az                = ["eu-west-1a", "eu-west-1b"]

cd infra/envs/dev

sed -i "s,<INTERNAL_IP_RANGE>,$INTERNAL_IP_RANGE,g; s,<ENV>,$ENV,g" terraform.tfvars

terraform init \
    -backend-config="bucket=$TERRAFORM_BUCKET_NAME" \
    -backend-config="key=$ENV/terraform-state" \
    -backend-config="region=$REGION" \
../../plan/ 

terraform apply ../../plan/ 
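To verify the deployment from the CLI instead of the console, you could add a hypothetical infra/plan/output.tf that surfaces the created IDs — a sketch assuming the resource names used above:

output "vpc_id" {
  value = aws_vpc.main.id
}

output "private_subnet_ids" {
  # Map of subnet name => subnet ID
  value = { for name, subnet in aws_subnet.private : name => subnet.id }
}

output "public_subnet_ids" {
  value = { for name, subnet in aws_subnet.public : name => subnet.id }
}

output "nat_gateway_public_ips" {
  value = { for name, ngw in aws_nat_gateway.nat-gw : name => ngw.public_ip }
}

After the apply, terraform output prints these values.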

Let's check that all the resources have been created and are working correctly.

VPC

[Screenshot: the VPC in the AWS console]

Subnets

[Screenshot: the eight subnets in the AWS console]

Internet Gateway

[Screenshot: the internet gateway in the AWS console]

NAT Gateways

[Screenshot: the NAT gateways in the AWS console]

Conclusion

Our network is now ready to host our AWS resources. In the next part, we will focus on setting up Amazon EKS.

Top comments (8)

Jim Eric Skogman

Hi, great work and thank you for sharing!
A quick question, if I may. It says that:
"Internet facing workloads will reside on a public node group deployed on public subnets."
But right now pods with the "public" nodeSelector cannot communicate with the internet, while pods with the "private" nodeSelector can 🤔 Or am I mistaken?

Chabane R.

Hello,

Thanks for your contribution :-)

In this article we didn't deploy any workloads in the public node group. A workload deployed in the public node group would have access to the internet thanks to the internet gateway. The workloads deployed in the private node group have access to the internet thanks to the NAT gateway.

Did you deploy a workload in the public node group? We should only deploy ELBs in the public subnets.

Jim Eric Skogman

Thank you so much for your response!
I see, so with this architecture, pods and applications should be kept on the private node group and access the internet through the NAT gateway. In that case, if I wanted to deploy an Nginx ingress controller, should I deploy it to the private or the public node group?

Thank you again for your time and your hard work 🙏

Chabane R.

That's a good question. Your Nginx ingress controller would create a Network Load Balancer, and it will be deployed in the public subnets.

aws.amazon.com/blogs/opensource/ne...

Even if your Nginx ingress controller is deployed in the public node group, it's supposed to have access to the public internet.

The NACL "eks-ingress-external-zone-rules" allows all inbound and outbound traffic.

(You can try replacing

  from_port = 0
  to_port   = 0

with

  from_port = 0
  to_port   = 65535)

A route table associates the IGW with the public subnets (EKS + RDS).

So the issue could be elsewhere.

Before writing this post, I tested the solution proposed in this Medium post: blog.devgenius.io/create-an-amazon...

Maybe it works with his Terraform?

Jim Eric Skogman

Thank you again for your feedback, and for the link to that article.
I tried running a simple curl from a busybox pod deployed on a node in the public subnet earlier, but it didn't seem to work. I'll try changing the ports you mentioned and test again.

Oleksandr Kobylianskyi

Is there any chance to make the EKS private and public subnet ACLs less permissive? Allowing all inbound/outbound network traffic leads to certain security audit and compliance issues, and I need to allow only the specific minimum traffic. There is not much info about this on the internet; this article is probably the only one I've found so far that touches on the network ACL topic :). And btw, thank you a lot, it's pretty helpful anyway.

Chabane R.

Thanks for your comment!

Yes, you can be less permissive. You can apply the same port restrictions as the security groups:

docs.aws.amazon.com/eks/latest/use...

This article creates a private/public cluster, but you can also have a fully private cluster:

docs.aws.amazon.com/eks/latest/use...

jv4n5e

Hi!
I am a bit new to Terraform and EKS. Could someone please explain what should be placed as <INTERNAL_IP_RANGE>? Should this be a subnet or an IP address range? And what is the logic behind it?

EDIT: Heading into part 3 of this series, I noticed that this value is used for the vpc_config.public_access_cidrs key in the eks-cluster.tf file, meaning that the range is meant to limit the IP addresses on the internet that would have access to the nodes, correct? Could someone please shed some light on this?

Great work and great article btw! Thanks for sharing.