Building a Custom VPC and EKS Cluster on AWS with Terraform

In today’s rapidly evolving cloud infrastructure landscape, managing resources efficiently and securely is paramount. Infrastructure as Code (IaC) tools like Terraform empower developers and operators to automate the provisioning and management of cloud resources. In this tech blog, we will delve into how Terraform can be utilized to create a custom Virtual Private Cloud (VPC) and an Amazon Elastic Kubernetes Service (EKS) cluster atop it.

Terraform Configuration

You can visit the HashiCorp site to install Terraform on your machine, depending on your operating system.
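
Once installed, you can confirm Terraform is available from your terminal:

terraform version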

In this blog, we are going to provision a custom VPC with a security group, and then provision an EKS (Elastic Kubernetes Service) cluster with autoscaling worker nodes on AWS.

Now, let’s take a closer look at the Terraform script used to orchestrate our infrastructure.

It is always recommended to split these resources into multiple .tf files (Terraform files), as they are easier to manage and debug later.
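
In this post the configuration is split across the following files, all of which live in the same working directory:

variables.tf - input variables and the AWS provider
vpc.tf       - the custom VPC and security group
eks.tf       - the EKS cluster and managed node group
outputs.tf   - values to read back after provisioning
versions.tf  - required provider versions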

Defining variables: Variables such as the EKS cluster version (1.29), the VPC CIDR range, and the AWS region can be defined in a variables.tf file and reused across the rest of the Terraform configuration. The same file also configures the AWS provider.

## Variables
variable "eks_version" {
  default     = "1.29" # quoted: the EKS module expects the cluster version as a string
  description = "EKS version"
}

variable "vpc_cidr" {
  default     = "10.0.0.0/16"
  description = "CIDR range of the VPC"
}

variable "aws_region" {
  default     = "us-east-1"
  description = "AWS Region"
}

# Configure the AWS Provider
provider "aws" {
  region  = var.aws_region
  profile = "myaws" # a named AWS CLI profile; replace with your own
}
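
Because these are ordinary Terraform variables, any default can be overridden at plan or apply time without editing the file, for example:

terraform plan -var="aws_region=us-west-2"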

Custom VPC and security group: The script starts by defining a custom VPC using the terraform-aws-modules/vpc/aws module. It specifies VPC characteristics such as the CIDR block, availability zones, public and private subnets, and DNS settings. An AWS security group is then created to manage inbound and outbound traffic for the EKS worker nodes, with ingress and egress rules controlling traffic flow. Copy the script below into a vpc.tf file.

## VPC setup
data "aws_availability_zones" "available" {}

locals {
  cluster_name = "myekscluster-${random_string.suffix.result}"
}

# random suffix so repeated builds get a unique cluster name
resource "random_string" "suffix" {
  length  = 9
  special = false
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "5.7.0"

  name                 = "myvpc-for-eks"
  cidr                 = var.vpc_cidr
  azs                  = slice(data.aws_availability_zones.available.names, 0, 2) # first two AZs, matching the two subnet CIDRs below
  private_subnets      = ["10.0.1.0/24", "10.0.2.0/24"]
  public_subnets       = ["10.0.4.0/24", "10.0.5.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "owner" = "Vinod"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }

}

## Security group setup
resource "aws_security_group" "all_worker_mgmt" {
  name_prefix = "all_worker_management"
  vpc_id      = module.vpc.vpc_id
}

resource "aws_security_group_rule" "all_worker_mgmt_ingress" {
  description       = "Allow inbound traffic from eks"
  from_port         = 0
  protocol          = "-1"
  to_port           = 0
  security_group_id = aws_security_group.all_worker_mgmt.id
  type              = "ingress"
  cidr_blocks = [
    "10.0.0.0/8",
    "172.16.0.0/12",
    "192.168.0.0/16",
  ]
}

resource "aws_security_group_rule" "all_worker_mgmt_egress" {
  description       = "Allow outbound traffic to anywhere"
  from_port         = 0
  protocol          = "-1"
  security_group_id = aws_security_group.all_worker_mgmt.id
  to_port           = 0
  type              = "egress"
  cidr_blocks       = ["0.0.0.0/0"]
}
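
After an apply, you can sanity-check the new VPC from the AWS CLI (this assumes the same myaws profile used by the provider):

# assumes the myaws CLI profile; adjust to your environment
aws ec2 describe-vpcs --filters Name=tag:Name,Values=myvpc-for-eks --profile myaws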

EKS cluster: The EKS cluster is provisioned using the terraform-aws-modules/eks/aws module. Create an eks.tf file with the content below, which specifies the cluster name, version, subnet IDs, and other configuration such as the managed node group that provisions the worker nodes.

## EKS setup with autoscaling on worker nodes
module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  cluster_name    = local.cluster_name
  cluster_version = var.eks_version
  subnet_ids      = module.vpc.private_subnets

  enable_irsa = true

  tags = {
    cluster = "demo-cluster"
  }

  vpc_id = module.vpc.vpc_id

  eks_managed_node_group_defaults = {
    ami_type               = "AL2_x86_64"
    instance_types         = ["t3.medium"]
    vpc_security_group_ids = [aws_security_group.all_worker_mgmt.id]
  }

  eks_managed_node_groups = {
    node_group = {
      min_size     = 1
      max_size     = 3
      desired_size = 2
    }
  }
}
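
One caveat on autoscaling: min_size and max_size only set the bounds of the node group's Auto Scaling group; scaling in response to pod demand requires a component such as the Kubernetes Cluster Autoscaler, which is what enable_irsa = true is there to support. A minimal sketch of installing it with Helm, assuming the upstream autoscaler chart and your own cluster name:

# assumes the upstream cluster-autoscaler Helm chart; substitute your cluster name
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=<your-cluster-name> \
  --set awsRegion=us-east-1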

Outputs: Outputs are optional, but they are very useful later for referring back to the resources Terraform created, such as the cluster endpoint and cluster name. Create a file named outputs.tf with the following contents.

## Outputs
output "cluster_id" {
  description = "EKS cluster id"
  value       = module.eks.cluster_id
}

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane."
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane."
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.aws_region
}

output "oidc_provider_arn" {
  description = "ARN of OIDC Provider"
  value = module.eks.oidc_provider_arn
}
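
Once applied, each output can be read back on demand, which is handy in scripts:

terraform output cluster_endpoint
terraform output -raw region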

And finally, you need a versions.tf file with the following:

terraform {
  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">=2.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.1.0"
    }
    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~> 2.2.0"
    }
  }
}

Once you have all of these Terraform files ready, run the following command to initialize the provider plugins (AWS, random, and the others declared above) and download the modules:

terraform init

Perform a dry run with the following command to see all the changes that would be applied:

terraform plan

Then execute the following to actually create the resources on AWS:

terraform apply --auto-approve
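
When the apply finishes, you can point kubectl at the new cluster using the AWS CLI and the outputs defined earlier:

# assumes the myaws profile and the cluster_name output defined earlier
aws eks update-kubeconfig --region us-east-1 --name $(terraform output -raw cluster_name) --profile myaws
kubectl get nodes

And when you are done experimenting, tear everything down to avoid ongoing charges:

terraform destroy --auto-approve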

Conclusion

By leveraging Terraform, we have automated the creation of a custom VPC and an EKS cluster, simplifying infrastructure management and ensuring consistency across environments. This script serves as a foundation for building scalable and resilient Kubernetes clusters on AWS. Whether you’re deploying a development sandbox or a production-grade environment, Terraform provides the flexibility and control needed to meet your requirements.
