Anil KUMAR
Day 20: Terraform Custom Modules for EKS

Welcome to Day 20 of the 30 Days of AWS Terraform challenge by Piyush Sachdeva. In this blog, we will understand what modules are, when to use them, and their real-life use cases in Terraform.

Modules:

Modules are reusable pieces of code that encapsulate the complexity of a Terraform configuration so you can reuse it in every situation where it applies.

In other words, modules are self-contained packages of configuration code that you create yourself to group related resources together. Think of them as custom functions for your infrastructure: you define the logic once, and then call it multiple times with different parameters. In other programming languages we have functions; in Terraform, modules play that role.
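For instance, here is a minimal sketch of calling the same module twice with different parameters (the module path and variable name are illustrative, and we assume every other variable in the module has a default):

# Same module, two environments, different inputs
module "dev_vpc" {
  source   = "./modules/vpc"
  vpc_cidr = "10.0.0.0/16"
}

module "prod_vpc" {
  source   = "./modules/vpc"
  vpc_cidr = "10.10.0.0/16"
}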

Use Cases of Modules

Abstraction - You want to hide 50 lines of complex networking code behind a simple 5-line block.

Reduced Complexity - A custom module lets you expose only the few variables your team actually cares about.

Consistency - Ensuring every EKS cluster in your company is built exactly the same way across Dev, Staging, and Prod.

Types of Modules:

Modules are further divided into three types based on who builds and maintains them and how they are distributed.

Public Modules:

Public modules are community-driven modules hosted on the Terraform Registry. They are publicly accessible and free for anyone to use without any restrictions.

Maintained by: Individual contributors, the open-source community, or smaller organizations.

Key Feature: Great for common tasks (e.g., setting up a basic S3 bucket or a generic VPC).

Risk: Quality and maintenance can vary; always check the download count and "stars" before using.
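As a quick illustration, consuming a public module from the registry looks roughly like this (terraform-aws-modules/vpc/aws is a real registry module; the inputs shown are a minimal subset and the version constraint is just an example):

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin a version range; unpinned public modules can change under you

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}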

Partner Modules:

Partner modules are maintained jointly by HashiCorp and a technology partner, so both the partner and HashiCorp stand behind them.

These are a subset of public modules but carry a "Verified" or "Partner" badge on the Terraform Registry.

Maintained by: Major technology companies (like AWS, Azure, Google Cloud, or HashiCorp itself) in partnership with HashiCorp.

Key Feature: These undergo a rigorous verification process to ensure they follow best practices and are actively maintained.

Use Case: Ideal for mission-critical infrastructure where stability and official support are required.

Custom Modules:

Custom modules are the modules you (or your team) develop yourselves.
We can tailor these modules to the exact requirements we have and reuse them across our own use cases.

These are modules created internally by you or your organization to meet specific business needs or security standards.

Maintained by: Your internal DevOps or Platform teams.

Key Feature: They often wrap public/partner modules to "hard-code" organizational standards (e.g., always enabling encryption or specific tagging).

Source: Usually stored in a Private Module Registry (via HCP Terraform/Enterprise) or directly in a private Git repository.
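For example, a custom module kept in a private Git repository can be consumed like this (the repository URL, subdirectory, and tag are hypothetical):

module "vpc" {
  # "//vpc" selects a subdirectory of the repo, "?ref=" pins a tag or commit
  source = "git::https://github.com/your-org/terraform-modules.git//vpc?ref=v1.2.0"

  vpc_cidr = "10.0.0.0/16"
}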

Module Workflow Architecture:

In today's blog, we will see how this Terraform configuration demonstrates custom module creation for an EKS cluster deployment.

Generally, to create an EKS cluster we mainly need a VPC for networking, IAM for the cluster and worker-node roles, EKS for the control plane (master) and worker node groups, and EC2 for the underlying worker instances.

Structure:

modules/
├── vpc/              # Custom VPC module
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── iam/              # Custom IAM roles module
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── eks/              # Custom EKS cluster module
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   └── templates/
│       └── userdata.sh
└── secrets-manager/  # Custom Secrets Manager module
    ├── main.tf
    ├── variables.tf
    └── outputs.tf

Modules Overview:

1. VPC Module (modules/vpc/):

Creates networking infrastructure:

  • VPC with custom CIDR
  • Public subnets (3 AZs) with Internet Gateway
  • Private subnets (3 AZs) with NAT Gateway
  • Route tables and associations
  • EKS-required subnet tags

2. IAM Module (modules/iam/)

Creates IAM resources:

  • EKS cluster IAM role with policies
  • Node group IAM role with policies
  • OIDC provider for IRSA (IAM Roles for Service Accounts)
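As a rough sketch of the IRSA piece (assuming the cluster's OIDC issuer URL is passed into the module as a variable named cluster_oidc_issuer_url, and that the tls provider is available):

# Fetch the issuer's TLS certificate to derive its thumbprint
data "tls_certificate" "eks" {
  url = var.cluster_oidc_issuer_url
}

# Register the cluster's OIDC issuer as an identity provider in IAM
resource "aws_iam_openid_connect_provider" "eks" {
  url             = var.cluster_oidc_issuer_url
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
}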

3. EKS Module (modules/eks/)

Creates EKS cluster resources:

  • EKS control plane with KMS encryption
  • CloudWatch log group
  • Security groups (cluster + nodes)
  • EKS addons (CoreDNS, kube-proxy, VPC CNI)
  • Managed node groups with launch templates
  • Customizable node group configurations
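A simplified sketch of the control-plane resource inside this module might look like the following (the variable names mirror the ones used later in this post; the KMS key aws_kms_key.eks is assumed to be defined elsewhere in the module):

resource "aws_eks_cluster" "main" {
  name     = var.cluster_name
  version  = var.kubernetes_version
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids              = var.subnet_ids
    endpoint_public_access  = var.endpoint_public_access
    endpoint_private_access = var.endpoint_private_access
  }

  # Envelope-encrypt Kubernetes secrets with a module-managed KMS key
  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key.eks.arn
    }
  }

  # Send control-plane logs to CloudWatch
  enabled_cluster_log_types = ["api", "audit", "authenticator"]
}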

4. Secrets Manager Module (modules/secrets-manager/)

Creates secrets management resources:

  • KMS key for secrets encryption
  • Database credentials secret (optional)
  • API keys secret (optional)
  • Application config secret (optional)
  • IAM policy for reading secrets
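A trimmed-down sketch of this module could look like the following (create_db_secret and name_prefix are illustrative variable names):

# Dedicated KMS key for encrypting secrets
resource "aws_kms_key" "secrets" {
  description         = "KMS key for application secrets"
  enable_key_rotation = true
}

# Optional secret, toggled on or off by a boolean variable
resource "aws_secretsmanager_secret" "db_credentials" {
  count      = var.create_db_secret ? 1 : 0
  name       = "${var.name_prefix}-db-credentials"
  kms_key_id = aws_kms_key.secrets.arn
}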

The setup includes:

  • VPC: Custom VPC with public and private subnets across 3 availability zones
  • EKS Cluster: Managed Kubernetes cluster running the latest version
  • Node Groups: General-purpose node group (on-demand instances) plus a Spot instance node group for cost optimization
  • Add-ons: CoreDNS, kube-proxy, VPC CNI, and EBS CSI driver
  • IRSA: IAM Roles for Service Accounts enabled for fine-grained permissions
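For the Spot node group specifically, the relevant knob is capacity_type on the managed node group resource; a minimal sketch (instance types and sizes are examples):

resource "aws_eks_node_group" "spot" {
  cluster_name    = aws_eks_cluster.main.name
  node_group_name = "spot"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.subnet_ids

  capacity_type  = "SPOT"                     # vs. "ON_DEMAND" for the general-purpose group
  instance_types = ["t3.large", "t3a.large"]  # offering multiple types improves Spot availability

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 5
  }
}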

Think of this structure as Parent and Child Modules.

Inside each module, be it the parent (root) module or a child module such as VPC, IAM, or EKS, we have separate Terraform files like main.tf, variables.tf, outputs.tf, providers.tf, and so on.

variables.tf and outputs.tf play a key role, as they define how the root and child modules communicate with each other.

The root module's main.tf also calls each child module's main.tf, passing values in as arguments, much like calling a function with parameters.

Let's deep dive into the VPC module now.

Below is the VPC module block from the root module's main.tf:

# Custom VPC Module
module "vpc" {
  source = "./modules/vpc"

  name_prefix     = var.cluster_name
  vpc_cidr        = var.vpc_cidr
  azs             = slice(data.aws_availability_zones.available.names, 0, 3)
  private_subnets = var.private_subnets
  public_subnets  = var.public_subnets

  enable_nat_gateway = true
  single_nat_gateway = true

  # Required tags for EKS
  public_subnet_tags = {
    "kubernetes.io/role/elb"                    = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb"           = "1"
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
  }

  tags = {
    Environment = var.environment
    Terraform   = "true"
    Project     = "EKS-Day20"
  }
}

Instead of writing out every VPC resource, the root configuration (where you run terraform apply) contains just the block above.

Here you can see we have not written the VPC configuration itself; instead, the source argument points to the custom module we wrote for the VPC:

source = "./modules/vpc"

Let's take one argument as an example and see how parameters are passed between the root and child modules.

  azs = slice(data.aws_availability_zones.available.names, 0, 3)

In the above line, you can see that we take the availability zones from a data source instead of hardcoding them, and pass the first three of them to a module input named azs.
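For reference, the data source typically looks like this in the root module (a standard declaration; the state filter is optional):

data "aws_availability_zones" "available" {
  state = "available"
}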

Now, to use azs inside the VPC module, we declare a variable named azs in the module's variables.tf and reference it in the module's main.tf, so the module knows its data type and receives its value from the root module.

variable "azs" {
  description = "List of availability zones"
  type        = list(string)
}
And here is how modules/vpc/main.tf consumes it:
# VPC
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true
}

# Public Subnets
resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
}

# Private Subnets
resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.azs[count.index]
}

So every value the root module passes into a child module must also be declared as a variable in that child module's variables.tf.
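In other words, the same input exists on both sides; a sketch using vpc_cidr (the default value is illustrative):

# Root module: variables.tf
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# Child module: modules/vpc/variables.tf
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
}

The root's module block bridges the two with vpc_cidr = var.vpc_cidr, as we saw earlier.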

These custom modules do not talk to each other directly; the VPC module does not communicate with the EKS module or vice versa. Instead, each child module communicates with the root module, and the root module wires all the child modules together.

Also, if we want to pass a value from a child module up to the root module, we declare it as an output in the child module and then reference that output in the root module's main.tf.

We can also create dependencies between modules:

# Custom EKS Module
module "eks" {
  source = "./modules/eks"

  cluster_name       = var.cluster_name
  kubernetes_version = var.kubernetes_version
  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.private_subnets

  cluster_role_arn = module.iam.cluster_role_arn
  node_role_arn    = module.iam.node_group_role_arn

  endpoint_public_access  = true
  endpoint_private_access = true
  public_access_cidrs     = ["0.0.0.0/0"]

  enable_irsa = true

  # ... remaining arguments omitted
}

In the above block, vpc_id and subnet_ids reference outputs of the VPC module, so the EKS module will only be created after the VPC module has been fully created.
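Terraform infers this ordering automatically from the references, but if you ever need to force an ordering that is not implied by any reference, module blocks also accept an explicit depends_on (sketch):

module "eks" {
  source     = "./modules/eks"
  depends_on = [module.vpc, module.iam]

  # ... arguments as shown above
}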

You can also see that vpc_id and subnet_ids reference the VPC module, where those values are declared in the VPC module's outputs.tf:

output "vpc_id" {
  description = "The ID of the VPC"
  value       = aws_vpc.main.id
}

output "vpc_cidr_block" {
  description = "The CIDR block of the VPC"
  value       = aws_vpc.main.cidr_block
}

output "public_subnets" {
  description = "List of IDs of public subnets"
  value       = aws_subnet.public[*].id
}

output "private_subnets" {
  description = "List of IDs of private subnets"
  value       = aws_subnet.private[*].id
}

So the values declared in the VPC module's outputs.tf are referenced in the root module's main.tf and passed on to other child modules through their variables.tf.
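The same pattern works one level up: the root module can re-export a child module's output so it is visible after terraform apply. A sketch, assuming the EKS module exposes a cluster_endpoint output:

# Root module: outputs.tf
output "cluster_endpoint" {
  description = "EKS cluster API endpoint"
  value       = module.eks.cluster_endpoint
}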

Hope you understand the flow.

Execution:

terraform init
terraform plan
terraform apply

By executing the above commands, you will see the infrastructure getting created: the VPC and IAM modules are created in parallel first, followed by the EKS module.

Conclusion:

Custom Terraform modules transform infrastructure from scripts into systems. For EKS in particular, modular design is the difference between a demo cluster and a production-grade platform.

This concludes Day 20 of custom modules for EKS Cluster. See you in the next blog.
