Day 8 of the 30-Day Terraform Challenge — and today I learned the secret that separates people who "know Terraform" from people who actually build infrastructure at scale.
Modules.
You know that feeling when you've written the same security group configuration three times? Or when you're about to copy-paste your entire web server cluster for the fifth environment? That's the feeling modules were made to eliminate.
## The Problem: Copy-Paste Engineering
Let me show you what I was doing before today:
```hcl
# dev/main.tf
resource "aws_lb" "web" {
  name = "dev-web-alb"
  # ... 200 more lines ...
}

# staging/main.tf
resource "aws_lb" "web" {
  name = "staging-web-alb"
  # ... THE SAME 200 lines, different name ...
}

# production/main.tf
resource "aws_lb" "web" {
  name = "prod-web-alb"
  # ... 200 lines, again ...
}
```
This is what we call Copy-Paste Engineering. It's fast. It works. And it's a nightmare to maintain.
Change the health check path? That's 3 files. Fix a security group rule? 3 files. Update the AMI filter? You guessed it — 3 files. And if you forget one? Now dev and production are different, and nobody knows why.
There had to be a better way.
## The Solution: Modules
A module is just a folder of Terraform code that you can call from other Terraform configurations. That's it. No magic. No special syntax.
But that simple concept changes everything.
## The Module Structure
Here's what I built today:
```
modules/
└── services/
    └── webserver-cluster/
        ├── main.tf       # The infrastructure (ALB, ASG, SG)
        ├── variables.tf  # What you can configure
        ├── outputs.tf    # What you get back
        └── README.md     # How to use it
```
### The Variables (What You Can Configure)
```hcl
variable "cluster_name" {
  description = "Name for all cluster resources"
  type        = string
  # No default — caller MUST provide this
}

variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t3.micro"
}

variable "min_size" {
  description = "Minimum instances in ASG"
  type        = number
  default     = 1
}

variable "max_size" {
  description = "Maximum instances in ASG"
  type        = number
  default     = 5
}

variable "environment" {
  description = "Environment name (dev, staging, prod)"
  type        = string
  default     = "dev"
}
```
Every configurable aspect of the infrastructure is an input variable. Nothing is hardcoded.
### The Outputs (What You Get Back)
```hcl
output "alb_dns_name" {
  description = "DNS name of the load balancer"
  value       = aws_lb.web.dns_name
}

output "alb_url" {
  description = "Full URL to access the cluster"
  value       = "http://${aws_lb.web.dns_name}"
}

output "asg_name" {
  description = "Name of the Auto Scaling Group"
  value       = aws_autoscaling_group.web.name
}
```
Callers get back exactly what they need — no more, no less.
### The Main File (The Infrastructure)
Inside main.tf is all the code I've been writing all week. But now it uses variables instead of hardcoded values:
```hcl
resource "aws_lb" "web" {
  name            = "${var.cluster_name}-alb" # No hardcoding!
  security_groups = [aws_security_group.alb.id]
  subnets         = data.aws_subnets.default.ids
}
```
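The rest of `main.tf` follows the same pattern. Here's a condensed sketch of how the launch template and Auto Scaling Group might consume the remaining variables — the resource names and AMI data source are my illustration, not the module's exact code:

```hcl
resource "aws_launch_template" "web" {
  name_prefix   = "${var.cluster_name}-"
  image_id      = data.aws_ami.amazon_linux.id # internal lookup, not exposed
  instance_type = var.instance_type            # t3.micro in dev, t3.medium in prod
}

resource "aws_autoscaling_group" "web" {
  name                = "${var.cluster_name}-asg"
  min_size            = var.min_size
  max_size            = var.max_size
  target_group_arns   = [aws_lb_target_group.web.arn]
  vpc_zone_identifier = data.aws_subnets.default.ids

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  tag {
    key                 = "Environment"
    value               = var.environment
    propagate_at_launch = true
  }
}
```

Every knob the caller can turn flows through `var.*`; everything else is an internal decision of the module.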
## The Magic: Calling the Module
Here's where it gets beautiful. For dev:
```hcl
# live/dev/services/webserver-cluster/main.tf
module "webserver_cluster" {
  source = "../../../../modules/services/webserver-cluster"

  cluster_name  = "webservers-dev"
  instance_type = "t3.micro"
  min_size      = 1
  max_size      = 2
  environment   = "dev"
}

output "alb_url" {
  value = module.webserver_cluster.alb_url
}
```
For production:
```hcl
# live/production/services/webserver-cluster/main.tf
module "webserver_cluster" {
  source = "../../../../modules/services/webserver-cluster"

  cluster_name  = "webservers-production"
  instance_type = "t3.medium" # Bigger!
  min_size      = 2
  max_size      = 5
  environment   = "production"
}

output "alb_url" {
  value = module.webserver_cluster.alb_url
}
```
Same module. Different inputs. Zero code duplication.
When I need to update the health check path? I change it in ONE file — the module. Every environment gets the update automatically.
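That health-check setting lives in one place inside the module. A sketch of what the target group might look like — the resource name and path value are illustrative, not the module's exact code:

```hcl
resource "aws_lb_target_group" "web" {
  name     = "${var.cluster_name}-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = data.aws_vpc.default.id

  health_check {
    path     = "/health" # change it here, and every environment picks it up
    matcher  = "200"
    interval = 15
  }
}
```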
## The Moment I Knew It Worked
I deployed dev first:
```shell
$ cd live/dev/services/webserver-cluster
$ terraform init
$ terraform apply

Apply complete! Outputs:

alb_url = "http://webservers-dev-alb-xxxxx.elb.amazonaws.com"
```
I opened the URL. There it was — my web page with "webservers-dev" on top.
Then I looked at production (didn't deploy, just previewed):
```shell
$ cd live/production/services/webserver-cluster
$ terraform plan

# Notice the instance type in the plan:
#   module.webserver_cluster.aws_launch_template.web
#     instance_type = "t3.medium"   # Dev used t3.micro!
```
The same code, producing different infrastructure. This is how real teams work.
## Module Design Decisions I Had to Make
### What to Expose vs. What to Hide
I chose to expose:
- Cluster name — caller must provide it (no default)
- Instance type — different sizes for different environments
- Min/max sizes — dev can run on a single instance, production needs more headroom
- Environment — for tagging and naming
I kept internal:
- AMI lookup — everyone gets the latest Amazon Linux 2
- VPC selection — always use the default VPC
- Security group structure — always the same pattern
The rule: Expose what changes between environments. Hide what stays the same.
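The hidden pieces live in the module as data sources with no corresponding input variables, so callers can't override them. For example, the internal lookups might look like this (a sketch; the AMI filter follows AWS's published naming pattern for Amazon Linux 2):

```hcl
# Internal to the module — every environment gets the latest Amazon Linux 2.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Internal VPC selection — always the default VPC.
data "aws_vpc" "default" {
  default = true
}
```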
### What Happens If Someone Forgets a Required Variable?
```hcl
module "broken" {
  source = "../../../../modules/services/webserver-cluster"
  # No cluster_name provided!
}
```

```
Error: Missing required argument

The argument "cluster_name" is required, but no definition was found.
```
Terraform catches it. The caller knows exactly what they missed. This is why required variables matter: a forgotten setting becomes a loud, immediate error instead of a silently misconfigured deployment.
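You can go a step further and validate values, not just require them. A sketch using Terraform's `validation` block (available since Terraform 0.13) — this constraint is my addition, not part of the module above:

```hcl
variable "environment" {
  description = "Environment name (dev, staging, prod)"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "environment must be one of: dev, staging, prod."
  }
}
```

Now `environment = "porduction"` fails at plan time instead of producing oddly named resources.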
## Chapter 4 Learnings
**Root Module vs. Child Module:** The configuration you run is the root module. Any module you call is a child module. There's no technical difference — just who's calling whom.
**What `terraform init` Does:** When you add a new module `source`, `terraform init` downloads the module code into `.terraform/modules/`. It doesn't apply anything — it just makes the code available locally.
**Module Outputs in State:** Outputs are recorded in the state file. The `alb_url` I re-exported from my root module shows up in `terraform.tfstate` like this:
```json
"outputs": {
  "alb_url": {
    "value": "http://webservers-dev-alb-xxxxx.elb.amazonaws.com"
  }
}
```
This is how other configurations can read outputs from your module.
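For example, another configuration could read that output with a `terraform_remote_state` data source — the local backend path here is illustrative:

```hcl
data "terraform_remote_state" "web" {
  backend = "local"

  config = {
    path = "../dev/services/webserver-cluster/terraform.tfstate"
  }
}

# Reference the other configuration's output:
#   data.terraform_remote_state.web.outputs.alb_url
```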
## Challenges I Hit (And How I Fixed Them)
### Challenge 1: Relative Paths

I kept getting `source path does not exist` errors. The problem? I was counting wrong.

From `live/dev/services/webserver-cluster/` to `modules/services/webserver-cluster/`:
- `../` = back to `live/dev/services/`
- `../../` = back to `live/dev/`
- `../../../` = back to `live/`
- `../../../../` = back to the project root, then into `modules/`
**Fix:** Print your current directory and count each `../` carefully!
### Challenge 2: Variable Type Mismatch

I passed `min_size = "2"` (a string) but my variable expected a number. Terraform gave me:
```
Error: Incorrect attribute value type

Inappropriate value for attribute "min_size": number required.
```
**Fix:** Always use the right type — numbers without quotes, lists with brackets.
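As a quick reference, here's how the types look when passing values to a module. The commented-out arguments are hypothetical variables, shown only to illustrate list and map syntax:

```hcl
module "webserver_cluster" {
  source = "../../../../modules/services/webserver-cluster"

  cluster_name = "webservers-dev" # string: quoted
  min_size     = 2                # number: no quotes
  max_size     = 5

  # Hypothetical variables, illustrating the other common types:
  # subnet_ids  = ["subnet-a", "subnet-b"]           # list(string): brackets
  # common_tags = { Team = "platform", Env = "dev" } # map(string): braces
}
```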
### Challenge 3: Missing Outputs

My module created an ALB, but I forgot to output its DNS name. The caller couldn't access anything!

**Fix:** Ask yourself, "What does someone using this module need to know?" Output those things.
## The Difference Between a Good Module and a Painful Module
| Good Module | Painful Module |
|---|---|
| Clear, specific variable names | Vague names like `var1`, `var2` |
| Every variable has a description | No descriptions — guess what it does |
| Sensible defaults for optional values | Everything is required |
| Useful outputs (DNS names, IDs) | No outputs — caller can't get anything |
| Has a README | "Just read the code" |
| One clear purpose | Tries to do everything |
## Best Practices I Learned
1. **Use relative paths in the `source` parameter** — absolute paths break when other people clone your repo.
2. **Always add descriptions to variables** — future you will thank present you.
3. **Provide sensible defaults** — if 90% of use cases use `t3.micro`, make that the default.
4. **Output everything a caller might need** — DNS names, ARNs, IDs, URLs.
5. **Version your modules** — when you change a module, bump the version so callers upgrade intentionally.
6. **Keep modules focused** — one module = one responsibility. Don't create an "everything" module.
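With modules in a separate Git repository, versioning means pinning a tag in the `source` URL. The repo path and tag below are placeholders:

```hcl
module "webserver_cluster" {
  # The double slash separates the repo from the subdirectory inside it;
  # ?ref= pins a Git tag so callers upgrade deliberately, not accidentally.
  source = "github.com/acme/terraform-modules//services/webserver-cluster?ref=v0.1.0"

  cluster_name = "webservers-production"
}
```

Dev can point at `v0.2.0` to test a module change while production stays safely on `v0.1.0`.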
## The Bottom Line
Modules are how Terraform scales from "my infrastructure" to "our infrastructure."
Before modules, I had 200 lines of copy-pasted code per environment. After modules, I have one module and 20 lines of configuration per environment.
When I need to update the health check path, I change one file — not three. When I need to add a new environment, I copy the calling configuration — not 200 lines of infrastructure.
Modules don't just save time. They save mistakes. And when you're managing production infrastructure, that's worth everything.
P.S. I spent 5 days writing the same infrastructure over and over. Today I spent 2 hours writing it once. The math is clear: modules pay for themselves the second you need a second environment.