Day 9 of the 30-Day Terraform Challenge — and today I learned the hard-won lessons that separate "I know how to write a module" from "I can safely share modules with a team."
Yesterday I built my first module. Today I learned why modules break in production, how to version them like real software, and why pinning versions is the difference between "it works" and "it works every time, for everyone."
## The Problem: Modules Aren't Magic
Yesterday's module worked perfectly when I called it from a local path. But the moment I tried to share it? Things got messy.
Three gotchas caught me off guard:
### Gotcha 1: File Paths Lie to You

I had a user data script in my module:

```hcl
user_data = file("user-data.sh")
```

Worked fine when testing locally. Then I called the module from a different directory:

```
Error: Error reading file "user-data.sh": no such file or directory
```

The problem: `file()` resolves relative paths against the current working directory (wherever you run `terraform`), not against the module's own directory.

The fix: always prefix with `${path.module}`:

```hcl
user_data = file("${path.module}/user-data.sh")
```

Now the path resolves correctly no matter who calls the module, or from where.
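The same rule applies to `templatefile()` and any other function that reads files from inside the module. A sketch (the template name and `var.server_port` are my own illustration, not from the module above):

```hcl
# path.module = the directory of the module itself; safe no matter where
#               the caller runs terraform from.
# path.root   = the directory of the root configuration (the caller),
#               which is usually NOT what you want inside a shared module.
user_data = templatefile("${path.module}/user-data.sh.tpl", {
  server_port = var.server_port # hypothetical input variable
})
```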
### Gotcha 2: Inline Blocks vs Separate Resources

My security group had inline ingress rules:

```hcl
resource "aws_security_group" "instance" {
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

This worked fine. But when someone using my module wanted to add another rule, they couldn't: the AWS provider doesn't support mixing inline `ingress` blocks with standalone `aws_security_group_rule` resources (they fight over the rule set). The module controlled everything.
The fix: use separate `aws_security_group_rule` resources:

```hcl
resource "aws_security_group" "instance" {
  # No inline rules!
}

resource "aws_security_group_rule" "allow_http" {
  type              = "ingress"
  security_group_id = aws_security_group.instance.id
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}
```

Now callers can add their own rules without modifying the module. The module provides a foundation; they add the customization.
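For that to work, the module has to export the security group's ID. A minimal sketch (the output name `security_group_id` and the HTTPS rule are my own illustration):

```hcl
# Inside the module: expose the security group so callers can extend it.
output "security_group_id" {
  description = "ID of the instance security group (attach extra rules here)"
  value       = aws_security_group.instance.id
}

# In the caller's configuration: layer an extra rule on top of the module's.
resource "aws_security_group_rule" "allow_https" {
  type              = "ingress"
  security_group_id = module.webserver_cluster.security_group_id
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}
```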
### Gotcha 3: Module Outputs Create Hidden Dependencies

I had a resource that needed to wait for the module:

```hcl
resource "aws_instance" "monitoring" {
  # ...
  depends_on = [module.webserver_cluster]
}
```

Looks fine, right? Wrong.

This `depends_on` creates a dependency on every resource inside the module. My monitoring instance can't be created until the entire module finishes, and any change inside the module can ripple into my plan, even changes that have nothing to do with monitoring.
The fix: drop `depends_on` and reference the specific output in an argument, so Terraform infers only the dependency I actually need. (Terraform rejects output attributes inside `depends_on` anyway; it only accepts whole resources or modules there.) Here I feed the ALB DNS name into the instance's user data; the template file name is illustrative:

```hcl
resource "aws_instance" "monitoring" {
  # ...
  user_data = templatefile("${path.module}/monitoring-setup.sh.tpl", {
    alb_dns_name = module.webserver_cluster.alb_dns_name
  })
}
```

Now my monitoring instance depends only on the resources behind the `alb_dns_name` output, not on a tag change on some unrelated instance in the module.
## The Solution: Version Your Modules
Once I fixed the gotchas, I needed to share my module safely. The answer: versioning.
### Step 1: Push to GitHub

```shell
git init
git add .
git commit -m "Initial module release"
git remote add origin https://github.com/123Origami/terraform-aws-webserver-cluster.git
git push origin main
```
### Step 2: Tag a Version

```shell
git tag -a "v0.0.1" -m "First release: Basic web server cluster"
git push origin main --tags
```

Now my module is versioned! Anyone can use it with:

```hcl
module "webserver_cluster" {
  source = "github.com/123Origami/terraform-aws-webserver-cluster?ref=v0.0.1"
}
```
### Step 3: Make a Change, Tag a New Version

I added a new feature (`custom_user_data`), committed it, and:

```shell
git tag -a "v0.0.2" -m "Added custom user data support"
git push origin main --tags
```
Now I have two versions:
- v0.0.1 — stable, production-ready
- v0.0.2 — new feature, needs testing
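You can always check which versions exist before pinning one. A self-contained sketch using a throwaway repo (in the real module repo, `git tag -l` alone does it, and `git ls-remote --tags origin` works without even cloning):

```shell
# Throwaway repo simulating the two releases above.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Initial module release"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    tag -a v0.0.1 -m "First release"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Added custom user data support"
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    tag -a v0.0.2 -m "Added custom user data support"

# List the available versions.
git -C "$repo" tag -l   # prints v0.0.1 and v0.0.2
```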
## The Pattern: Dev Tests, Production Pins
This is where it gets good.
Dev environment uses v0.0.2:

```hcl
# live/dev/services/webserver-cluster/main.tf
module "webserver_cluster" {
  source = "github.com/123Origami/terraform-aws-webserver-cluster?ref=v0.0.2"

  cluster_name     = "webservers-dev"
  instance_type    = "t3.micro"
  custom_user_data = file("${path.module}/dev-setup.sh")
}
```
Production stays on v0.0.1:

```hcl
# live/production/services/webserver-cluster/main.tf
module "webserver_cluster" {
  source = "github.com/123Origami/terraform-aws-webserver-cluster?ref=v0.0.1"

  cluster_name  = "webservers-production"
  instance_type = "t3.medium"
}
```
Why this pattern:
- Dev tests the new version immediately
- Production stays stable until I validate v0.0.2 in dev
- When I'm confident, I update production to v0.0.2
- If something breaks, I roll back to v0.0.1 in seconds
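That last bullet is literal: the rollback is a one-line edit to the pinned `ref`, followed by `terraform init` to re-download the old version. A sketch (the config file is created on the spot so the commands are self-contained; note `sed -i` here is GNU syntax, and the terraform commands are left commented out):

```shell
# Stand-in for the production config (illustrative only).
mkdir -p live/production/services/webserver-cluster
cat > live/production/services/webserver-cluster/main.tf <<'EOF'
module "webserver_cluster" {
  source = "github.com/123Origami/terraform-aws-webserver-cluster?ref=v0.0.2"
}
EOF

# Roll back: point the ref at the last known-good tag...
sed -i 's/ref=v0.0.2/ref=v0.0.1/' live/production/services/webserver-cluster/main.tf

# ...then re-initialize so Terraform fetches v0.0.1 again, and apply:
#   terraform -chdir=live/production/services/webserver-cluster init
#   terraform -chdir=live/production/services/webserver-cluster apply

grep 'ref=' live/production/services/webserver-cluster/main.tf
```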
## The `terraform init` Magic

When I run `terraform init` in dev:

```
Initializing modules...
Downloading git::https://github.com/123Origami/terraform-aws-webserver-cluster.git?ref=v0.0.2
```

Production downloads v0.0.1:

```
Initializing modules...
Downloading git::https://github.com/123Origami/terraform-aws-webserver-cluster.git?ref=v0.0.1
```
Same module. Different versions. Different environments. This is how real teams work.
## Why Version Pinning Is Non-Negotiable
Imagine this nightmare:
Engineer A runs `terraform apply` at 9:00 AM. The unpinned source downloads whatever is on `main`, which at that moment matches v0.0.1. Everything works.

Engineer B runs `terraform apply` at 10:00 AM. By now `main` has moved to v0.0.2 (new feature, breaking change). The two applies ran different code, and the infrastructure is now inconsistent.
No one knows why. "It worked on my machine" becomes "it worked in my apply."
Without version pinning, you're gambling.
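The entire difference is one query parameter on the source URL:

```hcl
# Unpinned: tracks the default branch, so two applies can get different code.
module "webserver_cluster_unpinned" {
  source = "github.com/123Origami/terraform-aws-webserver-cluster"
}

# Pinned: every apply, on every machine, gets exactly the v0.0.1 tag.
module "webserver_cluster_pinned" {
  source = "github.com/123Origami/terraform-aws-webserver-cluster?ref=v0.0.1"
}
```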
## The Module README: Your Contract with Users
Every shared module needs a README. It's not optional. Here's what I included:
````markdown
# AWS Web Server Cluster Module

## What it does

Creates a highly available web server cluster with an ALB and an ASG.

## Usage

```hcl
module "webserver_cluster" {
  source = "github.com/your-username/terraform-aws-webserver-cluster?ref=v0.0.1"

  cluster_name = "my-app"
}
```

## Inputs

| Name | Type | Default | Required |
|------|------|---------|----------|
| cluster_name | string | - | yes |
| instance_type | string | t3.micro | no |
| min_size | number | 1 | no |

## Outputs

| Name | Description |
|------|-------------|
| alb_url | URL to access the cluster |
| asg_name | Name of the Auto Scaling Group |

## Known Gotchas

- Use `${path.module}` for any file paths inside the module
- Security group rules are separate resources for flexibility
````
A good README means users don't have to read your code to use it.
## What I Learned
The gotchas are subtle but deadly. File paths, dependency chains, and inline resources — all of them work fine until they don't. Fix them once, in the module, and everyone benefits.
Versioning is how modules become trustworthy. Without a version pin, you're sharing a moving target. With versioning, you're sharing a contract.
The dev-test-prod pattern is simple but powerful. Test new versions in dev. Let them bake. When they're stable, roll to staging, then production. If something breaks, roll back one version.
## The Bottom Line
A module without versioning is just a folder of code. A module with versioning is a tool your whole team can rely on.
Today I turned my web server module from a personal convenience into something I could share with anyone, anywhere, with confidence.
Version your modules. Pin your versions. Test in dev, trust in prod.