Introduction
A lot of Terraform projects reach a point where things work: an EC2 instance launches, a load balancer responds, and `terraform apply` finishes without errors.
I reached that point too.
But after spending more time with Terraform, I realized something uncomfortable: working infrastructure doesn't necessarily mean well-designed infrastructure.
So instead of moving on, I decided to stop and rebuild the same project again — this time focusing on structure, clarity, and how the code would behave if it had to grow or be maintained.
This post is a walkthrough of that process: starting from a non-modular Terraform setup and gradually refactoring it into a modular one, while dealing with the confusion, mistakes, and "aha" moments along the way.
What I Set Out to Build
The goal was simple in terms of resources, but intentional in design:
- Multiple EC2 instances running Nginx
- An Application Load Balancer distributing traffic
- Separate security groups for ALB and EC2
- Dynamic target group registration
- Terraform remote state stored in S3
- State locking using DynamoDB
- A fully modular Terraform structure
Nothing exotic — just common AWS building blocks — but wired together in a way that reflects how Terraform is actually used beyond tutorials.
Why I Did Not Start with Modules
At first, I had a non-modular version of this project.
Everything was in one place.
Resources referenced each other directly.
It worked.
But that version taught me how Terraform executes, not how Terraform should be structured.
Before modularizing, I wanted to clearly understand:
- How `count` and `for_each` really behave
- Why `count.index` can cause problems later
- How Terraform decides resource identity
- What happens when you change inputs after resources already exist
- How state is affected when multiple resources depend on each other
Only after seeing those problems firsthand did modularization start to make sense.
The First Real Shift: Stop Using count
One of the biggest changes I made was moving away from `count` and using `for_each` everywhere.
Instead of creating instances like "instance 0, 1, 2", I switched to maps keyed by names like:
`instance-1`, `instance-2`, `instance-3`
This immediately made things clearer.
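In sketch form, the difference looks like this (the resource and variable names here are illustrative, not the project's exact code):

```hcl
# With count, instances are addressed by position:
#   aws_instance.web[0], aws_instance.web[1], ...
# Removing the first name shifts every index and can force replacements.

# With for_each, instances are addressed by stable keys:
#   aws_instance.web["instance-1"], aws_instance.web["instance-2"], ...
resource "aws_instance" "web" {
  for_each = toset(["instance-1", "instance-2", "instance-3"])

  ami           = var.ami_id
  instance_type = "t3.micro"

  tags = {
    Name = each.key
  }
}
```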
With `for_each`:
- Resource names stay stable
- Outputs are predictable
- Wiring resources together becomes much easier
- You stop relying on numeric positions and start relying on intent
Once this clicked, the rest of the project design became much cleaner.
How Instance Creation is Handled
In the root module, I generate a map that looks like:
instance-name → subnet-id
Subnets are chosen dynamically using modulo logic so instances are spread across availability zones.
That map is passed into the EC2 module.
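Roughly, the root module builds that map with a `for` expression (the variable and local names below are assumptions):

```hcl
locals {
  instance_names = ["instance-1", "instance-2", "instance-3"]

  # Spread instances across subnets round-robin using modulo on the index
  instance_subnet_map = {
    for idx, name in local.instance_names :
    name => var.public_subnet_ids[idx % length(var.public_subnet_ids)]
  }
}

module "ec2" {
  source              = "./modules/ec2"
  instance_subnet_map = local.instance_subnet_map
  # ...remaining inputs
}
```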
Inside the EC2 module:
- `for_each` iterates over the map
- Each key becomes the instance `Name` tag
- Each value becomes the `subnet_id`
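In sketch form, that iteration looks like this (variable names are assumptions):

```hcl
resource "aws_instance" "this" {
  for_each = var.instance_subnet_map

  ami                    = var.ami_id
  instance_type          = var.instance_type
  key_name               = var.key_name
  subnet_id              = each.value
  vpc_security_group_ids = var.security_group_ids
  user_data              = var.user_data

  tags = {
    Name = each.key
  }
}
```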
This keeps responsibility clear:
- The root module decides what should exist
- The EC2 module decides how instances are created
That separation turned out to be very important later.
EC2 Module Design
The EC2 module does only one job: create EC2 instances.
It does not decide:
- How many instances exist
- Which subnets to use
- How traffic reaches them
Inputs include:
- AMI
- Instance type
- Key name
- Security group IDs
- A map of instance names to subnet IDs
- Optional user data
Outputs return maps:
- Instance IDs
- Private IPs
- ARNs
Returning maps instead of lists keeps instance identity intact when passing data to other modules.
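For example, a map-shaped output looks roughly like this (assuming the instance resource inside the module is named `aws_instance.this`):

```hcl
output "instance_ids" {
  description = "Map of instance name => instance ID"
  value       = { for name, inst in aws_instance.this : name => inst.id }
}

output "instance_private_ips" {
  description = "Map of instance name => private IP"
  value       = { for name, inst in aws_instance.this : name => inst.private_ip }
}
```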
Security Groups: Keeping Things Isolated
Instead of putting everything into one security group, I created:
- One security group for the ALB
- One security group for the EC2 instances
The ALB security group:
- Allows inbound HTTP from the internet
The EC2 security group:
- Allows inbound traffic only from the ALB security group
- Allows SSH only from a restricted CIDR
This setup drastically reduces exposure and makes traffic flow explicit instead of implicit.
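The rules themselves live in a module (more on that below), but flattened out they are roughly equivalent to this sketch (names and the SSH CIDR variable are illustrative; egress rules omitted for brevity):

```hcl
resource "aws_security_group" "alb" {
  name   = "alb-sg"
  vpc_id = var.vpc_id

  ingress {
    description = "HTTP from the internet"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "ec2" {
  name   = "ec2-sg"
  vpc_id = var.vpc_id

  ingress {
    description     = "HTTP only from the ALB security group"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  ingress {
    description = "SSH from a restricted CIDR"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.admin_cidr]
  }
}
```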
The security group module accepts ingress and egress rules as maps of objects, which made it flexible without being complicated.
Load Balancer and Target Registration
The load balancer module handles:
- ALB creation
- Target group creation
- Listener configuration
- Target group attachments
The important part is that the ALB module does not care how EC2 instances are created.
It simply accepts a map of instance IDs.
Inside the module, it loops over that map and attaches each instance to the target group dynamically.
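Roughly (resource and variable names are assumptions):

```hcl
resource "aws_lb_target_group_attachment" "this" {
  for_each = var.instance_ids # map of instance name => instance ID

  target_group_arn = aws_lb_target_group.this.arn
  target_id        = each.value
  port             = 80
}
```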
No hardcoded references.
No assumptions.
Just clean inputs and outputs.
Remote State and Locking
Terraform state is stored remotely in S3, with DynamoDB used for state locking.
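The backend block is short; here is a minimal sketch with placeholder bucket, key, and table names:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "terraform-project2/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```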
I intentionally included this even though I was working alone.
Why?
Because this is where Terraform usage changes completely.
Remote state with locking:
- Prevents concurrent applies
- Prevents accidental corruption
- Forces you to think about Terraform as a shared system
Once you use this setup, going back to local state feels wrong.
User Data and Verification
Each EC2 instance runs a simple user data script that installs Nginx and serves a response identifying the instance.
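The exact script isn't the point, but it is along these lines (a sketch that assumes an Ubuntu AMI; the package manager and web root differ on other distributions):

```hcl
variable "user_data" {
  type    = string
  default = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y nginx
    # Identify the instance in the response so load balancer rotation is visible
    echo "Served by $(hostname -f)" > /var/www/html/index.html
    systemctl enable --now nginx
  EOF
}
```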
This made it easy to verify:
- `user_data` execution
- Instance uniqueness
- Load balancer distribution
Seeing traffic rotate across instances confirmed that everything was wired correctly.
Challenges I Ran Into
Some things that took time to understand:
| Challenge | Description |
|---|---|
| Indexing errors | Why indexing errors happen with data sources |
| count breaks identity | Why count breaks identity when things change |
| Module outputs | Why module outputs should usually preserve structure |
| Security group references | How security group references differ from CIDR rules |
| User data behavior | Why user data doesn't rerun unless instances are replaced |
| DynamoDB locking | How DynamoDB locking behaves during apply |
Each issue forced me to slow down and actually read what Terraform was doing instead of guessing.
What This Project Represents for Me
This project wasn't about adding more AWS services.
It was about:
- Writing Terraform that is readable
- Making dependencies explicit
- Reducing assumptions
- Designing for change instead of just "apply success"
The biggest shift wasn't technical — it was mental.
I stopped asking:
"Does this work?"
And started asking:
"Does this make sense if I come back in three months?"
What's Next
Some natural extensions to this setup:
- Auto Scaling Groups
- HTTPS with ACM
- Monitoring and alarms
- CI/CD for Terraform
- Environment separation
- ECS or EKS later on
But those only make sense once the foundation is solid.
Final Thought
Terraform feels difficult when you treat it like a scripting tool.
It becomes much clearer when you treat it like a design tool.
Building something twice — once messy, once structured — taught me more than any single tutorial ever could.
If you're learning Terraform and feel stuck, my honest advice is:
Build it once just to make it work.
Then rebuild it to make it right.
That's where the learning actually happens.
Project Repository
GitHub: terraform-project2-moudlarized
Connect with Me
If you found this helpful, consider giving the repository a star!