Day 26 of my Terraform journey moved from static hosting to dynamic compute.
Yesterday, I deployed a static website on S3. Today, I built a scalable web application stack on AWS using:
- EC2 Launch Template
- Application Load Balancer
- Auto Scaling Group
- CloudWatch alarms
- scaling policies
- reusable Terraform modules
- remote state with S3 and DynamoDB
GitHub reference:
https://github.com/mary20205090/30-day-Terraform-Challenge/tree/main/day_26
Project Structure
For Day 26, I separated the infrastructure into three focused modules:
day26-scalable-web-app/
├── modules/
│   ├── ec2/
│   ├── alb/
│   └── asg/
├── envs/
│   └── dev/
├── bootstrap/
├── backend.tf
└── provider.tf
The goal was not just to make the app work.
The goal was to make the design reusable, understandable, and safe to change.
Why Three Modules Instead of One?
I split the project into three modules because each part has a different responsibility.
The ec2 module owns the compute template:
- Launch Template
- instance security group
- user data script
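A minimal sketch of what the ec2 module could contain. Resource names, variable names, and the user data path are illustrative, not the repo's exact code:

```hcl
# modules/ec2/main.tf (illustrative sketch)
resource "aws_security_group" "web" {
  name_prefix = "day26-web-instance-"
  vpc_id      = var.vpc_id

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [var.alb_security_group_id] # only the ALB may reach instances
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_launch_template" "web" {
  name_prefix            = "day26-web-"
  image_id               = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.web.id]
  user_data              = base64encode(file("${path.module}/user_data.sh"))
}

output "launch_template_id" {
  value = aws_launch_template.web.id
}

output "launch_template_version" {
  value = aws_launch_template.web.latest_version
}
```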
The alb module owns traffic entry:
- Application Load Balancer
- target group
- listener
- ALB security group
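A compact sketch of the alb module's core resources, assuming public subnets and a plain HTTP listener (names are placeholders):

```hcl
# modules/alb/main.tf (illustrative sketch)
resource "aws_lb" "this" {
  name               = "day26-web-alb-${var.environment}"
  load_balancer_type = "application"
  security_groups    = var.alb_security_group_ids
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "web" {
  name     = "day26-web-tg-${var.environment}"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path    = "/"
    matcher = "200" # instance is healthy only if the app answers 200
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.this.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}

output "target_group_arn" {
  value = aws_lb_target_group.web.arn
}
```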
The asg module owns scaling:
- Auto Scaling Group
- scaling policies
- CloudWatch CPU alarms
- dashboard
If everything lived in one large file, it would still work, but it would be harder to reuse and harder to reason about.
Modules make the boundaries clear.
How the Modules Connect
The most important part of today was understanding the data flow between modules.
The EC2 module creates the launch template:
module.ec2.launch_template_id
module.ec2.launch_template_version
Those outputs flow into the ASG module:
launch_template_id = module.ec2.launch_template_id
launch_template_version = module.ec2.launch_template_version
That tells the Auto Scaling Group what kind of EC2 instances to launch.
Then the ALB module creates a target group:
module.alb.target_group_arn
That output flows into the ASG module too:
target_group_arns = [module.alb.target_group_arn]
This closes the loop:
EC2 Launch Template → ASG → ALB Target Group → ALB DNS
The ASG creates instances from the launch template, then registers those instances behind the load balancer target group.
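The wiring described above could look something like this in the root module (module sources and variable names are assumptions, not the repo's exact code):

```hcl
# envs/dev/main.tf (illustrative wiring)
module "alb" {
  source            = "../../modules/alb"
  vpc_id            = var.vpc_id
  public_subnet_ids = var.public_subnet_ids
  environment       = "dev"
}

module "ec2" {
  source                = "../../modules/ec2"
  vpc_id                = var.vpc_id
  ami_id                = var.ami_id
  instance_type         = "t3.micro"
  alb_security_group_id = var.alb_security_group_id
}

module "asg" {
  source                  = "../../modules/asg"
  subnet_ids              = var.private_subnet_ids
  # outputs from the ec2 module tell the ASG what to launch
  launch_template_id      = module.ec2.launch_template_id
  launch_template_version = module.ec2.launch_template_version
  # output from the alb module tells the ASG where to register instances
  target_group_arns       = [module.alb.target_group_arn]
}
```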
Deployment Output
After applying the Terraform plan, Terraform returned the ALB DNS name:
day26-web-alb-dev-400577037.us-east-1.elb.amazonaws.com
When I opened it in the browser, the app responded:
Deployed with Terraform - Day 26
Environment: dev
Served by an Auto Scaling Group behind an Application Load Balancer.
Why health_check_type = "ELB" Matters
One important setting today was:
health_check_type = "ELB"
This tells the Auto Scaling Group to use load balancer health checks, not only EC2 instance status checks.
That matters because an EC2 instance can be "running" but still not serving the application correctly.
With ELB health checks enabled, the ASG checks whether the instance is healthy from the load balancer's point of view. If the app fails behind the ALB, the ASG can replace the instance.
This is much closer to real production behavior.
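In the asg module, that setting sits on the Auto Scaling Group resource itself. A sketch, with illustrative names and sizes:

```hcl
# modules/asg/main.tf (illustrative sketch)
resource "aws_autoscaling_group" "web" {
  name                      = "day26-web-asg-${var.environment}"
  min_size                  = 1
  max_size                  = 4
  desired_capacity          = 2
  vpc_zone_identifier       = var.subnet_ids
  target_group_arns         = var.target_group_arns
  health_check_type         = "ELB" # replace instances the ALB reports unhealthy
  health_check_grace_period = 300   # give user data time to start the app first

  launch_template {
    id      = var.launch_template_id
    version = var.launch_template_version
  }
}
```

The grace period matters together with `health_check_type = "ELB"`: without it, the ASG could kill instances that simply have not finished booting yet.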
What Happens When CPU Exceeds 70%
The ASG module includes CloudWatch alarms and scaling policies.
When average CPU goes above 70%, this happens:
- The CloudWatch alarm enters the ALARM state.
- The alarm triggers the scale-out policy.
- The scale-out policy increases ASG capacity by 1.
- The ASG launches a new EC2 instance using the Launch Template.
- The new instance registers with the ALB target group.
- The ALB starts sending traffic to the new healthy instance.
That is the feedback loop:
CPU spike → CloudWatch alarm → scaling policy → new EC2 instance → ALB target group
There is also a scale-in policy for low CPU, so the system can reduce capacity when traffic drops.
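The scale-out half of that loop can be sketched as an alarm wired to a simple scaling policy. This assumes an ASG resource named `aws_autoscaling_group.web` in the same module; thresholds and names are illustrative:

```hcl
# modules/asg/scaling.tf (illustrative sketch)
resource "aws_autoscaling_policy" "scale_out" {
  name                   = "day26-cpu-scale-out"
  autoscaling_group_name = aws_autoscaling_group.web.name
  adjustment_type        = "ChangeInCapacity"
  scaling_adjustment     = 1   # add one instance per alarm trigger
  cooldown               = 300 # wait before scaling again
}

resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "day26-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 70
  period              = 120
  evaluation_periods  = 2

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }

  # when the alarm fires, invoke the scale-out policy
  alarm_actions = [aws_autoscaling_policy.scale_out.arn]
}
```

The scale-in side is the mirror image: a low-CPU alarm with `scaling_adjustment = -1`.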
Remote State
Like previous days, I used a remote backend with:
- S3 for Terraform state
- DynamoDB for state locking
This prevents local-only state problems and protects against two people applying changes at the same time.
Remote state is one of those things that feels small at first, but it becomes critical as soon as infrastructure work becomes collaborative.
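The backend configuration for this is short. Bucket, key, and table names below are placeholders, not the project's real values:

```hcl
# backend.tf (illustrative; names are placeholders)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # created by the bootstrap step
    key            = "day26/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # DynamoDB table used for state locking
    encrypt        = true
  }
}
```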
A Useful Debugging Lesson
I hit a Terraform planning issue around for_each.
The lesson was simple but important:
for_each keys must be known during planning.
If Terraform cannot know the keys until apply time, it cannot build the dependency graph safely. The fix was to use stable keys and put dynamic values inside the map values instead.
That was a good reminder that Terraform is very strict about what must be known at plan time.
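A hypothetical before/after illustrating the pattern (the subnet example is mine, not the code that actually failed):

```hcl
# Fails at plan time: map keys come from resource attributes
# that are unknown until apply, so Terraform cannot build the graph.
#   for_each = { for s in aws_subnet.private : s.id => s.cidr_block }

# Works: keys are static strings known at plan time;
# the dynamic values live in the map values instead.
variable "private_subnets" {
  type = map(string)
  default = {
    "a" = "10.0.1.0/24"
    "b" = "10.0.2.0/24"
  }
}

resource "aws_subnet" "private" {
  for_each   = var.private_subnets
  vpc_id     = var.vpc_id
  cidr_block = each.value # dynamic value is fine; only the key must be known
}
```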
Cleanup
After verifying the app worked, I destroyed both:
- the dev application stack
- the bootstrap backend resources
This matters because ALBs and EC2 instances can keep generating cost even after the learning task is complete.
Final Takeaway
Day 26 helped me connect several Terraform lessons into one practical system.
A scalable web application is not just EC2. It is the relationship between compute, networking, health checks, monitoring, scaling policies, and state management.
The biggest lesson:
Terraform modules are not just for organizing files. They help define the boundaries of responsibility in infrastructure.
That is what makes the system easier to understand, reuse, and safely change.
Follow My Journey
This is Day 26 of my 30-Day Terraform Challenge.
See you on Day 27.