Mary Mutua
Managing High Traffic Applications with AWS Elastic Load Balancer and Terraform

Day 5 of my Terraform journey focused on two important ideas:

  1. how to place an AWS Application Load Balancer (ALB) in front of an Auto Scaling Group (ASG)
  2. how Terraform state works behind the scenes, including what happens when state and real infrastructure drift apart

This was one of the most useful days so far because it connected architecture, Terraform behavior, and real-world operational thinking.

What I Built

I extended my earlier clustered deployment by building a load-balanced web application on AWS using Terraform.

The deployment included:

  • an internet-facing Application Load Balancer
  • a listener on port 80
  • a target group with HTTP health checks
  • an Auto Scaling Group attached to that target group
  • security groups that allowed inbound HTTP to the ALB while restricting direct access to the EC2 instances
  • outputs that exposed the ALB DNS name
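The pieces above can be sketched in Terraform roughly like this. This is a minimal, illustrative sketch, not the full repo code: the resource names (`web_alb`, `web_tg`), the security group reference, and the `var.public_subnet_ids` / `var.vpc_id` variables are placeholders I chose for the example.

```hcl
# Hypothetical names and variables; a minimal sketch of the ALB side.

resource "aws_lb" "web_alb" {
  name               = "web-alb"
  internal           = false                          # internet-facing
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id] # allows inbound HTTP
  subnets            = var.public_subnet_ids
}

resource "aws_lb_target_group" "web_tg" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  # HTTP health checks against the instances behind the ALB
  health_check {
    path                = "/"
    matcher             = "200"
    interval            = 15
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.web_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_tg.arn
  }
}

# Expose the ALB DNS name so it can be opened in a browser after apply
output "alb_dns_name" {
  value = aws_lb.web_alb.dns_name
}
```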

After deployment, I opened the ALB DNS name in the browser and confirmed it returned my web page successfully.

How the ALB and ASG Work Together

The easiest way to understand the architecture is this request flow:

```
Browser -> ALB -> Target Group -> EC2 instances in Auto Scaling Group
```

Here is what each piece does:

  • ALB: the public entry point users connect to
  • Listener: accepts traffic on port 80 and forwards it to the target group
  • Target Group: tracks the backend EC2 instances and performs health checks
  • Launch Template: defines how instances are created
  • ASG: maintains the desired number of EC2 instances and replaces unhealthy ones

This matters because users no longer connect directly to one instance. Instead, traffic is distributed across healthy instances, which improves resilience and availability.
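On the backend side, the key link is `target_group_arns`, which is what registers ASG instances with the target group. Again a hedged sketch with placeholder names and variables (`var.ami_id`, `var.private_subnet_ids`, the `web_tg` target group), not the exact code from my repo:

```hcl
# Illustrative launch template + ASG wiring; values are placeholders.

resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = var.ami_id
  instance_type = "t3.micro"
  user_data     = base64encode(file("user_data.sh"))
}

resource "aws_autoscaling_group" "web" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = var.private_subnet_ids

  # Registers instances with the ALB's target group
  target_group_arns = [aws_lb_target_group.web_tg.arn]

  # Use the target group's health checks, so unhealthy
  # instances are terminated and replaced, not just deregistered
  health_check_type = "ELB"

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
}
```

Setting `health_check_type = "ELB"` is what connects the two halves of the story: the target group decides which instances receive traffic, and the ASG uses the same signal to decide which instances to replace.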

Why This Matters for High-Traffic Applications

A single server can only do so much:

  • it becomes a single point of failure
  • it cannot scale easily
  • if it crashes, the app goes down

With an ALB and ASG:

  • traffic is distributed
  • unhealthy instances can be removed from rotation
  • the ASG can launch replacement instances
  • the architecture becomes more production-like

That is the key shift from “just running a server” to “designing resilient infrastructure.”

Terraform State: What It Is Really Doing

Terraform state is the record of what Terraform believes it manages.

It contains mappings between:

  • your Terraform resource definitions
  • the actual real-world infrastructure in AWS

This is why Terraform can run plan and tell you what will change. It compares:

  • your code
  • the state file
  • the real infrastructure from the provider

That also makes the state file the source of truth for Terraform operations.
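To make that concrete, here is an abbreviated, illustrative excerpt of what such a mapping looks like inside `terraform.tfstate` (real state files carry many more fields, and the ARN and DNS name here are invented):

```json
{
  "version": 4,
  "resources": [
    {
      "type": "aws_lb",
      "name": "web_alb",
      "instances": [
        {
          "attributes": {
            "arn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/abc123",
            "dns_name": "web-alb-123456.us-east-1.elb.amazonaws.com"
          }
        }
      ]
    }
  ]
}
```

The `type` and `name` fields tie back to the resource block in the code, while the `attributes` record the real-world identifiers AWS assigned, which is exactly the mapping `plan` relies on.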

What I Observed in the State Experiments

To understand Terraform state better, I tried two experiments.

Experiment 1: Manual state tampering

I manually edited a value in terraform.tfstate.

At first, a normal terraform plan did not show the effect clearly because Terraform refreshed from AWS and corrected its view of the real infrastructure.

When I used:

```
terraform plan -refresh=false
```

Terraform compared the configuration against the tampered state and proposed changes based on the incorrect state.

That showed me something important:

  • manual state editing is risky
  • incorrect state can make Terraform propose unnecessary or dangerous changes
  • state files should not be edited by hand

Experiment 2: State drift

For the second experiment, I manually changed the Environment tag on the ALB in the AWS Console from `dev` to `manual-change`.

Then I ran:

```
terraform plan
```

Terraform detected the drift and proposed an in-place update to restore the tag back to dev.

That was a very practical demonstration of drift:

  • the real infrastructure changed
  • the Terraform code did not
  • Terraform detected the difference and proposed reconciliation

Best Practices for Managing Terraform State Files

Today also made it much clearer why Terraform state should be handled carefully.

Never commit state files to Git

State files can contain:

  • resource IDs
  • infrastructure mappings
  • outputs
  • sometimes sensitive values

If state is committed to Git:

  • sensitive data may be exposed
  • teams can overwrite one another’s changes
  • you lose proper coordination around shared infrastructure

What remote backends are

A remote backend is a shared location where Terraform stores state instead of keeping it only on your local machine.

On AWS, a common pattern is:

  • Amazon S3 for storing the state file
  • DynamoDB for state locking
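That pattern is configured in a `backend` block. This is an illustrative sketch, with placeholder bucket, key, and table names; the S3 bucket and DynamoDB table (with a `LockID` partition key) must already exist before running `terraform init`:

```hcl
# Placeholder names; bucket and table must be created beforehand.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "day5/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # enables state locking
    encrypt        = true
  }
}
```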

Why state locking matters

Without locking, two people could run terraform apply at the same time and corrupt or overwrite state updates.

State locking prevents concurrent modification and makes Terraform safer in team environments.

Code and Block Reference

Instead of pasting all of the Terraform code here, I’ve documented the full implementation in my GitHub repository, including:

  • the ALB setup
  • the ASG integration
  • the supporting Terraform files
  • the block comparison table
  • the README explanations for each lab

GitHub:
👉 GitHub Link

Final Thoughts

Day 5 helped me understand two major ideas much better:

  • how to handle higher-traffic applications with an ALB and ASG
  • how Terraform state helps Terraform track, compare, and reconcile infrastructure

The ALB and ASG side helped me understand resilient architecture.

The state experiments helped me understand why:

  • state matters
  • drift detection matters
  • manual state edits are risky
  • remote backends and locking are important

This made Day 5 feel much closer to real-world infrastructure work than just writing Terraform syntax.

Follow My Journey

This is Day 5 of my 30-Day Terraform Challenge.

See you on Day 6 🚀
