Anthony Uketui

Building a Production-Ready Terraform Project with Multi-Environment Support on AWS

When I started this project, my goal was simple but ambitious. I wanted to build a production-ready infrastructure that could automatically scale based on demand. I also wanted to organize everything in a way that supports multiple environments like dev and prod.

I’ll walk you through my full process, including the reasoning behind each step, challenges I faced, and how I resolved them.

I'll also share snippets of my code along the way, but here's the full repository if you want to follow along: https://tinyurl.com/aedduwdm

Step 1: Project Structure and Multi-Environment Setup

I wanted to keep my project modular and clean. To achieve this, I structured my directory in this way:

  • Modules contain reusable building blocks like VPC, ALB, EC2, RDS, and Monitoring.
  • Environments (dev and prod) each have their own main.tf and terraform.tfvars files that reference the modules.

This structure makes it easy to manage different configurations without duplicating code.
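
To make the wiring concrete, here's roughly what an environment's main.tf looks like under this layout. This is a simplified sketch; the module input and output names (like public_subnet_ids) are placeholders rather than the exact names in the repo:

```hcl
# envs/dev/main.tf (simplified sketch)
module "vpc" {
  source   = "../../modules/vpc"
  env      = var.environment
  vpc_cidr = var.vpc_cidr
}

module "alb" {
  source         = "../../modules/alb"
  env            = var.environment
  public_subnets = module.vpc.public_subnet_ids
}

module "ec2" {
  source           = "../../modules/ec2"
  env              = var.environment
  instance_type    = var.instance_type
  desired_capacity = var.desired_capacity
  private_subnets  = module.vpc.private_subnet_ids
  target_group_arn = module.alb.target_group_arn
}
```

Each environment calls the same modules with different inputs, which is what keeps dev and prod from drifting apart structurally.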


Step 2: Defining the VPC

My first step was to create a dedicated VPC with public and private subnets. Public subnets are for load balancers, and private subnets are for EC2 and RDS.

Code Snippet (VPC module)

resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr
  tags       = { Name = "${var.env}-vpc" }
}
Enter fullscreen mode Exit fullscreen mode


`

I chose to keep the database and application instances in private subnets for security reasons.
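
The subnet layout follows the same pattern. Here's a simplified sketch of how the module could create the public and private subnets; the CIDR and availability-zone variable names are illustrative:

```hcl
# Simplified: one public and one private subnet per availability zone
resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = var.azs[count.index]
  map_public_ip_on_launch = true
  tags                    = { Name = "${var.env}-public-${count.index}" }
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.this.id
  cidr_block        = var.private_subnet_cidrs[count.index]
  availability_zone = var.azs[count.index]
  tags              = { Name = "${var.env}-private-${count.index}" }
}
```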


Step 3: Application Load Balancer (ALB)

The ALB distributes incoming traffic across the EC2 instances, which keeps the application available even if a single instance fails.

Code Snippet (ALB module)

```hcl
resource "aws_lb" "this" {
  name               = "${var.env}-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnets
}
```

Because the ALB sits in the public subnets, it is internet-facing, while the EC2 instances behind it stay private.
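
On its own, the ALB doesn't know where to send traffic, so the module also needs a target group and a listener. A simplified sketch, with illustrative ports and health check values:

```hcl
# Target group that the Auto Scaling Group registers instances into
resource "aws_lb_target_group" "app" {
  name     = "${var.env}-app-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path    = "/"
    matcher = "200"
  }
}

# Forward all HTTP traffic from the ALB to the target group
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.this.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}
```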


Step 4: EC2 Auto Scaling Group

Next, I set up EC2 instances with auto scaling. This way, when CPU usage is high, new instances are launched automatically.

Code Snippet (EC2 module)

```hcl
resource "aws_autoscaling_group" "this" {
  desired_capacity    = var.desired_capacity
  max_size            = var.max_size
  min_size            = var.min_size
  vpc_zone_identifier = var.private_subnets
}
```

I used the Amazon Linux 2 AMI and passed user data in through a template file to bootstrap the instances.
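
The scaling behaviour itself comes from a launch template plus a scaling policy. Here's a simplified sketch of what that looks like; the AMI variable and the user data template path are placeholders:

```hcl
# Launch template: Amazon Linux 2 plus a bootstrap script from user data
resource "aws_launch_template" "this" {
  name_prefix   = "${var.env}-app-"
  image_id      = var.ami_id        # Amazon Linux 2 AMI, looked up per region
  instance_type = var.instance_type
  user_data     = base64encode(templatefile("${path.module}/user_data.sh.tpl", { env = var.env }))
}

# Scale out/in to keep average CPU around 70%
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "${var.env}-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.this.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 70
  }
}
```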


Step 5: RDS Database Setup

For the database, I used Amazon RDS with MySQL. One challenge here was how to handle credentials securely.

At first, I hardcoded them, but I quickly realized that wasn’t safe. I switched to AWS Secrets Manager combined with a random password generator.

Code Snippet (RDS module)

```hcl
resource "random_password" "db_password" {
  length  = 16
  special = true
}

resource "aws_secretsmanager_secret" "db_pwd" {
  name = "${var.env}/db_password"
}
```

This allowed me to store the password securely and avoid exposing it in my code.
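
The generated password then goes into a secret version and straight into the RDS instance, so it never appears in the .tf or .tfvars files. A trimmed-down sketch; the subnet group and security group references are placeholders:

```hcl
# Store the generated password in Secrets Manager
resource "aws_secretsmanager_secret_version" "db_pwd" {
  secret_id     = aws_secretsmanager_secret.db_pwd.id
  secret_string = random_password.db_password.result
}

# RDS instance reads the same generated password
resource "aws_db_instance" "this" {
  identifier             = "${var.env}-mysql"
  engine                 = "mysql"
  instance_class         = var.db_instance_class
  allocated_storage      = 20
  username               = var.db_username
  password               = random_password.db_password.result
  db_subnet_group_name   = aws_db_subnet_group.this.name   # assumed subnet group resource
  vpc_security_group_ids = [var.db_security_group_id]
  skip_final_snapshot    = true
}
```

One caveat worth knowing: the password still ends up in the Terraform state, so the state file itself needs to be protected, for example with an encrypted, access-controlled S3 backend.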


Step 6: Monitoring and Alerts

I didn’t want to stop at just provisioning resources. I also wanted monitoring and alerts for key components like EC2, ALB, and RDS.

I used CloudWatch Alarms with SNS topics to send email alerts.

Code Snippet (Monitoring module)

```hcl
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "${var.env}-cpu-high"
  metric_name         = "CPUUtilization"
  threshold           = 70
  comparison_operator = "GreaterThanThreshold"
  namespace           = "AWS/EC2"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  alarm_actions       = [aws_sns_topic.alerts.arn]
}
```

This way, I get notified if something goes wrong.
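
The alerting side is just an SNS topic with an email subscription, roughly like this (the alert_email variable is illustrative):

```hcl
# SNS topic that all CloudWatch alarms publish to
resource "aws_sns_topic" "alerts" {
  name = "${var.env}-alerts"
}

# Email subscription -- this one has to be confirmed manually from the inbox
resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.alerts.arn
  protocol  = "email"
  endpoint  = var.alert_email
}
```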


Step 7: Multi-Environment Variables

Each environment has its own terraform.tfvars file. For example, my dev environment uses smaller instances while prod uses larger ones.

Dev terraform.tfvars

```hcl
environment       = "dev"
region            = "us-east-1"
instance_type     = "t3.micro"
desired_capacity  = 1
db_instance_class = "db.t3.micro"
```

Prod terraform.tfvars

```hcl
environment       = "prod"
region            = "us-east-1"
instance_type     = "t3.medium"
desired_capacity  = 2
db_instance_class = "db.t3.medium"
```

This separation made it easy to spin up lightweight dev infrastructure without affecting production.
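
Behind each tfvars file sits a matching variables.tf in that environment. A trimmed-down version looks something like this (descriptions and defaults omitted):

```hcl
variable "environment" {
  type = string
}

variable "region" {
  type = string
}

variable "instance_type" {
  type = string
}

variable "desired_capacity" {
  type = number
}

variable "db_instance_class" {
  type = string
}
```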

Dev environment (screenshot)

Prod environment (screenshot)


Step 8: Running the Project

Here are the commands I used to apply everything:

```bash
cd envs/dev
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars"

cd ../prod
terraform init
terraform plan -var-file="terraform.tfvars"
terraform apply -var-file="terraform.tfvars"
```

This workflow gave me a smooth way to deploy separate environments using the same modules.
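
After an apply, I like to sanity-check the deployment by printing the ALB endpoint. That assumes the environment exposes an output along these lines (the output and module attribute names are illustrative):

```hcl
# envs/dev/outputs.tf -- surface the ALB endpoint after apply
output "alb_dns_name" {
  value = module.alb.dns_name
}
```

Then `terraform output alb_dns_name` prints the URL to hit in a browser.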


Challenges and How I Solved Them

  • Hardcoding DB credentials: Initially I put the password directly in code. I later switched to using Secrets Manager with random_password to improve security.

  • Auto scaling not attaching correctly to the ALB: at first, my instances weren’t registering as healthy. The fix was to make sure the security groups allowed the right ports and that the target group health checks matched what the app actually serves (see the sketch after this list).

  • Multi-environment management: I struggled with duplication until I split logic into modules and kept only environment-specific variables in tfvars.
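
To make that ALB fix concrete: the two pieces involved are a security group rule that lets the ALB's security group reach the instances on the app port, and a target group health check path the app actually returns a 200 on. A simplified sketch of the security group side; the resource names and port are illustrative:

```hcl
# Allow the ALB's security group to reach the app instances on the app port
resource "aws_security_group_rule" "app_from_alb" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.app.id
  source_security_group_id = aws_security_group.alb.id
}
```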


Conclusion

This project gave me a hands-on understanding of how to structure a Terraform project for scalability, security, and maintainability. By modularizing infrastructure, securing credentials, and setting up monitoring, I ended up with a production-grade setup.

Top comments (2)

Sofia Petrova

Great walkthrough—this modular, multi-env setup mirrors the industry shift toward platform engineering “paved paths” and GitOps-first IaC. Curious: are you planning to add policy-as-code (OPA/Sentinel) and a PR-driven plan/apply with drift and cost guardrails via workspaces or OpenTofu/Terraform Cloud? Would love to see how that evolves alongside your monitoring stack.

Anthony Uketui

Yes! Thanks Sofia🙌