Stephanie Makori
Building a 3-Tier Multi-Region High Availability Architecture with Terraform

High availability is one of the most important goals when designing cloud infrastructure. In a production environment, deploying resources in a single region is not enough because a regional outage can make the entire application unavailable. To solve this, I built a 3-tier multi-region high availability architecture on AWS using Terraform, designed to remain available even if one AWS region fails.

This infrastructure consists of five reusable Terraform modules that provision networking, load balancing, compute, database, and DNS failover resources across two AWS regions. The result is a resilient architecture where traffic automatically shifts to a secondary region if the primary region becomes unhealthy.

Architecture Overview

The infrastructure follows a standard 3-tier architecture:

  1. Presentation Tier

    Route53 directs traffic to an Application Load Balancer (ALB) in the active region.

  2. Application Tier

    EC2 instances are managed by an Auto Scaling Group (ASG) across multiple Availability Zones.

  3. Database Tier

    Amazon RDS runs in Multi-AZ mode in the primary region with a cross-region read replica in the secondary region.

Traffic flows like this:

Route53 → ALB → EC2 Auto Scaling Group → RDS

This design ensures redundancy at every layer. If one Availability Zone fails, traffic is served from another AZ. If the primary region fails, Route53 automatically redirects traffic to the secondary region.

Why Use Five Terraform Modules?

To keep the infrastructure maintainable and reusable, I split the deployment into five Terraform modules:

  • VPC Module provisions networking resources such as VPCs, public/private subnets, route tables, internet gateways, and NAT gateways.
  • ALB Module provisions the Application Load Balancer, listeners, target groups, and ALB security groups.
  • ASG Module provisions launch templates, EC2 instances, scaling policies, alarms, and instance security groups.
  • RDS Module provisions the Multi-AZ primary database and cross-region replica.
  • Route53 Module provisions health checks and failover DNS records.

Using modules avoids duplicating code and gives each infrastructure component a single responsibility. This also makes troubleshooting easier, because changes can be isolated to one module without affecting the others.
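A root configuration can wire these modules together across two regions using provider aliases. The sketch below is illustrative: module paths, variable names, regions, and CIDR ranges are assumptions, not the exact values from this project.

```hcl
# Two provider aliases, one per region (regions are illustrative)
provider "aws" {
  alias  = "primary"
  region = "us-east-1"
}

provider "aws" {
  alias  = "secondary"
  region = "us-west-2"
}

# Each module is instantiated once per region with the matching provider
module "vpc_primary" {
  source    = "./modules/vpc"
  providers = { aws = aws.primary }

  cidr_block = "10.0.0.0/16"
}

module "alb_primary" {
  source    = "./modules/alb"
  providers = { aws = aws.primary }

  vpc_id            = module.vpc_primary.vpc_id
  public_subnet_ids = module.vpc_primary.public_subnet_ids
}
```

Passing explicit `providers` to each module call is what lets the same module code deploy identical stacks into both regions.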

Data Flow Between Modules

One of the biggest advantages of modular Terraform is how outputs from one module become inputs to another.

For example, the ALB module creates a target group and exports its ARN. That ARN is then passed to the ASG module so the EC2 instances register with the load balancer target group.

Similarly, the RDS primary database module exports its database ARN, which is passed into the secondary region RDS module as the replication source. This creates a cross-region read replica.

This flow creates a dependency chain:

VPC outputs → ALB inputs → ASG inputs → RDS inputs → Route53 inputs

This keeps the root Terraform configuration clean while each module handles its own internal complexity.
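The output-to-input wiring described above looks roughly like this in practice (the resource and variable names here are assumptions for illustration):

```hcl
# modules/alb/outputs.tf — the ALB module exports the target group ARN
output "target_group_arn" {
  value = aws_lb_target_group.app.arn
}

# Root configuration — that output becomes an input to the ASG module,
# so instances launched by the Auto Scaling Group register with the ALB
module "asg_primary" {
  source    = "./modules/asg"
  providers = { aws = aws.primary }

  subnet_ids        = module.vpc_primary.private_subnet_ids
  target_group_arns = [module.alb_primary.target_group_arn]
}
```

Because Terraform sees the reference from one module's input to another module's output, it also derives the correct creation order automatically: the ALB and target group exist before the ASG tries to attach to them.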

Route53 Failover in Action

A major feature of this architecture is automatic DNS failover using Route53 health checks.

Route53 continuously checks the health of the primary region ALB endpoint. If the primary region fails health checks, Route53 marks it unhealthy and redirects DNS traffic to the ALB in the secondary region.

The failover process works like this:

  1. Route53 detects the failed health check in the primary region
  2. DNS failover policy marks the primary record unhealthy
  3. Traffic is routed to the secondary ALB
  4. Users continue accessing the application with minimal downtime

This failover typically takes about 1 to 2 minutes, depending on DNS TTL and health check intervals.

This approach provides automatic disaster recovery without manual intervention.
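The failover setup comes down to one health check and a pair of aliased records sharing the same name. A minimal sketch, assuming a hypothetical zone variable, hostname, and `/health` endpoint:

```hcl
# Health check against the primary ALB (path, port, and thresholds are illustrative)
resource "aws_route53_health_check" "primary" {
  fqdn              = module.alb_primary.dns_name
  type              = "HTTP"
  port              = 80
  resource_path     = "/health"
  failure_threshold = 3
  request_interval  = 30
}

# PRIMARY record — served while the health check passes
resource "aws_route53_record" "primary" {
  zone_id         = var.zone_id
  name            = "app.example.com"
  type            = "A"
  set_identifier  = "primary"
  health_check_id = aws_route53_health_check.primary.id

  failover_routing_policy {
    type = "PRIMARY"
  }

  alias {
    name                   = module.alb_primary.dns_name
    zone_id                = module.alb_primary.zone_id
    evaluate_target_health = true
  }
}

# SECONDARY record — Route53 answers with this once the primary is unhealthy
resource "aws_route53_record" "secondary" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  set_identifier = "secondary"

  failover_routing_policy {
    type = "SECONDARY"
  }

  alias {
    name                   = module.alb_secondary.dns_name
    zone_id                = module.alb_secondary.zone_id
    evaluate_target_health = true
  }
}
```

With `failure_threshold = 3` and `request_interval = 30`, Route53 needs roughly 90 seconds of consecutive failures before marking the primary unhealthy, which is where the 1-to-2-minute failover window comes from.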

Multi-AZ vs Cross-Region Replication

The database layer uses both Multi-AZ and cross-region replication, but they serve different purposes.

Multi-AZ

Multi-AZ creates a synchronously replicated standby database in another Availability Zone within the same region. If the primary database instance fails, AWS automatically fails over to the standby.

This protects against:

  • AZ failures
  • hardware failures
  • maintenance downtime
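Enabling Multi-AZ in Terraform is a single argument on the database resource. A minimal sketch (identifier, engine, and instance size are illustrative assumptions):

```hcl
# Multi-AZ primary instance in the primary region
resource "aws_db_instance" "primary" {
  provider = aws.primary

  identifier        = "app-db-primary"
  engine            = "mysql"
  instance_class    = "db.t3.medium"
  allocated_storage = 20
  multi_az          = true  # provision a synchronous standby in another AZ

  db_subnet_group_name = aws_db_subnet_group.primary.name
  username             = var.db_username
  password             = var.db_password

  # Convenient for demos; keep final snapshots in real production setups
  skip_final_snapshot = true
}
```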

Cross-Region Read Replica

Cross-region replication copies data asynchronously to another AWS region.

This protects against regional outages and provides:

  • a disaster recovery target in a second region
  • geographic redundancy for read traffic
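The cross-region replica is where the primary database's ARN output comes into play. When the source is in a different region, the replica's `replicate_source_db` argument must be the source ARN rather than its identifier (names and sizes below are illustrative):

```hcl
# Asynchronous cross-region read replica in the secondary region
resource "aws_db_instance" "replica" {
  provider = aws.secondary

  identifier          = "app-db-replica"
  instance_class      = "db.t3.medium"
  replicate_source_db = aws_db_instance.primary.arn  # ARN required across regions

  skip_final_snapshot = true
}
```

In a regional failover scenario, this replica can be promoted to a standalone primary, which is the manual (or scripted) step that completes disaster recovery for the data tier.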

Together, these strategies provide both high availability and regional resilience.

Benefits of This Architecture

This deployment provided several important benefits:

  • High availability through Multi-AZ EC2 and RDS deployment
  • Disaster recovery through cross-region redundancy
  • Automatic failover using Route53 health checks
  • Scalability through Auto Scaling Groups
  • Reusability through modular Terraform design
  • Consistency through infrastructure as code

Instead of manually configuring resources in AWS, Terraform made it possible to define the entire infrastructure in reusable modules and deploy it consistently across regions.

Final Thoughts

This project was an excellent demonstration of how Terraform modules can be combined to build a production-style multi-region high availability architecture on AWS.

By separating the infrastructure into reusable modules and wiring them together with outputs and inputs, I created an environment that is scalable, fault tolerant, and easy to manage.

The most valuable takeaway from this project was understanding how Route53 failover, Auto Scaling Groups, Multi-AZ RDS, and cross-region replicas work together to provide resilience at every layer of the application stack.

This is the kind of architecture that forms the foundation for real-world production systems where uptime and fault tolerance are critical.
