Dileep

How to manage a multi-region, multi-environment infrastructure on AWS using Terraform

In one of my recent engagements, I had to work out an approach to managing AWS infrastructure across multiple regions and environments using Terraform.

As any sane copy-paste-tweak developer would, I googled for inspiration but ended up finding content that only partially solved the problem: it covered either the multi-environment or the multi-region scenario, or wasn't fully thought through (for example, no isolation of state between regions). For anyone with a similar need, here's something to build upon.

Prerequisites

An understanding of Terraform and the concepts of Modules, Backends, Workspaces, Remote State & the AWS provider is required to make sense of the content in this post.

To experiment with the provided sample code, you would need Terraform (0.12.x, per the version constraint used below) and working AWS credentials.

What do we have to play with?

Module

Terraform's module system helps us create configurable infrastructure templates that can be reused across various environments (product-a deployed to development/production) or across various products (standard s3/DynamoDB/SNS/etc. templates).
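For instance, here's a minimal sketch of how an environment's main.tf could instantiate a shared module from the source layout described below (the bucket_name and environment inputs are illustrative assumptions, not taken from the sample repository):

# environments/development/main.tf (illustrative)
module "product_a_assets" {
  # reuse the shared s3 template from the modules folder
  source = "../../modules/aws-s3"

  bucket_name = "product-a-assets-development"
  environment = "development"
}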

Backend

Terraform's backend configuration for AWS s3 remote state uses the following configuration variables to organize infrastructure state:

  • bucket: name of the s3 bucket where state would be stored
  • workspace_key_prefix: custom prefix on state file path
  • workspace: name of the workspace
  • key: state file name

In s3, a state file would then be located at <bucket>/<workspace_key_prefix>/<workspace>/<key>. If we substitute workspace with ap-southeast-1 or ap-southeast-2, workspace_key_prefix with product-a, and key with terraform.tfstate, we end up with state files stored as:

bucket
    └──product-a
        ├──ap-southeast-1
        │   └── terraform.tfstate
        └──ap-southeast-2
            └── terraform.tfstate

This groups infrastructure state at a product/project level, establishes isolation between deployments to different regions, and stores all those states conveniently in one place.

Approach

Using the Terraform module and backend systems, the infrastructure-as-code repository layout & backend configuration snippet described in the sections below provide us with a way to:

  • establish a structure in which common or product/project-specific infrastructure is templatised for reuse across various environments
  • fine-tune a product/project's infrastructure at an environment level, and even add environment-specific infrastructure for those non-ideal cases
  • maintain state at a region level so that we get better isolation, canary deployments, etc.

Source Layout

├── environments
│   ├── development
│   │   ├── ap-southeast-1.tfvars
│   │   ├── ap-southeast-2.tfvars
│   │   ├── variables.tf
│   │   ├── main.tf
│   │   ├── provider.tf
│   │   ├── terraform.tf
│   │   └── terraform.tfvars
│   ├── test
│   │   ├── ap-southeast-1.tfvars
│   │   ├── ap-southeast-2.tfvars
│   │   └── ...
│   ├── stage
│   │   ├── ap-southeast-1.tfvars
│   │   └── ...
│   └── production
│       └── ...
└── modules
    ├── aws-s3
    │   ├── main.tf
    │   ├── provider.tf
    │   └── variables.tf
    ├── product-a
    │   ├── main.tf
    │   ├── provider.tf
    │   └── variables.tf
    └── sub-system-x
        ├── main.tf
        ├── provider.tf
        └── variables.tf
  • environments: folder to isolate environment-specific (development/test/stage/production) configuration. This also provides the flexibility to maintain environment-specific infrastructure for those common, non-ideal scenarios.
  • modules: folder to host reusable resource sets, grouped at a product/project level, at a sub-system level, or as common infrastructure components. This folder doesn't have to exist in the same repository (see the sketch below) - it does here as an example, and that arrangement might very well serve more than a handful of use cases.
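For the separate-repository case, a module could be sourced from git instead; a minimal sketch using a hypothetical repository URL:

module "product_a" {
  # hypothetical module repository; pinning to a tag keeps deployments repeatable
  source = "git::https://github.com/example-org/terraform-modules.git//product-a?ref=v1.0.0"
}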

Region-specific configuration is managed through the respective <workspace>.tfvars file. For example, the environments/development/ap-southeast-2.tfvars file holds the configuration for the ap-southeast-2 region in the development environment.

Also, the terraform.tfvars file found inside each development/test/stage/production folder under environments could be used to set configuration common to a given environment, across all regions.
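As a sketch of how these files could fit together (the region and environment variable names are assumptions, not mandated by this setup):

# environments/development/ap-southeast-2.tfvars (illustrative)
region = "ap-southeast-2"

# environments/development/terraform.tfvars (illustrative)
environment = "development"

# environments/development/variables.tf (illustrative)
variable "region" {}
variable "environment" {}

# environments/development/provider.tf (illustrative)
provider "aws" {
  region = var.region
}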

Backend Configuration

terraform {
  required_version = "~> 0.12.6"

  backend "s3" {
    # s3 bucket holding the state files
    bucket               = "terraform-state-bucket"
    # dynamodb table used for state locking
    dynamodb_table       = "terraform-state-lock-table"
    encrypt              = true
    # state file name
    key                  = "terraform.tfstate"
    # region hosting the backend bucket, not the deployment target region
    region               = "ap-southeast-2"
    # product/project-level prefix on the state file path
    workspace_key_prefix = "product-a"
  }
}

Note: The configuration described in this post and the included sample presume a state bucket per environment. But if the need is to store state from all environments in a common bucket, we could update the workspace_key_prefix value to include the environment. For example, with product-a/development or product-a/production, we end up with state under the following paths in s3:

bucket
    └──product-a
        ├──development
        │   ├──ap-southeast-1
        │   │   └── terraform.tfstate
        │   └──ap-southeast-2
        │       └── terraform.tfstate
        └──production
            ├──ap-southeast-1
            │   └── terraform.tfstate
            └──ap-southeast-2
                └── terraform.tfstate
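For that variant, only the prefix changes in the backend configuration:

  backend "s3" {
    # remaining settings unchanged from the snippet above
    bucket               = "terraform-state-bucket"
    key                  = "terraform.tfstate"
    workspace_key_prefix = "product-a/development"
  }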

Repository

Source for a sample setup can be found here.

Working with the setup

In a terminal, navigate to the environment folder, development for example.
Note: A working configuration to access your AWS environment is presumed.
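For example, from the repository root:

cd environments/development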

Initialize terraform

To get started, first initialize the working directory (backend, providers & modules)

terraform init

List out the available workspaces

terraform workspace list

Create a new workspace (if it doesn't exist already)

#terraform workspace new <workspace-name>
terraform workspace new ap-southeast-2

Select a workspace

#terraform workspace select <workspace-name>
terraform workspace select ap-southeast-2

Plan & apply changes

#terraform plan -var-file=<workspace-name>.tfvars
#terraform apply -var-file=<workspace-name>.tfvars

terraform plan -var-file=ap-southeast-2.tfvars
terraform apply -var-file=ap-southeast-2.tfvars

Repeat for other regions

For ap-southeast-1 region:

terraform workspace new ap-southeast-1
terraform workspace select ap-southeast-1 
terraform plan -var-file=ap-southeast-1.tfvars
terraform apply -var-file=ap-southeast-1.tfvars

Hopefully, this note helps a mate out!

Top comments (3)

prphilip

Very good information. So one DynamoDB table will handle both regions?

Dileep

The region of the Terraform backend (bucket & dynamodb table, inlined in terraform.tf) is different from the region (set in the region-specific .tfvars file) where a product/project's infra would be deployed. In short, yes.

Geoff Meakin

It's a good writeup, but doesn't answer the tricky question I am most interested in - should resources in different regions have their Terraform state stored in the same region as the resources?

A possible problem with the solution above is that if the region hosting the terraform state bucket goes down, no terraforming is possible in any region until the region outage is fixed.

I like to try and use a us-west-2 bucket for us-west-2 resources so that if us-east-2 goes down, I can still terraform us-west-2, etc.

This presents other tricky questions, however, around cross-region resources as well as global resources and where to store their state.