PH Saurav

Separate AWS Accounts for Dev, Staging, and Production in a Terraform Multi-Workspace Environment

Managing multiple shadow cloud architectures for development, staging, and production, where the environments are similar but not identical and each is deployed in a different AWS account, can be complex and challenging. There are various ways to handle this, depending on the specific scenario and requirements. In a recent project I faced this exact situation but found limited resources to guide me through it, so I decided to document my approach and share the solution in this blog post so that others in a similar position can use it as a guide. I hope my research and experimentation save you some time.

Goals and Scenario

  1. Set up Development, Staging, and Production environments with similar architectures but differing resource types and sizes.
  2. Consolidate these environments into a single Terraform project to reduce complexity and avoid redundant configurations.
  3. Deploy each environment in a separate AWS account.
  4. Implement remote state management and enable state locking for consistency and collaboration.
  5. Establish safeguards to prevent critical mistakes and ensure safe deployments.

Plan for Implementation

  1. Use Terraform workspaces and a single remote state source to manage state across environments.
  2. Utilize separate variable files to handle environment-specific differences.
  3. Configure AWS profiles at runtime to manage deployments across different AWS accounts.
  4. Implement an environment validator to prevent critical errors.
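For orientation, here is roughly how the project ends up laid out by the end of this post. The exact file split is up to you, and a stage.tfvars would follow the same pattern as the dev and prod examples shown later:

.
├── main.tf          # terraform block, provider, backend, and resources
├── variables.tf     # shared variable definitions (aws_region, profile, environment, ...)
├── dev.tfvars       # values for the dev workspace / dev AWS account
├── stage.tfvars     # values for the stage workspace / staging AWS account
└── prod.tfvars      # values for the prod workspace / production AWS account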

Setup Process

1. Setup AWS Credentials:

First, we need to set up AWS credentials for our accounts. For detailed instructions, see the AWS doc Configuration and credential file settings in the AWS CLI. In the AWS credentials file (~/.aws/credentials), add profiles named similar to this:

[app-dev]
aws_access_key_id = <AWS ACCESS KEY>
aws_secret_access_key = <AWS SECRET KEY>

[app-stage]
aws_access_key_id = <AWS ACCESS KEY>
aws_secret_access_key = <AWS SECRET KEY>

[app-prod]
aws_access_key_id = <AWS ACCESS KEY>
aws_secret_access_key = <AWS SECRET KEY>
aws_session_token = <AWS SESSION TOKEN>   # if needed

Here, the names of the profiles app-dev, app-stage, and app-prod are important. With these, we will dynamically choose which AWS account to deploy to.
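Before going further, it's worth confirming that each profile resolves to the intended account. The aws sts get-caller-identity command prints the account ID a profile authenticates against:

aws sts get-caller-identity --profile app-dev
aws sts get-caller-identity --profile app-stage
aws sts get-caller-identity --profile app-prod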

2. Setup Remote Backend for State Management & Locking

In this approach, we’ll use a single backend to manage the architecture state, leveraging Terraform’s workspace feature to keep the state of each environment separate. We’ll store the state in an S3 bucket and use a DynamoDB table for state locking. To set this up, create an S3 bucket and a DynamoDB table with a partition key named LockID in the account you want to use for state storage. For a detailed guide, check out this article. In our setup, we’re using the production account to store state, so we created the S3 bucket and DynamoDB table there.
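If you prefer to create these two resources from the command line, here is a minimal AWS CLI sketch; the bucket name, table name, and region are placeholders to replace with your own values:

aws s3api create-bucket --bucket bucket-name --region us-east-1 --profile app-prod

aws dynamodb create-table \
  --table-name dynamodb-table-name \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1 \
  --profile app-prod

With the bucket and lock table in place, let’s configure the backend: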

terraform {
  backend "s3" {
    bucket         = "bucket-name"
    encrypt        = true
    key            = "app-name/terraform.tfstate"
    region         = "region"
    dynamodb_table = "dynamodb-table-name"
  }
}

Now we have to initialize the project. Before doing that, we need to make the profile of the account that holds the backend the active profile for this shell session. We created the backend resources in the app-prod account, so that is the profile we export. This step is important; otherwise, the initialization will fail because Terraform cannot reach the state bucket. Note that the export only lasts for the current shell session, so you need to run it again in each new session. Alternatively, you can make it your default AWS profile so you don't have to export it every time.

export AWS_PROFILE=app-prod
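If you don't want to re-export this in every session, one option (an assumption about your local setup, not a requirement of this approach) is to persist it in your shell profile:

echo 'export AWS_PROFILE=app-prod' >> ~/.bashrc   # or ~/.zshrc, depending on your shell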

Now run terraform init:

terraform init   # add -reconfigure to resolve backend conflicts; not always necessary

3. Setup Terraform Workspaces

Create workspaces for dev, stage, and prod environments with the workspace new command:

terraform workspace new <workspace-name>

To switch to a different workspace, use the select command:

terraform workspace select dev
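You can confirm your workspaces at any point with the built-in commands:

terraform workspace list   # lists all workspaces; the active one is marked with *
terraform workspace show   # prints only the active workspace name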

4. Base Project Setup

Now, in the main project setup, the AWS provider picks up its profile dynamically at runtime from a variable:

terraform {
  required_version = "~> 1.9.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.72.1"
    }
  }
}


# * provider block
provider "aws" {
  profile = var.profile   # Set dynamically at runtime
  region  = var.aws_region
}

In variables.tf, we will set the required variables:

# * General 
variable "aws_region" {
  description = "AWS Region to use"
  type        = string
}
variable "profile" {
  description = "AWS profile to use"
  type        = string

}
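To make the idea of "similar architectures with different sizes" concrete, here is a hypothetical example; the instance_type variable, the aws_instance resource, and the AMI ID are illustrations rather than part of the original setup, and each .tfvars file would supply its own instance_type value:

# variables.tf (hypothetical addition)
variable "instance_type" {
  description = "EC2 instance size for this environment"
  type        = string
}

# main.tf (hypothetical resource)
resource "aws_instance" "app" {
  ami           = "ami-xxxxxxxxxxxxxxxxx"   # placeholder AMI ID
  instance_type = var.instance_type         # e.g. "t3.micro" in dev.tfvars, "t3.large" in prod.tfvars

  tags = {
    Name        = "app-name-${terraform.workspace}"
    Environment = terraform.workspace
  }
}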

5. Setup .tfvars for Each Environment:

We’ll manage environment-specific differences using variable files, which store key-value pairs for each configuration. By supplying the right file at runtime, we can apply a unique configuration to each environment. Below are examples for dev and prod; any additional environment would follow the same structure.
Example of dev.tfvars:

# Generic variables
environment = "dev"         # Environment name for validation
project     = "app-name"
aws_region  = "us-east-1"
profile     = "app-dev"     # AWS profile name for this environment


Example of prod.tfvars:

# Generic variables
environment = "prod"        # Environment name for validation
project     = "app-name"
aws_region  = "us-east-1"
profile     = "app-prod"    # AWS profile name for this environment

Now, to use a specific .tfvars variable file for an environment, we can use the -var-file flag.

For example, for a dev environment, we can use this command to create a plan:

terraform workspace select dev && terraform plan -var-file="dev.tfvars"

Similarly, to apply changes to the prod environment:

terraform workspace select prod && terraform apply -var-file="prod.tfvars"
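Since the workspace name and the variable file name always move together here, you could wrap the two commands in a small helper script. This is purely a convenience sketch, not part of the original workflow:

#!/usr/bin/env bash
# Usage: ./tf.sh <dev|stage|prod> <plan|apply|destroy>
set -euo pipefail

ENV="$1"
ACTION="$2"

# Switch to the matching workspace, then run the requested action
# with the variable file of the same name.
terraform workspace select "$ENV"
terraform "$ACTION" -var-file="${ENV}.tfvars"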

6. Safeguard to Prevent Catastrophe

Looking at the current structure, there’s a significant risk: a developer could accidentally apply the variable file for one environment to another, which could be disastrous—especially for production. To mitigate this, we can implement a validator that verifies the match between the variable file and the current workspace, ensuring the correct tfvars file is used for each workspace.
In our variables.tf we will add this variable and validator:

variable "environment" {
  description = "Environment Name"
  type        = string

  validation {
    condition     = var.environment == terraform.workspace
    error_message = "Workspace & Variable File Inconsistency!! Please Double-check!"
  }
}

In our .tfvars files, we already have a variable called environment that holds the name of the environment each file is for.

Now, if the current workspace and the environment in the .tfvars file don't match, Terraform will generate an error similar to this:

Workspace & Variable File Inconsistency Error

Conclusion and Some Tips

And that's it: the environment is set up. This is a great approach if the architecture of your project is fairly similar across all environments. Otherwise, separate projects per environment sharing reusable modules may be a better fit.
Some common errors you may face:

  1. If you see an Error: No valid credential sources found error at initialization, you most likely don't have a default profile set, so run the export AWS_PROFILE command again, as mentioned in Step 2.
  2. If your credentials include a session token, it will expire after a while; in that case, you have to update the credentials for that AWS profile (see the sketch below).
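For the second case, one way to refresh a profile without editing ~/.aws/credentials by hand is aws configure set (the placeholder values are, of course, yours to fill in):

aws configure set aws_access_key_id <NEW ACCESS KEY> --profile app-prod
aws configure set aws_secret_access_key <NEW SECRET KEY> --profile app-prod
aws configure set aws_session_token <NEW SESSION TOKEN> --profile app-prod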
