Terragrunt is a thin wrapper around Terraform that provides an extra layer for handling Terraform configurations. It makes it easier to manage .tf configurations and remote state.
In this blog post I'm focusing on using Terragrunt in the context of multi-account provisioning. Call it AWS account bootstrapping or a landing zone, the idea is the same: provision the same, identical resources to multiple AWS accounts and regions. I want to do it with the least amount of copy-pasting and as dynamically as possible.
I'm going to show how to use Terragrunt to:
- Provision a resource to multiple AWS accounts.
- Manage all accounts' remote states in a single S3 bucket.
I'm going to use the following account setup for the sample:
- AWS management account with AWS Organizations enabled. I'm running terragrunt using short-term credentials from this account. Resources are provisioned to the member accounts, and Terraform states are stored in S3 and DynamoDB in this account.
- Three member accounts. Note: AWS Organizations automatically creates a default role (OrganizationAccountAccessRole) in every member account. This Organizations default role is used to provision resources with Terraform. (Note: the default role used here is an admin role. If you are considering using this in production, make sure to use a role with scoped-down access rights.)
Walkthrough
Prerequisites: Install terraform and terragrunt
To get started with multi-account deployment, I'm using a very minimal terragrunt structure.
You can clone this project from: https://github.com/markymarkus/terragrunt_aws_multi_account
├── deployment # Terragrunt configuration files
│ ├── accounts
│ │ ├── sandbox1
│ │ │ ├── account.hcl
│ │ │ └── eu-west-1
│ │ │ ├── infra
│ │ │ │ └── terragrunt.hcl
│ │ │ └── region.hcl
│ │ ├── sandbox2
│ │ │ ├── account.hcl
│ │ │ └── eu-west-1
│ │ │ ├── infra
│ │ │ │ └── terragrunt.hcl
│ │ │ └── region.hcl
│ │ └── sandbox3
│ │ ├── account.hcl
│ │ ├── eu-north-1
│ │ │ ├── infra
│ │ │ │ └── terragrunt.hcl
│ │ │ └── region.hcl
│ │ └── eu-west-1
│ │ ├── infra
│ │ │ └── terragrunt.hcl
│ │ └── region.hcl
│ └── terragrunt.hcl
└── modules # Terraform module for the S3 bucket
├── main.tf
├── outputs.tf
├── s3.tf
└── vars.tf
Terragrunt
The configuration in the /deployment folder defines which modules from the /modules folder are deployed to which account and region. My configuration creates an S3 bucket in the eu-west-1 region of every sandbox account. Sandbox3 gets an additional bucket in eu-north-1 to show how this configuration can be extended to multiple regions. Each leaf infra folder contains a small terragrunt.hcl, roughly like the sketch below.
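This is a minimal sketch of a child configuration, assuming the module source path; the repository's actual files may differ slightly:
# deployment/accounts/sandbox1/eu-west-1/infra/terragrunt.hcl (sketch)
include "root" {
  # Pull in the shared settings from /deployment/terragrunt.hcl
  path = find_in_parent_folders()
}

terraform {
  # Relative path to the shared Terraform module (assumed layout)
  source = "../../../../../modules"
}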
Most of the magic happens in /deployment/terragrunt.hcl.
The generate block injects a Terraform provider configuration into the account modules. Variables defined in the account.hcl and region.hcl configuration files are used to build the provider block dynamically:
generate "provider" {
path = "provider.tf"
contents = <<EOF
provider "aws" {
region = "${local.aws_region}"
assume_role {
role_arn = "arn:aws:iam::${local.aws_account_id}:role/OrganizationAccountAccessRole"
}
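For context, here is a minimal sketch of how those locals and the per-account files could be wired together. The variable names are assumptions; check the repository for the exact ones:
# deployment/accounts/sandbox1/account.hcl (sketch; placeholder values)
locals {
  aws_account_id = "111111111111"
}

# deployment/accounts/sandbox1/eu-west-1/region.hcl (sketch)
locals {
  aws_region = "eu-west-1"
}

# deployment/terragrunt.hcl -- read the nearest account.hcl and region.hcl (sketch)
locals {
  account_vars = read_terragrunt_config(find_in_parent_folders("account.hcl"))
  region_vars  = read_terragrunt_config(find_in_parent_folders("region.hcl"))

  aws_account_id = local.account_vars.locals.aws_account_id
  aws_region     = local.region_vars.locals.aws_region
}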
Run
To provision the S3 bucket from the /modules folder to all sandbox accounts in my configuration, I run the following:
cd deployment/accounts
terragrunt run-all apply
- terragrunt automatically runs init, so it is not needed as a separate step.
- If the Terraform state bucket and DynamoDB table (as defined in /deployment/terragrunt.hcl) do not exist, terragrunt creates them; a sketch of that configuration follows below.
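The state bucket and lock table come from the remote_state block in the root /deployment/terragrunt.hcl. A minimal sketch, where the bucket and table names are assumptions rather than the repository's actual values:
# deployment/terragrunt.hcl -- remote state for every child module (sketch)
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket         = "example-terragrunt-states"   # assumed name
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "example-terragrunt-locks"    # assumed name
  }
}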
Output
I'm pasting a fairly complete output from the terragrunt run here.
Each module (here: an account/region combination) is provisioned one by one. We are not executing one big terragrunt plan but a set of separate Terraform plans. By default these plans do not show which AWS account they are being applied to, so to make the multi-account context clear I added account default tags to the provisioned resources.
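The tagging can be done with the AWS provider's default_tags block inside the generated provider configuration; this is a sketch of that approach, not necessarily how the repository implements it:
# Added inside the contents heredoc of the generate "provider" block above (sketch)
provider "aws" {
  region = "${local.aws_region}"
  assume_role {
    role_arn = "arn:aws:iam::${local.aws_account_id}:role/OrganizationAccountAccessRole"
  }
  default_tags {
    tags = {
      account     = "${local.aws_account_id}"
      environment = "dev"
    }
  }
}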
So, take a look:
➜ accounts git:(master) ✗ terragrunt run-all apply <aws:markus-sso-master> <region:eu-west-1>
INFO[0000] The stack at /terragrunt_aws_multi_account/deployment/accounts will be processed in the following order for command apply:
Group 1
- Module /terragrunt_aws_multi_account/deployment/accounts/sandbox1/eu-west-1/infra
- Module /terragrunt_aws_multi_account/deployment/accounts/sandbox2/eu-west-1/infra
- Module /terragrunt_aws_multi_account/deployment/accounts/sandbox3/eu-north-1/infra
- Module /terragrunt_aws_multi_account/deployment/accounts/sandbox3/eu-west-1/infra
Are you sure you want to run 'terragrunt apply' in each folder of the stack described above? (y/n) y
Initializing the backend...
Initializing the backend...
Initializing the backend...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.31.0
Terraform has been successfully initialized!
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v5.31.0
Terraform has been successfully initialized!
- Using previously-installed hashicorp/aws v5.31.0
Terraform has been successfully initialized!
- Using previously-installed hashicorp/aws v5.31.0
Terraform has been successfully initialized!
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.bucket will be created
+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = "sandbox3-dev-eu-north-1"
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = true
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags_all = {
+ "account" = "333333333333"
+ "environment" = "dev"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ bucket_name = (known after apply)
aws_s3_bucket.bucket: Creating...
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.bucket will be created
+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = "sandbox2-dev-eu-west-1"
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = true
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags_all = {
+ "account" = "222222222222"
+ "environment" = "dev"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ bucket_name = (known after apply)
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.bucket will be created
+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = "sandbox1-dev-eu-west-1"
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = true
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags_all = {
+ "account" = "1111111111111"
+ "environment" = "dev"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ bucket_name = (known after apply)
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_s3_bucket.bucket will be created
+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
+ acl = (known after apply)
+ arn = (known after apply)
+ bucket = (known after apply)
+ bucket_domain_name = (known after apply)
+ bucket_prefix = "sandbox3-dev-eu-west-1"
+ bucket_regional_domain_name = (known after apply)
+ force_destroy = true
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ object_lock_enabled = (known after apply)
+ policy = (known after apply)
+ region = (known after apply)
+ request_payer = (known after apply)
+ tags_all = {
+ "account" = "333333333333"
+ "environment" = "dev"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ bucket_name = (known after apply)
aws_s3_bucket.bucket: Creation complete after 1s [id=sandbox3-dev-eu-north-120231228125652554400000001]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
bucket_name = "sandbox3-dev-eu-north-120231228125652554400000001"
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creating...
aws_s3_bucket.bucket: Creation complete after 3s [id=sandbox1-dev-eu-west-120231228125655120500000001]
aws_s3_bucket.bucket: Creation complete after 3s [id=sandbox2-dev-eu-west-120231228125655134100000001]
aws_s3_bucket.bucket: Creation complete after 3s [id=sandbox3-dev-eu-west-120231228125655260000000001]
Releasing state lock. This may take a few moments...
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
bucket_name = "sandbox1-dev-eu-west-120231228125655120500000001"
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
bucket_name = "sandbox2-dev-eu-west-120231228125655134100000001"
Releasing state lock. This may take a few moments...
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
bucket_name = "sandbox3-dev-eu-west-120231228125655260000000001"
➜ accounts git:(master) ✗
After terragrunt run-all finishes, every sandbox account has an S3 bucket in the specified region.
There is no final summary output from terragrunt that would combine the results, for example how many resources were created or updated in total.
Conclusion
Ok, that's all! I wanted to test Terragrunt and see how it performs in a multi-account environment. Based on this trial, a few key takeaways:
- (Lack of) concurrency. The Terraform/Terragrunt combination handles modules (here: AWS accounts) sequentially, one after another. For a few accounts this works, but for hundreds of accounts you may want a solution with parallel deployments (AWS Control Tower, ADF, etc.).
- Unclear Terraform plan. I love terraform plan. It is very precise and very clear about the changes it is going to perform. With the multi-account structure added by Terragrunt, the plan is by default not clear about which AWS account it is working on. Also, running terragrunt run-all destroy just warns that it is going to run destroy on the whole /deployment/accounts/ folder; resource-specific information is shown only after the destroy command has been approved and run.
I think terragrunt is a very useful tool for handling, for example, separate dev/qa/prod environments in Terraform. But for multi-account management tasks it may turn out to be too lightweight.
I'm keeping an eye on what's happening on the HashiCorp side with Terraform Stacks.