Imagine that you started a small project in terraform some time ago. Now your application has hundreds of customers and your infrastructure spans a few environments, but the same terraform monolith is still responsible for all of your cloud infra. Sound familiar? Then keep reading to learn how to break a terraform monolith down into multiple environments without recreating resources.
Audience
I assume you already have experience with terraform: you manage some of your cloud infrastructure with it, and you are familiar with state files and backend configuration.
Theory
Let's quickly recap what we know about terraform state. Terraform keeps information about its resources in a state file. Terraform offers multiple options for storing the state file; the most popular is probably cloud object storage, so we'll use S3 as the example.
If we want to manage environments separately, we need one state per environment, so that we can apply configuration independently. Creating one more state is not a big deal. The challenging task is to move information about cloud resources from one state to another, and to do it without downtime or the need to recreate resources.
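The workhorse for this is terraform state mv with the -state-out flag, which moves a single resource record out of the current state into another local state file:

```shell
$ terraform state mv -state-out=../dev/new.tfstate aws_vpc.dev aws_vpc.dev
```

The source and destination addresses are the same here because we only want to change which state file holds the record, not rename the resource.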
In the beginning, we have one directory with terraform code for both environments. Our goal is to split up the code, then move resource records from one state to another.
Now let's agree on the file system structure. For the target state we'll be using two separate directories. This pattern is very well explained here
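The layout we are aiming for looks roughly like this (using the file names from this article):

```
.
├── dev
│   ├── main.tf
│   └── provider.tf
└── prod
    ├── main.tf
    └── provider.tf
```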
If you want to know more about terraform best practices, I strongly suggest taking a look at the Terraform best practices guide by Anton Babenko
Migration in detail
As soon as you allocate a new directory for your new environment, you can start moving the related code. Simply cut the code from one main.tf to the other. Do not apply any changes at this stage.
Next, you'll need to do a bit of manipulation with the terraform CLI. Here is what needs to be done:
- Init a new state for the new environment
- Pull the state file from the remote backend to your local directory
- Move the respective resources from one state to the other
- Finally, push the updated state to the remote backend
Short example
If you are a power user of terraform, this short example is for you
cd dev
# now set up remote backend for this new env
terraform init
terraform state pull > new.tfstate
# now open your favourite text editor
# and move resource definitions from prod tf files to dev tf files
cd ../prod
terraform state list > resource_list.txt
# now open resource_list.txt
# and leave there only the resource that you want to move
for i in $(cat resource_list.txt); do
  terraform state mv -state-out=../dev/new.tfstate "${i}" "${i}"
done
cd ../dev
terraform state push new.tfstate
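Instead of trimming resource_list.txt by hand, you can filter it with grep — a sketch, assuming your dev resources share a naming pattern (adjust the pattern to your own convention):

```shell
# resource_list.txt came from `terraform state list`; recreated here for illustration:
printf 'aws_vpc.dev\naws_vpc.prod\n' > resource_list.txt

# Hypothetical filter: keep only addresses mentioning "dev"
grep 'dev' resource_list.txt > dev_resources.txt
```

You would then feed dev_resources.txt to the loop above instead of the hand-edited file.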
Detailed example
Existing infrastructure
We will consider a simple example with two VPC networks.
# prod/main.tf
resource "aws_vpc" "prod" {
  cidr_block = "10.0.0.0/24"
}

resource "aws_vpc" "dev" {
  cidr_block = "10.1.1.0/24"
}
The remote backend config looks like this
# prod/provider.tf
terraform {
  required_version = "~> 1.0.0"

  backend "s3" {
    region = "eu-central-1"
    bucket = "myterraformstatemonolith"
    key    = "prod"
  }
}
Initialising a new environment
$ mkdir dev
$ cd dev
Set up the remote backend. Note that the key has changed; you could also use a new bucket if you wanted, but this is enough for our needs: we now have a separate state file.
# dev/provider.tf
terraform {
  required_version = "~> 1.0.0"

  backend "s3" {
    region = "eu-central-1"
    bucket = "myterraformstatemonolith"
    key    = "dev"
  }
}
Now go ahead and run terraform init; this will create a file in the S3 bucket.
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
<output omitted>
Move resource definitions
Now you need to move the content of the terraform files related to the new environment. In my case the new dev/main.tf will look like this
resource "aws_vpc" "dev" {
  cidr_block = "10.1.1.0/24"
}
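Correspondingly, prod/main.tf now keeps only the prod VPC:

```hcl
# prod/main.tf
resource "aws_vpc" "prod" {
  cidr_block = "10.0.0.0/24"
}
```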
Move resources from the state file to a new one
This is a bit more complex than the previous steps. First, let's pull the new remote state
$ cd dev
$ terraform state pull > new.tfstate
Then get back to the prod directory and move the resources
$ cd ../prod
$ terraform state list
aws_vpc.dev
aws_vpc.prod
$ terraform state mv -state-out=../dev/new.tfstate aws_vpc.dev aws_vpc.dev
Move "aws_vpc.dev" to "aws_vpc.dev"
Successfully moved 1 object(s).
Push the updated state file back to the S3 bucket. This has to be done from the dev directory, so the state goes to the dev backend
$ cd ../dev
$ terraform state push new.tfstate
That's it! Now make sure everything is up to date
$ terraform plan
aws_vpc.dev: Refreshing state... [id=vpc-070562866e6399b45a]
No changes. Your infrastructure matches the configuration.
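It's worth running the same check in the prod directory as well; after the move, prod's plan should likewise report no changes, with only aws_vpc.prod left in its state:

```shell
$ cd ../prod
$ terraform plan
```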