Protecting your Terraform State

When applying a Terraform template, the state of the infrastructure being deployed is stored in a local terraform.tfstate file by default. If you're the sole developer on a project, keeping this file in your project folder or a repo may work for you; in a team, you want to ensure that the latest version of your Terraform state is stored in a central place, and source control isn't the best solution for this (state files can contain sensitive values, and merge conflicts can corrupt them).

You don't want to get into a situation where multiple developers are working on the same cloud infrastructure stack and stepping on each other's toes. You need a solution that prevents corruption of a stack, keeps a revision history, and keeps sensitive files out of your project structure (which could be on multiple desktops).

So where should I store it?

Saving Terraform State in S3

Terraform allows you to store this state in an S3 bucket by configuring an "s3" backend πŸŽ‰

1. Creating an S3 bucket

In AWS, you need to create an S3 bucket.

  • Navigate to the S3 section in AWS and click "Create Bucket".

  • Enter a name and select the region you want the bucket to be set up in, then click "Next".

  • Ensure you click the checkbox "Versioning (Keep all versions of an object in the same bucket.)", then click "Next".

  • Important: Ensure "Block Public Access" is selected (it should be by default), then click "Next".

  • Review your configuration and if you're happy, click "Create Bucket".
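If you prefer to define the bucket in code rather than clicking through the console, the steps above roughly correspond to the following Terraform sketch. The resource names, bucket name and region are placeholders, and note the chicken-and-egg caveat: this bootstrap configuration itself has to start out with local state.

```hcl
provider "aws" {
  region = "eu-west-1" # assumption: replace with your chosen region
}

# The bucket name must be globally unique across all of AWS.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-bucket"
}

# Keep every version of the state object so you can roll back.
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Block all public access to the state bucket.
resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```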

2. Integrating with your S3 Bucket

Now that you have created your S3 bucket, you're ready to add a backend configuration, which tells Terraform to store and read the state from S3.

  • In the directory where your Terraform files live, you will need to create a new file. You can name it whatever you like as long as it ends in .tf (Terraform only loads *.tf files); for this tutorial, we will name it backend.tf

  • In here, you will need to have the following structure:

terraform {
    backend "s3" {
        bucket  = "{your-bucket-name}"
        encrypt = true
        key     = "path/to/state/state.tfstate" # Where you want to store state in S3
        region  = "{your-bucket-region}"
    }
}
  • Once you have this file in your Terraform directory, run terraform init to initialise the new backend; after that, commands like terraform apply will store and read your state from S3. You will be able to see that the state file is now stored under the key you specified in the S3 bucket.

  • You can confirm this has worked by opening .terraform/terraform.tfstate in your Terraform directory and checking which backend is being used.

Prevent Concurrent Deployments

Now that your Terraform state is stored in an S3 bucket, how do we prevent multiple people applying a stack at the same time? 🀯

We will keep a file lock in a DynamoDB table 😁

1. Creating a DynamoDB table

In AWS, you need to create a DynamoDB table.

  • Navigate to the DynamoDB section in AWS and click "Create Table".

  • Give your DynamoDB table a name and set the Primary Key / Partition Key to LockID, with type String.

  • The rest of the settings can be left as default. Click "Create Table".
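The console steps above can also be sketched in Terraform. The resource and table names below are placeholders; the partition key, however, must be exactly LockID with type String for Terraform's locking to work.

```hcl
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-state-locks"     # placeholder table name
  billing_mode = "PAY_PER_REQUEST"           # no capacity planning for a lock table

  # Terraform requires the partition key to be exactly "LockID".
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```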

2. Integrating with your DynamoDB table

Now that you have created your DynamoDB table, you're ready to integrate it with your backend so that your Terraform state is locked while changes are applied.

  • Modify your backend file (the one we created earlier) and add a new key:
terraform {
    backend "s3" {
        bucket         = "{your-bucket-name}"
        dynamodb_table = "{your-dynamodb-table-name}" # This is the new key
        encrypt        = true
        key            = "path/to/state/state.tfstate"
        region         = "{your-bucket-region}"
    }
}
  • Run terraform init again to apply the backend change, and that's it: state locking is now working in your Terraform stack. You can test it by running terraform plan and, while it's running, navigating to your DynamoDB table in AWS; in the Items section, click "Search" and there should be a LockID entry, which disappears once the plan process has completed.

Voila, done! πŸ’₯

There is a lot of material out there on how to set this up, but it's something I enjoyed doing in a short amount of time while wanting to start blogging, so here is my first post, even though it's not the first article of its type!

Permissions: You will need to ensure that the AWS profile you are using has permission to read and write objects in the S3 bucket and to get, put and delete items in the DynamoDB table.
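As a rough guide, the permissions the backend needs could be expressed as an IAM policy like the sketch below. The bucket name, state key path and table name are the same placeholders used earlier; the exact minimal action list may vary with your setup, so treat this as a starting point rather than a definitive policy.

```hcl
resource "aws_iam_policy" "terraform_backend" {
  name = "terraform-backend-access" # hypothetical policy name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Needed to locate the state object in the bucket.
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::{your-bucket-name}"
      },
      {
        # Read and write the state file itself.
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject"]
        Resource = "arn:aws:s3:::{your-bucket-name}/path/to/state/*"
      },
      {
        # Acquire and release the state lock.
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
        Resource = "arn:aws:dynamodb:*:*:table/{your-dynamodb-table-name}"
      }
    ]
  })
}
```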
