Managing Terraform State: Best Practices for DevOps

How does Terraform know which resources it manages? It records, in JSON format, a mapping between the resources in your configuration files and their real-world representations. This record is known as the Terraform state file and is usually named terraform.tfstate. For a personal project, the state file can live on your local machine; this is called the local backend. When working with a team, however, keeping state files on numerous local machines is a recipe for disaster, and committing them to version control is no better, because the state file can contain sensitive secrets. So what are some best practices for managing state in Terraform?
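To make the mapping concrete, here is a heavily trimmed, illustrative excerpt of a terraform.tfstate file; the exact fields vary by Terraform version and provider, and the bucket attributes shown are placeholders:

{
  "version": 4,
  "terraform_version": "1.5.0",
  "serial": 3,
  "resources": [
    {
      "mode": "managed",
      "type": "aws_s3_bucket",
      "name": "tf_state",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        {
          "attributes": {
            "bucket": "<your-bucket-name>",
            "region": "us-east-2"
          }
        }
      ]
    }
  ]
}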

Storage of State Files

The best practice for storing state files is to use a remote backend. Amazon S3, Azure Storage, Google Cloud Storage, Terraform Cloud, and Terraform Enterprise can all serve as remote backends. These storage options help avoid manual error, support locking so that no two people can run terraform apply at the same time, and protect the secrets that state files may contain.
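The pattern looks similar across providers. As a quick sketch, here is roughly what an Azure Storage backend block looks like; the resource group, storage account, and container names are placeholders you would substitute with your own:

terraform {
  backend "azurerm" {
    resource_group_name  = "<your-resource-group>"
    storage_account_name = "<your-storage-account>"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}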

How to create a remote backend on S3

You can create an S3 bucket to store your state files. S3 buckets support versioning, encryption, and blocking public access, which keeps the state file safe. You can also use a DynamoDB table to achieve state locking. Here is a Terraform configuration that can be deployed from a separate directory.

provider "aws" {
    region = "us-east-2"
}
// Create a unique bucket
resource "aws_s3_bucket" "tf_state" {
  bucket ="<your-bucket-name>"

  # Prevent accidental deletion of the S3 bucket
  lifecycle {
    prevent_destroy = true
  }
}

// Enable bucket versioning
resource "aws_s3_bucket_versioning" "enables" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

// Enable S3 server side encryption 
resource "aws_s3_bucket_server_side_encryption_configuration" "default" {
  bucket = aws_s3_bucket.tf_state.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

// Block public access
resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket = aws_s3_bucket.tf_state.id
  block_public_acls = true
  block_public_policy = true
  ignore_public_acls = true
  restrict_public_buckets = true
}

// Create DynamoDB for locking
resource "aws_dynamodb_table" "terraform_locks" {
  name = "<your-table-name>"
  billing_mode = "PAY_PER_REQUEST"
  hash_key = "LockID"

  attribute {
    name = "LockID"
    type = "S"

  }
}

You can now run the following Terraform commands in order:

  1. terraform init
  2. terraform plan
  3. terraform apply
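If all three commands succeed, the tail of the terraform apply output should look roughly like this (the resource count assumes exactly the configuration above):

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.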

An S3 bucket along with a DynamoDB table will be created. Next, configure the backend so that state files are stored remotely in the S3 bucket you just created. This is the Terraform block that sets the backend; note that the bucket and table names must be written out literally, since the backend block cannot reference variables.

terraform {
  backend "s3" {
    bucket = "<your-bucket-name>"
    key    = "global/s3/terraform.tfstate"
    region = "us-east-2"

    dynamodb_table = "<your-table-name>"
    encrypt        = true
  }
}

Now run terraform init again. Terraform will detect the new backend configuration and offer to copy the existing local state file to the specified S3 bucket.

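The prompt looks roughly like this (output abridged; the exact wording varies by Terraform version):

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend
  to the newly configured "s3" backend. Enter "yes" to copy and "no" to
  start with an empty state.

  Enter a value: yes

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.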

After you confirm with yes, the state file is no longer on your local machine but stored remotely. This lets teams collaborate efficiently without wreaking havoc on existing infrastructure, and it is the recommended practice. But how can environments be isolated? A future post will provide a solution.
