In our previous article in the AWS EKS series, I mentioned that we would follow up with a dedicated article on managing Terraform state. Managing Terraform state correctly is one of the most important skills for anyone using Terraform in real-world AWS environments.
In this article, we'll explore how to store your Terraform state in an Amazon S3 remote backend, why this matters, essential best practices (including state locking), and a hands-on example you can follow.
This guide uses official Terraform documentation as the source of truth.
Why Terraform State Matters
Terraform uses a state file (terraform.tfstate) to track what infrastructure it manages. This file contains important mappings between your configuration and real resources — so Terraform can make safe and predictable changes in future runs.
By default, Terraform stores state locally, which works for experimentation but is unsafe for collaboration, automation, and production use.
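For reference, the default behavior is equivalent to the local backend shown below (a minimal sketch; in practice you simply omit the block and Terraform writes terraform.tfstate into the working directory):
terraform {
  # Default behavior: state is kept in a local file next to your configuration
  backend "local" {
    path = "terraform.tfstate"
  }
}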
Why Use a Remote Backend like S3?
Using a remote backend like Amazon S3 provides:
- Centralized state accessible by all team members
- Collaboration support
- Encryption and access control via IAM
- Versioning for recovery
- Support for state locking to avoid conflicts
Terraform's official documentation confirms that S3 is a supported backend type that can store state and support locking — and it's one of the most popular and reliable backends used in AWS environments.
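To illustrate the IAM side, here is a minimal sketch of a policy granting the permissions the S3 backend needs. The policy name is hypothetical, and the bucket and key should match your own backend configuration:
resource "aws_iam_policy" "terraform_state_access" {
  # Hypothetical policy name; attach it to the users/roles that run Terraform
  name = "terraform-state-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Allow Terraform to locate the state object in the bucket
        Effect   = "Allow"
        Action   = ["s3:ListBucket"]
        Resource = "arn:aws:s3:::my-terraform-state-bucket2z"
      },
      {
        # Allow reads and writes of the state object and its .tflock companion
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
        Resource = "arn:aws:s3:::my-terraform-state-bucket2z/envs/prod/terraform.tfstate*"
      }
    ]
  })
}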
State Locking — Preventing Conflicts
Terraform needs to prevent two people from writing changes to the same state file at the same time — otherwise state could become corrupted.
Modern Locking (S3 Native)
Terraform now supports native state locking in S3 through a lockfile mechanism. This means Terraform will create a .tflock file in the bucket alongside your state file, preventing concurrent runs.
This is done using the optional argument use_lockfile in your backend config.
terraform {
  backend "s3" {
    bucket       = "my-terraform-state-bucket2z"
    key          = "envs/prod/terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true
  }
}
- use_lockfile = true enables S3-native locking
- S3 creates a .tflock file during a Terraform run
- Other processes must wait until the lock is released
This removes the need for a separate DynamoDB table for state locking — an approach that is officially deprecated and will be removed in future Terraform releases.
Note: DynamoDB locking is still supported for backward compatibility, but Terraform now prefers S3 native locking for new configurations.
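For comparison, the legacy configuration looked roughly like this (a sketch only; the table name is a placeholder):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket2z"
    key            = "envs/prod/terraform.tfstate"
    region         = "us-east-1"
    # Deprecated: locking via a separate DynamoDB table
    dynamodb_table = "terraform-locks"
  }
}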
Prerequisites
Before we begin, make sure you have:
- AWS CLI installed and configured
aws configure
- Terraform installed
terraform version
- Permissions to create S3 buckets and apply Terraform changes
Step 1 — Create an S3 Bucket for Terraform State
Terraform cannot create the backend bucket for you, so you must create it first.
aws s3api create-bucket \
--bucket my-terraform-state-bucket2z \
--region us-east-1
Replace my-terraform-state-bucket2z with a globally unique name.
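If you create the bucket in a region other than us-east-1, the API also requires a location constraint. For example (eu-west-1 is used here purely as an illustration):
aws s3api create-bucket \
--bucket my-terraform-state-bucket2z \
--region eu-west-1 \
--create-bucket-configuration LocationConstraint=eu-west-1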
Step 2 — Enable Bucket Versioning (Best Practice)
Versioning helps you recover from accidental deletions or corruption.
aws s3api put-bucket-versioning \
--bucket my-terraform-state-bucket2z \
--versioning-configuration Status=Enabled
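You can verify that versioning is active; the output should report a Status of Enabled:
aws s3api get-bucket-versioning \
--bucket my-terraform-state-bucket2z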
Step 3 — Enable Encryption on the Bucket
Encrypting state at rest is essential for security:
aws s3api put-bucket-encryption \
--bucket my-terraform-state-bucket2z \
--server-side-encryption-configuration '{
  "Rules": [{
    "ApplyServerSideEncryptionByDefault": {
      "SSEAlgorithm": "AES256"
    }
  }]
}'
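To confirm the default encryption is in place, you can read the configuration back:
aws s3api get-bucket-encryption \
--bucket my-terraform-state-bucket2z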
Troubleshooting: AccessDenied Error
If you encounter an AccessDenied error when running the encryption command:
An error occurred (AccessDenied) when calling the PutBucketEncryption operation: Access Denied
Common causes and fixes:
- Insufficient IAM Permissions — Your IAM user/role needs the s3:PutEncryptionConfiguration permission
- Bucket doesn't exist — Verify the bucket name is correct:
aws s3api head-bucket --bucket my-terraform-state-bucket2z
- Wrong AWS credentials — Verify your identity:
aws sts get-caller-identity
Step 4 — Configure Terraform Backend
In your Terraform directory, define the backend like this:
terraform {
  backend "s3" {
    bucket       = "my-terraform-state-bucket2z"
    key          = "demo/terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true
  }
}
| Option | Description |
|---|---|
| bucket | The S3 bucket you created |
| key | Path to store the state file |
| use_lockfile | Enables native S3 state locking |
Demo: Using S3 Remote State
1. Create a Terraform Project
mkdir terraform-s3-state-demo
cd terraform-s3-state-demo
2. Add Terraform Code
Create a file main.tf:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket       = "my-terraform-state-bucket2z"
    key          = "demo/terraform.tfstate"
    region       = "us-east-1"
    encrypt      = true
    use_lockfile = true
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "example" {
  bucket = "tf-s3-backend-demo-example"
}
3. Initialize the Backend
terraform init
Terraform will initialize and automatically configure the backend.
4. Apply the Changes
terraform apply
Your state file is now stored in S3, and state locking is enabled.
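You can confirm this by listing the key prefix used in the backend configuration above:
aws s3 ls s3://my-terraform-state-bucket2z/demo/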
Useful Terraform Backend Commands
Here are some helpful Terraform commands for managing remote state:
Migrate Local State to Remote
terraform init -migrate-state
Reconfigure Backend
terraform init -reconfigure
List State Resources
terraform state list
Inspect a Resource
terraform state show aws_s3_bucket.example
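Back Up or Inspect Remote State
If you want a local snapshot of the remote state (for inspection or as an ad-hoc backup), terraform state pull writes it to stdout; the file name below is just an example:
terraform state pull > state-backup.json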
Additional Best Practices
Store State Remotely
Use S3 or Terraform Cloud — never store state locally in team environments.
Don't Edit or Delete State Manually
Manual edits or deletions can corrupt state; use the terraform state commands instead.
Isolate State by Environment
Use different keys for different environments:
envs/dev/terraform.tfstate
envs/staging/terraform.tfstate
envs/prod/terraform.tfstate
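A common pattern that pairs well with this layout is partial backend configuration (a sketch: it assumes you omit key from the backend block and supply it at init time instead):
terraform init -backend-config="key=envs/dev/terraform.tfstate"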
Regular Backups
Enable S3 versioning so you can roll back state that was accidentally overwritten or deleted.
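For example, you can list the stored versions of the state object and download an older one for recovery; the version ID and output file name are placeholders:
aws s3api list-object-versions \
--bucket my-terraform-state-bucket2z \
--prefix demo/terraform.tfstate

aws s3api get-object \
--bucket my-terraform-state-bucket2z \
--key demo/terraform.tfstate \
--version-id <VERSION_ID> \
terraform.tfstate.backup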
Lock State on All Writes
Enable use_lockfile = true. It ensures only one Terraform process can modify state at a time, protecting against conflicts and corruption.
Cleanup: Removing Demo Resources Safely
After completing the demo, it's important to clean up resources to avoid unnecessary AWS costs and to keep your environment tidy.
Important: Always destroy Terraform-managed infrastructure before deleting the remote state backend.
Step 1: Destroy Terraform Resources
From your Terraform project directory, run:
terraform destroy
This command will:
- Read the state file from S3
- Identify all managed resources
- Safely delete them from AWS
Confirm when prompted.
At this point:
- All demo infrastructure is removed
- The Terraform state file still exists in S3 (this is expected)
Step 2: Verify State File in S3 (Optional)
You can confirm the state file still exists:
aws s3 ls s3://my-terraform-state-bucket2z/demo/
Terraform keeps the state file so you have:
- A record of managed infrastructure
- Historical versions (if versioning is enabled)
Step 3: Delete the Terraform State File (Optional)
If this was only a demo and you no longer need the state file, you may delete it after destroying resources.
aws s3 rm s3://my-terraform-state-bucket2z/demo/terraform.tfstate
Warning: Never delete a state file while infrastructure still exists — this will orphan resources.
Step 4: (Optional) Disable or Remove Lock Files
If you used S3 native locking, Terraform automatically removes the .tflock file after operations complete.
You normally do not need to manually remove lock files.
Step 5: Delete the S3 Backend Bucket (Optional)
If the S3 bucket was created only for this demo, and you no longer need it:
Empty the Bucket:
aws s3 rm s3://my-terraform-state-bucket2z --recursive
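Note: because versioning was enabled in Step 2, aws s3 rm only adds delete markers; older object versions remain, and the bucket cannot be deleted until they are removed. You can empty a versioned bucket from the S3 console, or delete the remaining versions yourself (the version ID below is a placeholder):
aws s3api list-object-versions \
--bucket my-terraform-state-bucket2z

aws s3api delete-object \
--bucket my-terraform-state-bucket2z \
--key demo/terraform.tfstate \
--version-id <VERSION_ID>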
Delete the Bucket:
aws s3api delete-bucket \
--bucket my-terraform-state-bucket2z \
--region us-east-1
Cleanup Best Practices
- Always run terraform destroy first
- Never delete state files before destroying resources
- Keep state buckets for long-term projects
- Enable versioning for recovery
- Restrict access using IAM
Summary
Terraform state is foundational to infrastructure automation.
Storing it in a remote S3 backend with native state locking enabled:
- Ensures collaboration
- Prevents concurrent modifications
- Allows controlled recovery
- Supports CI/CD workflows
S3 remote state with native locking is now a best practice recommended by Terraform — and removes the need for a DynamoDB lock table in most cases.
Final Thoughts
Terraform state cleanup is just as important as provisioning.
A safe cleanup process ensures:
- No orphaned resources
- No unexpected AWS charges
- Clean environments for future work
When using remote state: Terraform is always the source of truth — let it manage the lifecycle from start to finish.

