Every day of this #30DaysOfAwsTerraform challenge feels like peeling back another layer of how real infrastructure works. And honestly? The deeper I go, the more I realize how much engineering simply involves understanding where things live, when to move them and how to protect them.
Today was one of those “oh… this is serious” days.
Day 04 was all about understanding Terraform state: what it is, why it matters, and why storing it locally is basically an invitation for chaos.
My goal today was simple: move my state to a secure, "production-style" Terraform remote backend on AWS.
I started today thinking, “State file? Okay cool, it’s probably one of those files Terraform needs somewhere.”
Then I learned it contains sensitive configurations, IDs, metadata, resource secrets and the entire known state of my infrastructure. Basically, everything Terraform uses to decide what to create, update, or destroy.
That was the moment things got real.
Terraform literally compares your real infrastructure to your desired config every time you run plan, apply, or destroy.
If that state file gets corrupted or overwritten by someone else?
- Terraform will make wrong assumptions.
- Wrong assumptions = scary outcomes.
So keeping it locally felt… irresponsible. Especially if you work in a team of engineers.
It’s not like an .env file you just pass around; the state file should never be edited manually and must always be protected.
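To see why the file is so sensitive, here's a contrived local sketch. The JSON below is made up, but its shape mirrors a real `terraform.tfstate`: resource attributes, including secrets, sit in it as plain text for anyone with read access.

```shell
# Contrived sample state file (made-up values, real-ish shape)
cat > sample.tfstate <<'EOF'
{
  "version": 4,
  "terraform_version": "1.10.0",
  "serial": 7,
  "resources": [
    {
      "type": "aws_db_instance",
      "name": "main",
      "instances": [
        { "attributes": { "id": "db-123", "password": "super-secret" } }
      ]
    }
  ]
}
EOF

# Anyone who can read the file can read the secret:
grep '"password"' sample.tfstate
```

That's the whole problem in two commands: if this file sits unencrypted on a laptop or in a repo, so do your credentials.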
So how did I add a remote backend to Terraform?
Here's a sample of the configuration:
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
  }

  backend "s3" {
    bucket       = "terraform-state-1764317914"
    key          = "dev/terraform.tfstate"
    region       = "eu-west-3"
    use_lockfile = true
    encrypt      = true
    profile      = "<your aws profile for credentials>"
  }
}
```
Did you notice the use_lockfile field? It tells Terraform:
“Before you touch anything… lock the state.” This prevents anyone else from running updates until the process finishes, avoiding concurrent modifications.
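The idea behind the lock can be sketched in plain shell. To be clear, this is an analogy I wrote, not Terraform's implementation: the S3 backend does the equivalent by creating a lock object next to the state file in the bucket.

```shell
# Local-only sketch of state locking (analogy, not Terraform's real code).
acquire_lock() {
  # mkdir is atomic: it either creates the lock directory or fails because
  # another process already holds it.
  if mkdir state.lock 2>/dev/null; then
    echo "lock acquired"
  else
    echo "state is locked by another process" >&2
    return 1
  fi
}

release_lock() { rmdir state.lock; }

acquire_lock          # the first "apply" gets the lock
acquire_lock || true  # a concurrent run is refused until the lock is released
release_lock
```

The second caller is rejected until the first releases the lock, which is exactly the behavior that saves two teammates from applying over each other's state.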
The bucket (terraform-state-1764317914, always use a unique name for your buckets) needs to exist before Terraform initializes. I didn’t want to click through the console anymore lol, so I used a shell script to create it with secure configurations.
A sample of this shell script is below:
```bash
#!/bin/bash
set -euo pipefail

BUCKET_NAME="terraform-state-$(date +%s)"
REGION="eu-west-3"
PROFILE="<your aws profile for credentials>"

# Create the bucket
aws s3 mb "s3://$BUCKET_NAME" --region "$REGION" --profile "$PROFILE"

# Enable versioning so every change to the state file is kept as a
# recoverable version
aws s3api put-bucket-versioning --profile "$PROFILE" \
  --bucket "$BUCKET_NAME" \
  --versioning-configuration Status=Enabled

# Enable default server-side encryption (AES256)
aws s3api put-bucket-encryption --profile "$PROFILE" \
  --bucket "$BUCKET_NAME" \
  --server-side-encryption-configuration '{
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "AES256"
        }
      }
    ]
  }'

echo "======================================"
echo "S3 Backend Setup Complete!"
echo "Bucket: $BUCKET_NAME"
```
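One thing worth checking before running a script like this: the generated name has to satisfy S3's bucket naming rules (3–63 characters; lowercase letters, digits, and hyphens). Here's a small sanity check I'd add; the validation logic is my own sketch, not part of the original script.

```shell
# Sanity-check the generated name against S3 naming rules (my own addition).
BUCKET_NAME="terraform-state-$(date +%s)"
LEN=${#BUCKET_NAME}

if [ "$LEN" -ge 3 ] && [ "$LEN" -le 63 ] \
   && echo "$BUCKET_NAME" | grep -Eq '^[a-z0-9][a-z0-9-]*[a-z0-9]$'; then
  echo "valid bucket name: $BUCKET_NAME"
else
  echo "invalid bucket name: $BUCKET_NAME" >&2
  exit 1
fi
```

The `terraform-state-<epoch>` pattern passes these rules comfortably, and the timestamp suffix keeps it globally unique.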
Versioning gives you a recoverable history of the state file, and encryption protects the secrets inside it, so I made sure both were enabled immediately.
The biggest thing I walked away with today is the purpose of a remote state:
- It keeps your infrastructure data safe
- It prevents concurrent updates, if configured correctly
- It allows collaboration
- And with S3 versioning, every change to the state file is saved as a backup automatically
I also finally understand how Terraform actually thinks... it's always reconciling my current infrastructure with the desired config, and the state file is its source of truth.
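That reconciling loop can be pictured with a toy sketch. This is not Terraform's real algorithm, just the core idea: diff a "desired" resource list against the "current" one recorded in state, and what's missing is roughly what a plan would propose to create.

```shell
# Toy sketch of the reconcile idea (analogy only, not Terraform's algorithm).
printf 'bucket\ninstance\n' > desired.txt   # what my config declares
printf 'bucket\n' > current.txt             # what the state says exists

# Lines only in desired.txt are what a plan would propose to create:
comm -13 current.txt desired.txt
```

This is also why a corrupted state is so dangerous: if `current.txt` lies, the diff lies, and Terraform confidently does the wrong thing.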
Getting better at Terraform feels really good. I'm getting really close to setting up production-ready infrastructure with code. Next, I'll be learning about Terraform variables.
I’m still curious about:
- deeper state management workflows
- workspace organization
- how teams structure their Terraform environments
On to Day 05.
Join the challenge to learn Terraform in 30 days. Or at least try.
Thank you for reading