Welcome to Day 3 of my 30 Days of AWS Terraform Challenge!
Today marks an important milestone — I wrote my first Terraform configuration that actually creates a real AWS resource, an S3 bucket.
This might seem simple on the surface, but the concepts covered today are the foundation for every cloud automation task we’ll do going forward.
Let’s break down the entire process in a beginner-friendly way.
Why Start with S3?
Amazon S3 is one of the simplest services to automate.
It doesn’t need a VPC, networking, or complex dependencies, which makes it perfect for understanding:
- How Terraform resources are written
- How provider blocks work
- How to run Terraform commands
- How a state file tracks your AWS infrastructure
If you’re learning Infrastructure as Code, this is the ideal first step.
Folder Setup
Inside my project, I created a new folder:
day03/
And inside it, a single file: main.tf
Terraform does not care about the filename; it only cares that the extension is .tf.
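Before any resources, Terraform also needs a provider block that tells it which cloud and region to talk to. A minimal sketch (the region is my assumption; credentials come from your usual AWS CLI or environment configuration):

provider "aws" {
  region = "us-east-1" # assumed region; change to the one you use
}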
Writing the S3 Bucket Configuration
From the official Terraform documentation, the S3 resource block looks like this:
resource "aws_s3_bucket" "firstbucket" {
bucket = "my-demo-bucket-123"
tags = {
Name = "MyBucket"
Environment = "Dev"
}
}
What this means:
- aws_s3_bucket → Terraform resource type
- firstbucket → internal name used for referencing
- bucket → the bucket name, which must be globally unique across all of AWS
- tags → key–value metadata
Just like that, our infrastructure is defined as code.
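Since firstbucket is only an internal label, it is also how you reference the resource elsewhere in the configuration. As a small illustrative sketch, an output block that prints the bucket name after apply:

output "bucket_name" {
  # aws_s3_bucket exposes the bucket name as its id attribute
  value = aws_s3_bucket.firstbucket.id
}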
Running the Terraform Workflow
Terraform has a very predictable 4-step workflow:
1. terraform init
This command downloads the AWS provider plugin and prepares your working directory.
terraform init
You run this whenever you create a new folder or add a new provider.
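You can also pin the provider version so init downloads a predictable release instead of whatever is latest. A minimal sketch (the ~> 5.0 constraint is just an example):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # example constraint; pick the version you tested with
    }
  }
}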
2. terraform plan
A dry-run that shows what changes Terraform will make.
terraform plan
For today’s code, it shows:
Plan: 1 to add, 0 to change, 0 to destroy.
Meaning Terraform will create exactly one resource: our S3 bucket.
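Above that summary line, plan also prints a diff of what it will create. Abridged, it looks roughly like this (exact formatting varies by Terraform version):

  # aws_s3_bucket.firstbucket will be created
  + resource "aws_s3_bucket" "firstbucket" {
      + bucket = "my-demo-bucket-123"
      + tags   = {
          + "Environment" = "Dev"
          + "Name"        = "MyBucket"
        }
    }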
3. terraform apply
Time to actually create the bucket.
terraform apply
Terraform asks for confirmation:
Enter a value: yes
Or skip the prompt:
terraform apply -auto-approve
Within a few seconds, the new S3 bucket appears in the AWS console!
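You can also verify from the terminal instead of the console. Assuming the AWS CLI is installed and configured:

aws s3 ls | grep my-demo-bucket-123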
4. terraform destroy
To delete everything:
terraform destroy
Or:
terraform destroy -auto-approve
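Either way, Terraform finishes with a summary along the lines of:

Destroy complete! Resources: 1 destroyed.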
Terraform cleans up the bucket and returns your environment to its original state.
This “build → modify → destroy” cycle is a huge part of real DevOps workflows.
How Terraform Detects Changes
One of the coolest things I learned today:
Terraform keeps track of all created resources using a local file called:
terraform.tfstate
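You don’t need to read the JSON directly; two standard commands inspect what the state file is tracking:

terraform state list   # lists tracked resources, e.g. aws_s3_bucket.firstbucket
terraform show         # prints the recorded attributes of each resource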
If I update the tag in my code:
Name = "MyBucket 2.0"
And run:
terraform plan
Terraform compares:
- The desired state (my .tf files)
- The actual state it has recorded in terraform.tfstate, refreshed against AWS
And reports:
Plan: 0 to add, 1 to change, 0 to destroy.
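Abridged, the diff section of that plan marks the resource as an in-place update (again, exact formatting varies by version):

  # aws_s3_bucket.firstbucket will be updated in-place
  ~ resource "aws_s3_bucket" "firstbucket" {
      ~ tags = {
          ~ "Name" = "MyBucket" -> "MyBucket 2.0"
            # (1 unchanged element hidden)
        }
    }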
This state-management capability is what makes Terraform so powerful.
Key Learnings From Day 3
- How to use official Terraform docs effectively
- Understanding resource blocks and provider blocks
- Running init, plan, apply, and destroy
- Importance of globally unique S3 bucket names
- How the Terraform state file tracks real AWS infrastructure
- How Terraform automatically identifies changes and updates resources
Final Thoughts
Today was the moment where Terraform “clicked” for me.
Seeing an actual AWS resource being created from a simple .tf file feels like unlocking a new superpower.
Terraform removes the manual clicking and turns infrastructure into repeatable, version-controlled automation — something every DevOps engineer must master.

