Today marks Day 3 of my #30daysofAWSTerraform challenge! 🚀 After setting up the provider yesterday, I provisioned my very first AWS resource: a Simple Storage Service (S3) bucket.
It might seem simple, but this exercise covered the entire Terraform lifecycle (creation, modification, and destruction), proving that infrastructure no longer has to be static!
✅ Tasks Completed:
Resource Definition: Wrote the aws_s3_bucket block by referencing the official Terraform Registry documentation.
Full Lifecycle Execution: Ran the complete workflow: init → plan → apply (Creation) → destroy (Cleanup).
State Updates: Modified the bucket tags in the code and ran apply again to see how Terraform detects changes ("1 to change") without recreating the resource.
Automation: Used the -auto-approve flag to bypass the manual "yes" confirmation prompt (for lab environments only!).
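For reference, a minimal sketch of what the Day 3 configuration might look like (the bucket name and tag values here are placeholders I made up; real bucket names must be globally unique):

```hcl
# main.tf - a minimal sketch; bucket name and tags are hypothetical
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-terraform-demo-bucket-12345" # must be globally unique across all of AWS

  tags = {
    Name        = "My Terraform Bucket"
    Environment = "Dev"
  }
}

# Workflow from the tasks above (run from the config directory):
#   terraform init      # download the AWS provider
#   terraform plan      # preview changes
#   terraform apply     # type "yes" to confirm (or -auto-approve in labs only!)
#   terraform destroy   # cleanup
#
# Editing a tag value and re-running apply shows
# "Plan: 0 to add, 1 to change, 0 to destroy" - an update in place,
# not a recreate.
```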
📝 Notes:
Global Uniqueness: Learned that S3 bucket names must be globally unique across all AWS accounts, not just mine.
Resource Syntax: resource "&lt;resource_type&gt;" "&lt;local_name&gt;" (e.g., resource "aws_s3_bucket" "my_bucket").
Idempotency: Terraform is smart enough to know that if I run apply twice without code changes, it simply reports "No changes" instead of doing anything.
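On the global-uniqueness note: a common pattern (my own suggestion, not from the lesson) is to append a random suffix so the bucket name never collides with anyone else's:

```hcl
# Hypothetical pattern using the hashicorp/random provider
resource "random_id" "suffix" {
  byte_length = 4 # yields 8 hex characters
}

resource "aws_s3_bucket" "my_bucket" {
  # e.g. "my-terraform-demo-1a2b3c4d" - unique per apply
  bucket = "my-terraform-demo-${random_id.suffix.hex}"
}
```

The random value is saved in state, so re-running apply keeps the same name (idempotency in action).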
🔗 Resources:
My Code & Progress: https://github.com/Gokulprasath-N/Terraform-Full-Course-Aws/tree/main/lessons/day04
Video I watched: https://www.youtube.com/watch?v=09HQ_R1P7Lw
Mentor: Piyush Sachdeva
I am excited to move on to Terraform State file management with AWS S3 tomorrow!