Introduction
Day 3 of my Terraform journey is where things truly came together.
After setting up my environment, I focused on two foundational concepts from the labs:
- Provider block
- Resource block
These are the core building blocks of any Terraform configuration. By the end of the labs, I was able to define and create real AWS resources entirely through code.
Understanding the Provider Block
The provider block is how Terraform connects to a cloud platform like AWS.
provider "aws" {
  region = "us-west-2"
}
This tells Terraform:
- Use AWS as the provider
- Deploy resources in the us-west-2 region
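In recent Terraform versions, the provider is usually pinned in a terraform block alongside the provider block. A minimal sketch (the version constraint here is illustrative, not from the lab):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # illustrative version constraint
    }
  }
}
```

terraform init reads this block to decide which provider plugin to download.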
For authentication, I used:
aws configure
This stores my IAM user credentials locally, and Terraform automatically uses them to authenticate with AWS.
Understanding the Resource Block
The resource block is where actual infrastructure is defined.
In this lab, I created:
- An S3 bucket
- A security group
These resources represent real cloud infrastructure managed directly by Terraform.
Terraform Configuration
S3 Bucket
resource "aws_s3_bucket" "my_bucket" {
  bucket = "mary-mutua-tf-lab-unique-2026"

  tags = {
    Name    = "My S3 Bucket"
    Purpose = "Terraform Lab"
  }
}
This creates a storage bucket in AWS. One key requirement is that bucket names must be globally unique.
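Because bucket names collide easily, a common pattern (not part of the lab code) is to append a random suffix using the random provider:

```hcl
# Generates a short random hex string on first apply
resource "random_id" "suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "my_bucket" {
  # Produces a name like "mary-mutua-tf-lab-a1b2c3d4"
  bucket = "mary-mutua-tf-lab-${random_id.suffix.hex}"
}
```

The suffix is stored in state, so the name stays stable across later plans and applies.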
Security Group
resource "aws_security_group" "web_sg" {
  name        = "web_server_inbound"
  description = "Allow inbound HTTPS traffic"

  ingress {
    description = "Allow HTTPS from the Internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "web_server_inbound"
  }
}
This defines network access rules, allowing inbound HTTPS traffic on port 443 from any IP address (0.0.0.0/0).
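Security groups also control outbound traffic. Terraform-managed security groups do not keep AWS's default allow-all egress rule, so outbound access is often declared explicitly. A sketch of an egress block that could be added inside the same resource (not part of the lab code):

```hcl
  egress {
    description = "Allow all outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
```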
Variables (Concept Learned)
In this lab, I learned that Terraform can use environment variables for configuration and authentication.
Although I primarily used credentials configured via:
aws configure
Terraform also supports environment variables such as:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_DEFAULT_REGION
These are commonly used in automated environments, but for this setup, AWS CLI configuration was sufficient.
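In a CI pipeline, for example, the same credentials could be supplied like this (the values below are placeholders for illustration, not real keys):

```shell
# Placeholder values only; never commit real credentials to version control.
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-west-2"

# Terraform and the AWS CLI both read these variables automatically.
echo "Deploying to region: $AWS_DEFAULT_REGION"
```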
Terraform Workflow
To deploy the infrastructure, I used the standard Terraform workflow:
terraform init
terraform plan
terraform apply
- terraform init initializes the project and downloads required providers
- terraform plan previews the changes Terraform will make
- terraform apply creates the resources in AWS
For faster execution during the lab, I also used:
terraform apply -auto-approve
This skips the interactive confirmation step and applies the changes immediately.
Issues I Encountered (and How I Fixed Them)
1. S3 Bucket Name Already Exists
Error:
BucketAlreadyExists
Cause:
S3 bucket names must be globally unique across all AWS users.
Fix:
I updated the bucket name to a unique value.
2. Undeclared VPC Resource
Error:
Reference to undeclared resource aws_vpc.vpc
Cause:
I referenced a VPC that was not defined in my Terraform configuration.
Fix:
I removed the line:
vpc_id = aws_vpc.vpc.id
Terraform then automatically used the default VPC.
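Alternatively, the default VPC can be referenced explicitly with a data source rather than left implicit. A sketch of that approach (not the fix I used in the lab):

```hcl
# Looks up the account's default VPC in the configured region
data "aws_vpc" "default" {
  default = true
}

resource "aws_security_group" "web_sg" {
  name   = "web_server_inbound"
  vpc_id = data.aws_vpc.default.id
  # ... ingress rules as before
}
```

Making the VPC explicit keeps the configuration portable if the security group later needs to move to a custom VPC.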
Cleaning Up Resources
After completing the lab, I cleaned up all resources:
terraform destroy
This is an important habit to avoid unnecessary cloud costs.
Key Takeaways
- The provider block connects Terraform to AWS
- The resource block defines actual infrastructure
- Terraform uses credentials from aws configure
- S3 bucket names must be globally unique
- Terraform validates configurations strictly and rejects references to undeclared resources
- Infrastructure can be created and destroyed using code
Resources and References
This lab was guided by hands-on lab materials from the challenge, which reinforced provider configuration, resource creation, and the Terraform workflow through practical examples.
Conclusion
Day 3 helped me understand how Terraform moves from configuration to real infrastructure.
By combining provider and resource blocks, I was able to define and manage AWS resources using code instead of manual setup. This is the foundation of Infrastructure as Code.
Follow My Journey
This is Day 3 of my 30-Day Terraform Challenge.
See you on Day 4 🚀