Day 25 of my 30-Day Terraform Challenge was a practical build: deploy a static website on AWS using Terraform.
The goal was to apply the habits from the previous days in one small project:
- reusable modules
- environment separation
- remote state
- clean variables
- consistent tagging
- reviewed plans
- safe cleanup
GitHub reference:
Day 25 code
What I Built
I built a static website stack with:
- an S3 bucket
- S3 static website hosting
- uploaded index.html and error.html
- bucket policy for public website reads
- reusable Terraform module
- dev environment configuration
- remote backend with S3 and DynamoDB
- optional CloudFront support
The website was verified through the S3 website endpoint:
http://mary-mutua-day25-static-website-dev-718417034043.s3-website-us-east-1.amazonaws.com
Note: I destroyed the resources after verification to avoid AWS charges.
Project Structure
I used this structure:
day_25/day25-static-website/
├── bootstrap/
│   ├── main.tf
│   ├── outputs.tf
│   └── variables.tf
├── envs/
│   └── dev/
│       ├── backend.tf
│       ├── main.tf
│       ├── outputs.tf
│       ├── provider.tf
│       ├── terraform.tfvars
│       └── variables.tf
└── modules/
    └── s3-static-website/
        ├── main.tf
        ├── outputs.tf
        └── variables.tf
The module lives in:
modules/s3-static-website
The dev environment calls that module from:
envs/dev
This separation matters because the module should be reusable, while the environment folder should contain environment-specific values.
Why I Used a Module
I could have put everything in one main.tf, but that would not scale well.
A module lets me define the static website once and reuse it later for:
- dev
- staging
- production
- another website
- another AWS account
The module accepts inputs like:
- bucket_name
- environment
- index_document
- error_document
- tags
- enable_cloudfront
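Those inputs might be declared in the module's variables.tf roughly like this (a sketch; the types and defaults are my assumptions, not the actual code):

```hcl
# Sketch of modules/s3-static-website/variables.tf (defaults assumed)
variable "bucket_name" {
  type        = string
  description = "Globally unique name for the website bucket"
}

variable "environment" {
  type        = string
  description = "Deployment environment, e.g. dev, staging, production"
}

variable "index_document" {
  type    = string
  default = "index.html"
}

variable "error_document" {
  type    = string
  default = "error.html"
}

variable "tags" {
  type    = map(string)
  default = {}
}

variable "enable_cloudfront" {
  type    = bool
  default = false
}
```

Giving sensible defaults to the document names and the CloudFront toggle keeps each environment's tfvars file short.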
Then the dev environment passes values into it:
module "static_website" {
  source            = "../../modules/s3-static-website"
  bucket_name       = var.bucket_name
  environment       = var.environment
  index_document    = var.index_document
  error_document    = var.error_document
  enable_cloudfront = var.enable_cloudfront

  tags = {
    Owner = "terraform-challenge"
    Day   = "25"
  }
}
That is the DRY principle in practice: define the infrastructure pattern once, then reuse it with different inputs.
The S3 Website Module
The module creates an S3 bucket:
resource "aws_s3_bucket" "website" {
  bucket        = var.bucket_name
  force_destroy = var.environment != "production"

  tags = local.common_tags
}
The force_destroy setting is useful for dev because it allows Terraform to delete the bucket even when it contains uploaded objects.
But I would not want that behavior in production, so the condition protects production:
force_destroy = var.environment != "production"
The module also enables static website hosting:
resource "aws_s3_bucket_website_configuration" "website" {
  bucket = aws_s3_bucket.website.id

  index_document {
    suffix = var.index_document
  }

  error_document {
    key = var.error_document
  }
}
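The bucket policy for public website reads is not shown above; it might look roughly like this (a sketch; the resource names are assumed):

```hcl
# Sketch: allow public reads of website objects (resource names assumed)
resource "aws_s3_bucket_policy" "website" {
  bucket = aws_s3_bucket.website.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.website.arn}/*"
    }]
  })
}

# Note: new S3 buckets block public policies by default, so an
# aws_s3_bucket_public_access_block with the blocks disabled is
# also needed before this policy can attach.
```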
Then Terraform uploads the HTML files:
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.website.id
  key          = "index.html"
  content_type = "text/html"

  content = <<-HTML
    <!DOCTYPE html>
    <html>
      <head><title>Terraform Static Website</title></head>
      <body>
        <h1>Deployed with Terraform</h1>
        <p>Environment: ${var.environment}</p>
        <p>Bucket: ${var.bucket_name}</p>
      </body>
    </html>
  HTML
}
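The website_endpoint output read later in the post could be exposed from the module roughly like this (a sketch; the output names are assumed):

```hcl
# Sketch of modules/s3-static-website/outputs.tf (names assumed)
output "website_endpoint" {
  value       = aws_s3_bucket_website_configuration.website.website_endpoint
  description = "S3 static website endpoint"
}

output "bucket_name" {
  value = aws_s3_bucket.website.id
}
```

The environment then re-exports these in its own outputs.tf, which is what `terraform output -raw website_endpoint` reads.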
Remote State
Before deploying the website, I created a remote backend using a bootstrap folder.
That created:
- an S3 bucket for Terraform state
- a DynamoDB table for state locking
- S3 encryption
- S3 versioning
Remote state matters because Terraform state is how Terraform tracks real infrastructure. Keeping it locally is risky when working with teams or across machines.
The backend protects the workflow by:
- storing state remotely
- preventing concurrent changes with locking
- keeping state versions for recovery
- avoiding local-only state files
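Once the bootstrap resources exist, the dev environment points at them through a backend block; it might look roughly like this (bucket and table names here are placeholders, not the actual values):

```hcl
# Sketch of envs/dev/backend.tf (bucket and table names are placeholders)
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "day25/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```

Because backend blocks cannot reference variables, these values are hard-coded per environment, which is another reason the envs/dev folder exists.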
One important cleanup lesson: if your state bucket has versioning enabled, deleting the bucket later requires deleting all object versions and delete markers first.
That was a useful real-world reminder.
CloudFront Configuration
The module includes optional CloudFront support:
resource "aws_cloudfront_distribution" "website" {
  count = var.enable_cloudfront ? 1 : 0

  enabled             = true
  default_root_object = var.index_document
  price_class         = "PriceClass_100"

  origin {
    # S3 website endpoints only speak HTTP, hence the custom origin
    domain_name = aws_s3_bucket_website_configuration.website.website_endpoint
    origin_id   = "s3-website"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # default_cache_behavior, restrictions, and viewer_certificate
  # blocks are omitted here for brevity; CloudFront requires them.
}
For this lab, CloudFront creation was blocked by AWS account verification.
Terraform and the AWS Console both returned the same issue:
Your account must be verified before you can add new CloudFront resources.
So I disabled CloudFront in dev:
enable_cloudfront = false
The module is still CloudFront-ready, but the working deployment used the S3 website endpoint.
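One side effect of the count pattern is that the distribution becomes a list, so any reference to it has to be indexed and guarded. A sketch of what a module output for it might look like (the output name is assumed):

```hcl
# Sketch: with count, the resource is a list, so index and guard the reference
output "cloudfront_domain" {
  value = var.enable_cloudfront ? aws_cloudfront_distribution.website[0].domain_name : null
}
```

Returning null when CloudFront is disabled keeps the output usable in both configurations.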
Deployment Commands
From the bootstrap folder, I created the remote backend:
terraform init
terraform validate
terraform plan -out=bootstrap.tfplan
terraform apply bootstrap.tfplan
Then I initialized the dev environment:
terraform -chdir=day_25/day25-static-website/envs/dev init -reconfigure
terraform -chdir=day_25/day25-static-website/envs/dev validate
terraform -chdir=day_25/day25-static-website/envs/dev plan -out=day25.tfplan
terraform -chdir=day_25/day25-static-website/envs/dev apply day25.tfplan
The plan showed:
Plan: 7 to add, 0 to change, 0 to destroy.
After apply, I checked the output:
terraform -chdir=day_25/day25-static-website/envs/dev output -raw website_endpoint
Then I opened the S3 website endpoint in the browser and confirmed the site loaded.
Cleanup
Because this was a lab, cleanup mattered.
I destroyed the website stack first:
terraform -chdir=day_25/day25-static-website/envs/dev plan -destroy -out=day25-destroy.tfplan
terraform -chdir=day_25/day25-static-website/envs/dev apply day25-destroy.tfplan
Then I cleaned up the bootstrap backend after deleting versioned objects from the state bucket.
That final cleanup was a good reminder: production-grade safety features like S3 versioning are excellent, but they also change how cleanup works.
Key Takeaways
Day 25 was not just about S3.
The bigger lessons were:
- reusable modules make infrastructure easier to scale
- environment folders keep dev/staging/prod cleanly separated
- remote state protects collaboration and recovery
- saved plans make changes reviewable
- cleanup is part of the workflow
- real cloud provider account limits can block otherwise-correct Terraform
This was a small project, but it pulled together many best practices from the challenge so far.
Final Thought
A static website may look simple, but deploying it properly with Terraform teaches important infrastructure habits.
The goal is not just to make a page load.
The goal is to make the deployment repeatable, reviewable, reusable, and safe to clean up.
Full Code
GitHub reference:
👉 GitHub Link
Follow My Journey
This is Day 25 of my 30-Day Terraform Challenge.
See you on Day 26.