Zakariyau Mukhtar

Day 14 of #30DaysOfAWSTerraform: Hosting a Static Website with S3 and CloudFront

Day 14 was all about static websites and how AWS handles them at scale. Before this lesson, I had a surface-level idea of what a static website was, but today made everything click, from how files are stored to how they are securely served globally using AWS services.

The core focus of today’s lesson was S3, CloudFront, and how Terraform ties everything together as Infrastructure as Code.


Understanding Static Websites

A static website is exactly what it sounds like: a site that serves fixed content such as HTML, CSS, JavaScript, images, and other assets. There's no backend logic, no database queries, and no server-side rendering. What you upload is what users see.

This makes static websites:

  • Fast
  • Cheap to host
  • Highly scalable
  • Secure when configured correctly

AWS excels at this through Amazon S3 and Amazon CloudFront.


Deep Dive into Amazon S3

S3 (Simple Storage Service) is not just a storage bucket; it's a highly durable object storage service designed to store and retrieve any amount of data.

Key things I deeply understood today:

  • S3 stores objects, not filesystems
  • Every object is accessed via a unique key
  • S3 is globally durable but regionally hosted
  • Public access must be explicitly controlled

In my setup, I created an S3 bucket using Terraform:

resource "aws_s3_bucket" "firstbucket" {
  bucket = var.bucket_name
}
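The bucket name comes from a variable rather than being hardcoded. A minimal declaration for it (the variable name matches the reference above; everything else here is just a reasonable sketch) could look like:

```hcl
# variables.tf — S3 bucket names must be globally unique,
# so passing the name in as a variable keeps the module reusable.
variable "bucket_name" {
  description = "Name of the S3 bucket that will hold the static site"
  type        = string
}
```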

To prevent accidental public exposure, I added a public access block, which is critical for security:

resource "aws_s3_bucket_public_access_block" "block" {
  bucket = aws_s3_bucket.firstbucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

This ensures the bucket cannot be accessed publicly at all; access is granted later, and only to CloudFront, through a bucket policy.


Uploading Website Files to S3

Instead of uploading files manually, Terraform can manage website content using aws_s3_object. This is powerful because your infrastructure and content stay in sync.

resource "aws_s3_object" "object" {
  for_each = fileset("${path.module}/www", "**/*")
  bucket   = aws_s3_bucket.firstbucket.id
  key      = each.value
  source   = "${path.module}/www/${each.value}"
  etag     = filemd5("${path.module}/www/${each.value}")

  content_type = lookup({
    "html" = "text/html",
    "css"  = "text/css",
    "js"   = "application/javascript",
    "json" = "application/json",
    "png"  = "image/png",
    "jpg"  = "image/jpeg",
    "svg"  = "image/svg+xml"
  }, split(".", each.value)[length(split(".", each.value)) - 1], "application/octet-stream")
}

This automatically uploads HTML, CSS, and JavaScript from the www/ directory into S3 with correct content types.


Introducing CloudFront (The Game Changer)

CloudFront is AWS’s Content Delivery Network (CDN). Instead of serving content directly from S3, CloudFront caches content at edge locations close to users, dramatically improving performance and security.

I created an Origin Access Control (OAC) so CloudFront can securely access the S3 bucket:

resource "aws_cloudfront_origin_access_control" "origin_access_control" {
  name                              = "demo-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

Then I defined the CloudFront distribution:

# local.origin_id must be defined somewhere; any stable string works.
locals {
  origin_id = "s3-static-site-origin"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name              = aws_s3_bucket.firstbucket.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.origin_access_control.id
    origin_id                = local.origin_id
  }

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = local.origin_id
    viewer_protocol_policy = "redirect-to-https"

    # A cache policy (or the legacy forwarded_values block) is required.
    # This ID is the AWS-managed "CachingOptimized" policy.
    cache_policy_id = "658327ea-f89d-4fab-a63d-7e88639e58f6"
  }

  # The restrictions block is required even when no geo restriction is applied.
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

This configuration ensures:

  • HTTPS is enforced
  • Content is cached globally
  • S3 is never directly exposed to users
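To actually visit the site after `terraform apply`, it helps to surface the distribution's domain name as an output (the output name here is my own choice):

```hcl
# outputs.tf — prints the public CloudFront URL after apply
output "cdn_domain_name" {
  description = "Public URL of the static site"
  value       = "https://${aws_cloudfront_distribution.s3_distribution.domain_name}"
}
```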

Locking Down S3 with Bucket Policy

To complete the security setup, I added a bucket policy allowing only CloudFront to access the S3 objects:

resource "aws_s3_bucket_policy" "allow_cf" {
  bucket = aws_s3_bucket.firstbucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.firstbucket.arn}/*"
      # Without this condition, any CloudFront distribution in any AWS
      # account could read the bucket, so scope it to this distribution.
      Condition = {
        StringEquals = {
          "AWS:SourceArn" = aws_cloudfront_distribution.s3_distribution.arn
        }
      }
    }]
  })
}

This is a best practice for production-grade static websites.


Challenge Faced

I couldn’t fully complete the deployment because my AWS account wasn’t approved to create CloudFront distributions. That was frustrating, but also a real-world lesson: permissions and account limits matter.

Even so, the Terraform configuration itself is complete and ready to apply once the account is approved.


Final Thoughts

Day 14 was a big milestone. I now understand:

  • How static websites work on AWS
  • Why S3 is ideal for hosting static content
  • How CloudFront improves performance and security
  • How Terraform ties everything together cleanly

This wasn’t just theory; it was real infrastructure thinking.

On to Day 15
