# Agentic Infrastructure: Building a Live AWS Deployment Pipeline with Claude Code
Phases 1 through 3 built the foundation: a verified environment, a project-aware agent, and four reusable Skills. Phase 4 put that foundation to work: running the complete deployment pipeline from an empty Terraform directory to a live static website on AWS CloudFront.
This post documents every step, every command, and every output.
## The Pipeline Sequence
| Step | Command | Type | Output |
|---|---|---|---|
| 1 | `/scaffold-terraform` | Skill | 4 Terraform files generated |
| 2 | `terraform init` | Manual | Providers downloaded |
| 3 | `/tf-plan` | Skill | Plan: 4 create, 0 destroy |
| 4 | `/tf-apply` | Skill | 4 AWS resources provisioned |
| 5 | `/deploy` | Skill | Site live on CloudFront |
## Step 1: `/scaffold-terraform`
Claude read `template-spec.md` and generated the complete Terraform configuration in the `terraform/` directory.
Files generated:
```
terraform/
├── main.tf
├── variables.tf
├── outputs.tf
└── providers.tf
```
What `main.tf` contained:

```hcl
# S3 bucket — versioning enabled, public access blocked
resource "aws_s3_bucket" "site" {
  bucket = var.bucket_name
  tags   = var.tags
}

resource "aws_s3_bucket_versioning" "site" {
  bucket = aws_s3_bucket.site.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_public_access_block" "site" {
  bucket                  = aws_s3_bucket.site.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# CloudFront origin access control
resource "aws_cloudfront_origin_access_control" "oac" {
  name                              = "${var.bucket_name}-oac"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

# CloudFront distribution
resource "aws_cloudfront_distribution" "site" {
  enabled             = true
  default_root_object = "index.html"
  price_class         = "PriceClass_200" # Africa + Europe coverage

  origin {
    domain_name              = aws_s3_bucket.site.bucket_regional_domain_name
    origin_id                = "S3Origin"
    origin_access_control_id = aws_cloudfront_origin_access_control.oac.id
  }

  default_cache_behavior {
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "S3Origin"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# Bucket policy — CloudFront access only
resource "aws_s3_bucket_policy" "site_policy" {
  bucket = aws_s3_bucket.site.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "cloudfront.amazonaws.com" }
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.site.arn}/*"
      Condition = {
        StringEquals = {
          "AWS:SourceArn" = aws_cloudfront_distribution.site.arn
        }
      }
    }]
  })
}
```
`providers.tf`:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "af-south-1"
}
```
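The generated `outputs.tf` is not reproduced in this post, but `/deploy` later reads a bucket name and distribution ID from Terraform outputs. A plausible sketch of what it contains — the output names here are assumptions, not the file's confirmed contents:

```hcl
# Hypothetical outputs.tf — output names are illustrative assumptions
output "bucket_name" {
  value = aws_s3_bucket.site.bucket
}

output "distribution_id" {
  value = aws_cloudfront_distribution.site.id
}

output "cloudfront_url" {
  value = "https://${aws_cloudfront_distribution.site.domain_name}"
}
```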
## Step 2: `terraform init` (Manual)
```shell
cd terraform/
terraform init
```

```
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 5.0"...
- Installing hashicorp/aws v5.x.x...
- Installed hashicorp/aws v5.x.x (signed by HashiCorp)

Terraform has been successfully initialized!
```
terraform init is intentionally not automated inside a Skill. It downloads provider plugins from the internet and sets up the Terraform backend. These are decisions worth confirming manually — particularly the provider version being installed.
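The exact version that init pinned is recorded in `.terraform.lock.hcl`, which is worth inspecting before committing. A small hedged sketch of extracting it — the lock-file content below is a trimmed illustrative example, and the version shown is not necessarily what this project installed:

```shell
# Illustrative only: in practice, point lock_file at the real
# terraform/.terraform.lock.hcl written by terraform init.
lock_file=$(mktemp)
cat > "$lock_file" <<'EOF'
provider "registry.terraform.io/hashicorp/aws" {
  version     = "5.31.0"
  constraints = "~> 5.0"
}
EOF

# Pull out the pinned provider version line and strip it to the bare version
aws_version=$(grep -E '^[[:space:]]*version' "$lock_file" | sed -E 's/.*"([^"]+)".*/\1/')
echo "pinned hashicorp/aws version: $aws_version"
```

Committing the lock file alongside the configuration keeps later `terraform init` runs on the same provider build.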
## Step 3: `/tf-plan`
The Skill ran `terraform validate`, then `terraform plan -out=tfplan.binary`, then scanned the output for destructions.
Plan output summary:
| Change | Count | Resources |
|---|---|---|
| Create | 4 | aws_s3_bucket, aws_cloudfront_distribution, aws_cloudfront_origin_access_control, aws_s3_bucket_policy |
| Modify | 0 | — |
| Destroy | 0 | — |
Zero destructions. The Skill confirmed the plan was safe and returned the summary for review before proceeding.
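The destroy check the Skill performs can be sketched in a few lines of shell. This is a hedged illustration, not the Skill's actual code; the hard-coded `plan_summary` stands in for the `Plan:` line that would come from the real plan output:

```shell
# Illustrative destroy gate: parse the plan summary line and refuse to
# continue if anything would be destroyed.
plan_summary="Plan: 4 to add, 0 to change, 0 to destroy."

# Capture the number immediately before "to destroy"
destroy_count=$(echo "$plan_summary" | sed -E 's/.* ([0-9]+) to destroy.*/\1/')

if [ "$destroy_count" -eq 0 ]; then
  echo "plan is safe: no destructions"
else
  echo "plan would destroy $destroy_count resource(s); review required" >&2
  exit 1
fi
```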
## Step 4: `/tf-apply`
With the plan reviewed and confirmed, the Skill ran:
```shell
terraform apply tfplan.binary
```
All four resources provisioned successfully in af-south-1:
```
aws_cloudfront_origin_access_control.oac: Creating...
aws_s3_bucket.site: Creating...
aws_s3_bucket.site: Creation complete
aws_s3_bucket_versioning.site: Creating...
aws_s3_bucket_public_access_block.site: Creating...
aws_cloudfront_origin_access_control.oac: Creation complete
aws_cloudfront_distribution.site: Creating...
aws_cloudfront_distribution.site: Still creating... [10m elapsed]
aws_cloudfront_distribution.site: Creation complete

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
```
CloudFront propagation note: the distribution takes 8-12 minutes to propagate globally after apply completes. Status shows "InProgress" during propagation and "Deployed" when complete. The site is only accessible after status is "Deployed."
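Rather than refreshing the console, the wait can be scripted. A hedged sketch — `wait_for_deployed` is illustrative, not part of the project's Skills, and the 30-second interval is an assumption:

```shell
# Hypothetical polling helper. get_status wraps the real AWS CLI call;
# wait_for_deployed loops until CloudFront reports "Deployed".
get_status() {
  aws cloudfront get-distribution --id "$1" \
    --query 'Distribution.Status' --output text
}

wait_for_deployed() {
  local dist_id="$1"
  while [ "$(get_status "$dist_id")" != "Deployed" ]; do
    echo "distribution $dist_id still propagating; checking again in 30s"
    sleep 30
  done
  echo "distribution $dist_id is Deployed"
}
```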
## Step 5: `/deploy`
The Skill read the Terraform outputs for bucket name and distribution ID, then ran:
```shell
# Sync site files
aws s3 sync ./site s3://<bucket-name>/ --delete

# Trigger CloudFront cache invalidation
aws cloudfront create-invalidation \
  --distribution-id <dist-id> \
  --paths '/*'
```
Output:
```
upload: site/index.html to s3://<bucket-name>/index.html
upload: site/styles.css to s3://<bucket-name>/styles.css
{
    "Location": "...",
    "Invalidation": {
        "Id": "...",
        "Status": "InProgress"
    }
}
```
Site confirmed live at the CloudFront URL.
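The "read the outputs, then sync and invalidate" sequence can be wrapped in one function. This is a hedged sketch, not the Skill's actual code; `deploy_site` and the output names `bucket_name` and `distribution_id` are assumptions about what `outputs.tf` exposes:

```shell
# Hypothetical wrapper around the /deploy steps. Output names are
# assumptions; adjust them to match the project's outputs.tf.
deploy_site() {
  local bucket dist_id
  bucket=$(terraform -chdir=terraform output -raw bucket_name)
  dist_id=$(terraform -chdir=terraform output -raw distribution_id)

  # Sync site files, then invalidate the CloudFront cache
  aws s3 sync ./site "s3://${bucket}/" --delete
  aws cloudfront create-invalidation \
    --distribution-id "$dist_id" \
    --paths '/*'
}
```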
## Full Verification Checklist
| Check | Result |
|---|---|
| Terraform files generated in `terraform/` | Passed |
| `terraform validate` — no errors | Passed |
| Plan: 4 to create, 0 to destroy | Passed |
| S3 bucket created in af-south-1 | Passed |
| CloudFront distribution status: Deployed | Passed |
| Site files synced via `aws s3 sync` | Passed |
| CloudFront invalidation triggered | Passed |
| Site accessible via CloudFront URL in browser | Passed |
## What Made the Pipeline Work
The deployment itself was the least stressful part of this project. That is because the three preceding phases did the real work:
- Phase 1 verified the environment — no ambiguous tool errors during deployment
- Phase 2 loaded project memory — the agent knew the architecture, region, and conventions without prompting
- Phase 3 defined the Skills — each step followed the same procedure, with the same checks, as designed
A well-structured pipeline does not merely handle problems well; it makes certain categories of problems impossible.
Live site: https://d305l937o434yr.cloudfront.net/