Day 22 of my Terraform journey felt like a checkpoint.
After three weeks of building infrastructure, testing modules, writing GitHub Actions workflows, working with Terraform plans, and learning about Terraform Cloud, this day brought everything together.
The focus came from Chapter 10 of Terraform: Up & Running by Yevgeniy Brikman: how application delivery and infrastructure delivery can follow the same disciplined workflow.
The big lesson was this:
Infrastructure should not be deployed casually.
It should move through the same kind of process we already expect from application code:
- version control
- pull request review
- automated checks
- immutable artifacts
- policy enforcement
- controlled deployment
- verification
- rollback planning
For Day 22, I built a standalone staging webserver cluster and wired it into an integrated Terraform workflow.
GitHub reference:
Day 22 Code
The Integrated Pipeline
The goal was to combine the application workflow and infrastructure workflow into one complete delivery process.
For application code, a team might produce a Docker image or binary as the build artifact.
For infrastructure code, the equivalent artifact is a saved Terraform plan file.
That was the main idea I practiced today.
My Day 22 workflow looked like this:
- Write Terraform code in Git.
- Open a pull request.
- Run automated checks.
- Generate a saved Terraform plan.
- Upload the plan as a CI artifact.
- Review the plan, blast radius, and rollback notes.
- Merge the PR.
- Apply the reviewed plan.
- Verify the infrastructure.
- Destroy the lab resources after testing.
The GitHub Actions workflow had two main jobs.
The first job handled validation:
- terraform fmt -check -recursive
- terraform init -backend=false
- terraform validate
- terraform test
The second job handled planning:
terraform plan -out=ci.tfplan
Then the workflow uploaded the plan file as an artifact.
That is important because the saved plan becomes the artifact reviewers inspect before apply.
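The two jobs above can be sketched as a single GitHub Actions workflow. This is a minimal sketch, not my exact file: the job names, action versions, and working directory are assumptions based on the repo layout.

```yaml
name: terraform-ci

on:
  pull_request:

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # Formatting is checked repo-wide; the rest runs in the staging stack
      - run: terraform fmt -check -recursive
      - run: terraform init -backend=false
        working-directory: day_22/live/staging
      - run: terraform validate
        working-directory: day_22/live/staging
      - run: terraform test
        working-directory: day_22/live/staging

  plan:
    needs: validate
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
        working-directory: day_22/live/staging
      # Save the plan so reviewers inspect exactly what will be applied
      - run: terraform plan -out=ci.tfplan
        working-directory: day_22/live/staging
      - uses: actions/upload-artifact@v4
        with:
          name: ci-tfplan
          path: day_22/live/staging/ci.tfplan
```

The key design choice is that the plan job produces an artifact rather than applying anything: apply stays a separate, deliberate step after review.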
What I Deployed
For Day 22, I created a standalone staging stack under:
day_22/live/staging
The stack used reusable modules under:
day_22/modules
The plan showed:
Plan: 16 to add, 0 to change, 0 to destroy.
The resources included:
- Application Load Balancer
- ALB listener and listener rule
- Target group
- Auto Scaling Group
- Launch template
- Security groups and rules
- SNS topic
- CloudWatch CPU alarm
- CloudWatch ASG capacity alarm
After applying the reviewed plan, the application returned:
Hello from Day 22 integrated workflow
That confirmed the workflow successfully carried a change from PR review all the way to real AWS infrastructure.
Immutable Artifact Promotion
This was the most important concept for me today.
In application delivery, teams often build one artifact and promote it through environments.
For example:
Docker image v1.5.0 -> staging -> production
The artifact does not change between environments. That makes releases predictable.
Terraform has a similar idea:
terraform plan file -> review -> apply
The saved plan file represents the exact infrastructure changes Terraform intends to make.
Instead of running a fresh, unreviewed apply, we apply the saved plan:
terraform plan -out=day22-reviewed.tfplan
terraform apply day22-reviewed.tfplan
That means what was reviewed is what gets applied.
One important lesson: saved plans can become stale.
If Terraform says:
Saved plan is stale
that is not a failure. It is Terraform protecting you.
It means the state changed after the plan was created, so Terraform can no longer guarantee that the plan still matches reality. The right fix is to regenerate the plan, review it again, and apply the new saved plan.
That safety check is exactly why plan files matter.
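Recovering from a stale plan is just the same two commands again: regenerate, re-review, apply the fresh file. A sketch, using the `-chdir` path from my Day 22 layout:

```
# Regenerate the plan after state has changed
terraform -chdir=day_22/live/staging plan -out=day22-reviewed.tfplan

# Re-review the saved plan in human-readable form
terraform -chdir=day_22/live/staging show day22-reviewed.tfplan

# Apply exactly what was reviewed
terraform -chdir=day_22/live/staging apply day22-reviewed.tfplan
```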
Pull Request Review for Infrastructure
For application code, reviewers usually inspect a code diff.
For infrastructure code, the code diff is not enough.
A Terraform PR should also include:
- plan summary
- resources created, changed, and destroyed
- blast radius
- rollback plan
- test results
For Day 22, the PR stated:
Created: 16
Modified: 0
Destroyed: 0
The blast radius was low because the stack was standalone and used Day 22-specific names, tags, and state.
The rollback plan was simple:
terraform -chdir=day_22/live/staging destroy
That may sound basic, but writing it down matters.
Infrastructure reviews should answer one question clearly:
“If this goes wrong, what is affected and how do we recover?”
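A lightweight way to make sure every PR answers that question is a description template. A sketch of what mine looked like (the section names are my own convention, not a standard):

```markdown
## Plan Summary
Plan: 16 to add, 0 to change, 0 to destroy.

## Blast Radius
Standalone Day 22 staging stack; no shared state or shared resources touched.

## Rollback Plan
terraform -chdir=day_22/live/staging destroy

## Test Results
fmt, validate, and terraform test all passing in CI (link to run).
```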
Sentinel Policies
Sentinel is Terraform Cloud’s policy-as-code framework.
Terraform validation checks whether the code is syntactically and structurally valid.
Sentinel checks whether the plan is allowed according to organizational rules.
That distinction clicked for me today.
A Terraform configuration can be valid but still unsafe.
For Day 22, I added Sentinel policy examples for three controls.
1. Allowed instance types
This policy prevents teams from using instance types outside an approved list.
Example intent:
Allow: t2.micro, t2.small, t2.medium, t3.micro, t3.small
Block: larger or unapproved instance types
This helps control cost and standardize infrastructure.
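A Sentinel policy with this intent might look like the following. It is a sketch against the `tfplan/v2` import; it checks `aws_instance` resources, and a real policy for my stack would also need to cover the instance type set in the launch template.

```sentinel
import "tfplan/v2" as tfplan

allowed_types = ["t2.micro", "t2.small", "t2.medium", "t3.micro", "t3.small"]

# All EC2 instances being created or updated by this plan
instances = filter tfplan.resource_changes as _, rc {
    rc.type is "aws_instance" and
    rc.mode is "managed" and
    (rc.change.actions contains "create" or rc.change.actions contains "update")
}

# Pass only if every instance uses an approved type
main = rule {
    all instances as _, rc {
        rc.change.after.instance_type in allowed_types
    }
}
```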
2. Required tags
This policy enforces tagging, especially:
ManagedBy = terraform
That matters because tags help with:
- ownership
- cost tracking
- cleanup
- audits
- identifying Terraform-managed resources
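The tagging control can be written the same way. This sketch only checks resources that expose a `tags` attribute, and the filter logic is an assumption about how a real policy would scope it:

```sentinel
import "tfplan/v2" as tfplan

mandatory_tags = {"ManagedBy": "terraform"}

# Resources being created that expose a tags attribute
tagged_creates = filter tfplan.resource_changes as _, rc {
    rc.mode is "managed" and
    rc.change.actions contains "create" and
    "tags" in keys(rc.change.after else {})
}

# Every such resource must carry the mandatory tag values
main = rule {
    all tagged_creates as _, rc {
        all mandatory_tags as key, value {
            rc.change.after.tags[key] else "" is value
        }
    }
}
```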
3. Cost estimation gate
I also added a cost policy example that blocks applies if the monthly increase is above a threshold.
Example threshold:
maximum_monthly_increase = 50.0
This turns cost awareness into a deployment control.
It is not just “we hope this does not cost too much.”
It becomes:
“This apply cannot proceed if it exceeds the approved cost increase.”
That is a powerful guardrail.
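In Terraform Cloud, this guardrail can use the `tfrun` import, which exposes the estimated change in monthly cost for a run. A sketch of the cost policy:

```sentinel
import "tfrun"
import "decimal"

maximum_monthly_increase = 50.0

# Only evaluate when Terraform Cloud produced a cost estimate for the run
main = rule when tfrun.cost_estimate else null is not null {
    decimal.new(tfrun.cost_estimate.delta_monthly_cost).less_than(maximum_monthly_increase)
}
```

The `when` clause matters: runs without a cost estimate (for example, destroy-only runs) pass rather than failing on missing data.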
Where Application and Infrastructure Workflows Align
By Day 22, the similarities were clear.
Both workflows need:
- Git as the source of truth
- pull request review
- automated testing
- versioned releases
- promotion through environments
- deployment verification
This makes Terraform feel less like a separate special process and more like mature software delivery.
Where They Differ
The differences are where infrastructure gets serious.
Application code usually produces a running service or artifact.
Terraform changes real cloud resources.
That means:
- state must be protected
- plans must be reviewed
- applies need controlled execution
- destructive changes need extra approval
- rollback may not be instant
- tests may create real resources and cost money
A bad application deploy may return a 500 error.
A bad infrastructure deploy can delete a database, expose a service publicly, or break networking.
That is why infrastructure delivery needs more discipline, not less.
What Clicked for Me
The biggest mental shift was this:
Terraform is not just about creating infrastructure.
Terraform is about creating a safe system for changing infrastructure.
Before this challenge, it was easy to think of Terraform as a provisioning tool.
Now I see it more as a workflow tool.
The real value is not only in writing .tf files. It is in the process around them:
- plan before apply
- review the plan
- document blast radius
- test what can be tested
- protect state
- apply from trusted environments
- verify cleanup
That is what makes infrastructure reliable.
What Broke
A few things broke along the way.
Provider setup sometimes failed until terraform init had downloaded and initialized the plugins correctly.
Saved plans became stale when state changed after the plan was created.
GitHub PR formatting also reminded me that documentation matters. If plan output or commands are hard to read, reviewers cannot review properly.
Even code block formatting in a PR matters when the goal is making infrastructure changes understandable.
The fix was always the same pattern:
- slow down
- inspect the actual error
- regenerate or re-run safely
- document what happened
What Surprised Me
The biggest surprise was how much of Terraform maturity is not Terraform syntax.
It is engineering discipline.
The harder parts were:
- deciding what belongs in a module
- thinking about state boundaries
- writing useful PR descriptions
- knowing when a test should be manual, unit, integration, or end-to-end
- cleaning up every test run properly
- explaining blast radius clearly
Those are the skills that turn Terraform from “scripts that create AWS resources” into real Infrastructure as Code.
Reflection on the Journey So Far
In 22 days, I moved through a lot:
- EC2
- security groups
- load balancers
- Auto Scaling Groups
- remote state
- workspaces
- reusable modules
- provider aliases
- multiple environments
- secrets handling
- production-readiness checks
- manual testing
- Terratest
- GitHub Actions
- deployment workflows
- Sentinel policies
- cost gates
But the real progress was not the number of resources created.
The real progress was learning how to think.
I now think more carefully about:
- what changes
- who reviews it
- where state lives
- what happens if apply fails
- what gets destroyed
- how cleanup is verified
- whether a future engineer can understand the workflow
That is the point of Infrastructure as Code.
Not just automation.
Repeatable, reviewable, explainable infrastructure.
Final Takeaway
Day 22 tied the whole book together for me.
Application code and infrastructure code can follow the same delivery philosophy:
reviewed change -> tested artifact -> controlled deployment -> verified result
But infrastructure needs additional safeguards because the blast radius is bigger.
The winning pattern is not “move fast and apply.”
It is:
plan carefully
review clearly
enforce policy
apply intentionally
verify everything
That is the kind of Terraform workflow I want to keep building.
Full Code
GitHub reference:
Day 22 Code
Follow My Journey
This is Day 22 of my 30-Day Terraform Challenge.
See you on Day 23.