A dev-friendly intro to Terraform, what it actually does, and why your cloud setup deserves better than manual clicks.
Introduction: Why Terraform feels like cheat codes for your cloud setup
So you’ve heard the hype. Maybe someone on your team kept muttering “Terraform this” and “Terraform that” while sipping black coffee and looking suspiciously happy about YAML-free infra.
If you’ve ever tried to manually click through AWS, GCP, or Azure dashboards, you know the horror. It’s like playing a game on hardcore mode without a save button. That’s where Terraform enters the scene like Gandalf with a `main.tf` file.
Terraform is Infrastructure as Code (IaC), which is just a fancy way of saying, “What if you could version control your servers like you do your code?”
Spoiler: you can. And you should.
But here’s the thing most intros get wrong: they hit you with 1000 words of jargon before you even run `terraform init`. That’s not us.
This guide is designed for devs and cloud dabblers who:
- Know a bit about infra but want to stop doing it manually
- Have touched Terraform before but felt like they were staring into the void
- Want to build a real-world project using 15 core Terraform concepts (not just 12, because we’re extra like that)
We’ll keep things fun, real, and hands-on, using a slightly chaotic but totally functional AWS project you can spin up and destroy like a god. You’ll understand not just what each Terraform concept does, but why it matters.
Section 1: Terraform in 60 seconds
understand what terraform really is without sounding like a cloud salesperson
Imagine this: you need to set up an EC2 instance, a database, a VPC, and an S3 bucket on AWS.
You could:
- Spend an hour clicking through the AWS dashboard
- Forget one security group rule
- Get yelled at in prod
- Repeat the whole thing for dev and staging
OR you could write everything in a `.tf` file, run a few commands, and boom, your infrastructure is up like magic.
Terraform is a declarative infrastructure as code tool created by HashiCorp. Instead of telling the cloud how to do something, you declare what you want (like “give me a server in us-east-1”) and Terraform figures out the rest.
Think of it like writing a recipe and letting a robotic chef make your dinner the same way every time: no forgotten salt, no burnt toast.
Terraform works with:
- Providers (like AWS, GCP, Azure, GitHub, even Cloudflare)
- Resources (like instances, buckets, networks)
- State (so it remembers what you’ve built)
The beauty? You can:
- Version control your infra
- Collaborate with teams
- Reuse modules
- Destroy and rebuild environments with confidence
In other words, Terraform turns you into the infrastructure wizard you always pretended to be in meetings.
Section 2: Providers - the plug-in system of dreams
how terraform talks to AWS, GCP, Azure, and everything in between
Terraform doesn’t magically spin up infrastructure out of nowhere (although that would be rad). It needs providers: think of them as API adapters that let Terraform speak the native language of whatever service you’re using.
Each provider knows:
- How to authenticate
- What resources are available
- How to create/update/destroy those resources
The most popular ones?
- `aws`
- `google` (for GCP)
- `azurerm` (Azure)
- `kubernetes`
- `cloudflare`
- Even GitHub, Datadog, Stripe, and more
Here’s what a basic AWS provider config looks like:
```hcl
provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}
```
Pro tip: Never hardcode your secrets. Use environment variables or a `terraform.tfvars` file (and add that to `.gitignore` or face the wrath of leaked keys on GitHub).
You can even configure multiple providers in one project like AWS + GitHub for spinning up infra and configuring a repo at the same time. Multi-cloud? Yeah, Terraform’s cool with that.
And if you’re using Terraform v1.0+, dependency resolution between providers got smoother than a cold brew on a Monday morning.
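Speaking of versions, it’s good practice to pin both Terraform and your providers. A minimal sketch (the version constraints here are illustrative, not prescriptive):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # any 5.x release; bump deliberately
    }
  }
}
```

With this block in place, `terraform init` downloads a matching provider and records the exact version in the `.terraform.lock.hcl` lock file, so teammates get the same build.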
TL;DR: Without providers, Terraform is just a fancy text editor. With providers, it’s your personal cloud butler.
Next up: let’s talk resources, the Lego blocks of your infrastructure.
Section 3: Resources - the blocks that build everything
the real stars of the show (a.k.a. the stuff that actually spins up in the cloud)
If Terraform were a game engine, resources would be the in-game assets: EC2 instances, S3 buckets, RDS databases, Lambda functions, VPCs, etc. These are the real-world things Terraform builds for you.
Here’s the anatomy of a Terraform `resource`:

```hcl
resource "aws_s3_bucket" "game_save_storage" {
  bucket = "my-tf-bucket-of-doom"
  acl    = "private"
}
```

(Heads up: on AWS provider v4+, the `acl` argument is deprecated in favor of a separate `aws_s3_bucket_acl` resource, so you may see a warning.)
Let’s break that down:
- `resource` = keyword to define a thing
- `"aws_s3_bucket"` = type of resource (defined by the provider)
- `"game_save_storage"` = your nickname for this resource (can be anything)
- The block inside defines configuration options
Every time you define a resource, you’re telling Terraform:
“I want this thing to exist in this way.”
Then Terraform adds it to the execution plan and builds it when you run `terraform apply`.
You can reference resources elsewhere using their names, like:
```hcl
bucket = aws_s3_bucket.game_save_storage.id
```
That’s how your infra becomes modular, connected, and way less error-prone than clicking your way through 13 AWS tabs.
Hot tip: Want to see what resources are available for your provider? Check the Terraform Registry.
So now you know how to create things. But what if you want your configuration to be more flexible and reusable? That’s where variables come in.
Section 4: Variables - give your code some flexibility
why hardcoding is for amateurs and variables are your infra superpower
Let’s be honest: hardcoding stuff like regions, AMIs, or instance types feels good… until you have to change it in 12 places. Welcome to the magic of variables, Terraform’s way of making your infrastructure DRY (Don’t Repeat Yourself).
What are variables?
Terraform supports:
- Input variables → parameters you pass in
- Environment variables → secrets or settings from your shell
- Locals → calculated values (like mini functions)
Here’s how you define an input variable:
```hcl
variable "region" {
  description = "The AWS region to deploy into"
  type        = string
  default     = "us-east-1"
}
```
And here’s how you use it:
```hcl
provider "aws" {
  region = var.region
}
```
Variables are loaded from:
- `.tfvars` files (like `dev.tfvars`)
- CLI flags (`terraform apply -var="region=us-west-2"`)
- Environment variables (`TF_VAR_region=us-west-2`)
This makes it stupid easy to switch environments, scale configurations, and collaborate without overwriting each other’s infra.
Why devs love this:
- No more find-and-replace chaos
- Cleaner code
- Real-world automation support (CI/CD, multiple accounts, different regions, etc.)
You can also pass maps, lists, booleans, and complex objects so your Terraform code becomes as expressive as a real programming language (but with fewer semicolons).
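For instance, a single `object` variable can bundle related settings together. A sketch (the variable name and fields here are made up for illustration):

```hcl
variable "instance_config" {
  description = "Per-environment instance settings (hypothetical example)"
  type = object({
    instance_type = string
    monitoring    = bool
    extra_tags    = map(string)
  })
  default = {
    instance_type = "t3.micro"
    monitoring    = false
    extra_tags    = { Team = "platform" }
  }
}
```

Then reference fields with `var.instance_config.instance_type`, and override the whole object per environment in `dev.tfvars` or `prod.tfvars`.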
In short: Variables = Terraform’s config superpowers.
Next up, let’s talk about how your code can talk back to you: outputs.
Section 5: Outputs - talking back from the void
how to make terraform tell you what it just did (and why it matters)
Terraform doesn’t just build stuff; it can also report back useful info, like the public IP of a server or the ARN of an S3 bucket. That’s where outputs come in.
What’s an output?
An output is a way to grab some value from your resources and display it after you run `terraform apply`.
Here’s a simple example:
```hcl
output "instance_ip" {
  description = "Public IP of the EC2 instance"
  value       = aws_instance.web.public_ip
}
```
When you apply your plan, Terraform spits out:
```
instance_ip = "44.207.12.34"
```
Why outputs are awesome:
- You get immediate feedback after a deploy
- You can feed these values into other systems (like scripts or CI/CD pipelines)
- They’re essential when working with modules, where outputs can be passed between different pieces of infra like hot gossip
You can even mark outputs as sensitive to prevent secrets from being displayed in the CLI:
```hcl
output "db_password" {
  value     = aws_secretsmanager_secret.db_pass.secret_string
  sensitive = true
}
```
And if you want to access outputs programmatically?
```
terraform output -json
```
Great for scripts or tools that need to extract data from your state file in a machine-readable way.
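Outputs also travel between whole Terraform projects via the `terraform_remote_state` data source. A sketch, assuming another project stores its state in the S3 bucket and key shown (the names and the `subnet_id` output are hypothetical):

```hcl
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "tf-state-bucket"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Anything that project exposed via `output` is readable here:
resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```

This only works for values the other project explicitly declared as outputs, which is another reason to be deliberate about what you expose.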
Outputs are like Terraform’s mic drop: “Here’s what I built. You’re welcome.”
Now that your infra can talk back, let’s level up with modules: the way to stop copy-pasting and start building like a pro.
Section 6: Modules - code reusability on steroids
stop copying and pasting the same tf files and start thinking like a dev again
By now, your Terraform config is probably starting to look… chunky.
That’s when you know it’s module time.
So, what’s a module?
A module is just a folder with `.tf` files inside that defines a piece of infrastructure like a reusable component.
It’s basically Terraform’s version of a function.
For example, you could create a `vpc_module` that sets up:
- A VPC
- Subnets
- Route tables
- Gateways
And then reuse that VPC module for dev, staging, and prod environments without rewriting a single line of VPC config.
Local module usage:
```hcl
module "network" {
  source      = "./modules/vpc"
  cidr_block  = "10.0.0.0/16"
  environment = "dev"
}
```
You can also use remote modules from the Terraform Registry:
```hcl
module "s3" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 3.0"
  bucket  = "my-awesome-bucket"
}
```
Why modules matter:
- Avoid duplication
- Enforce standards across teams
- Make big infra projects manageable
- Easier onboarding (new devs only need to know inputs/outputs)
💡 Pro tip: Always define `variables.tf`, `outputs.tf`, and `README.md` in your modules. Make it dummy-proof. Future-you will thank you.
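To make that concrete, here’s a minimal sketch of what `./modules/vpc` might contain (resource names and tags are illustrative):

```hcl
# modules/vpc/variables.tf -- the module's inputs
variable "cidr_block" {
  type = string
}

variable "environment" {
  type = string
}

# modules/vpc/main.tf -- what the module builds
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
  tags = {
    Environment = var.environment
  }
}

# modules/vpc/outputs.tf -- what the module hands back
output "vpc_id" {
  value = aws_vpc.this.id
}
```

Callers set the inputs in their `module` block and read results as `module.network.vpc_id`, without ever touching the module’s internals.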
You’re now thinking like an infra dev modular, clean, reusable.
Next up: let’s talk about Terraform’s memory, the state file that keeps track of everything.
Section 7: State - the Terraform brain
how terraform remembers what you built so it doesn’t destroy your weekend
Every time Terraform runs, it needs to remember what exists and what doesn’t. That’s what the state file is for: it’s Terraform’s brain.
What is the state file?
It’s a file named `terraform.tfstate` that stores:
- What resources you’ve created
- Their current configuration
- Any dependencies between them
Without this file, Terraform would be like Dory from Finding Nemo: constantly forgetting everything and rebuilding stuff from scratch.
Why you should care:
- Terraform uses state to calculate deltas: what needs to be created, changed, or destroyed
- It’s critical for safe infrastructure changes
- Losing it is basically nuking your cloud memory
Local vs remote state
By default, Terraform saves state locally, which is not great for teams or production.
Instead, use remote state with backends like:
- AWS S3 + DynamoDB (locking and versioning FTW)
- Terraform Cloud
- Azure Blob Storage
- Google Cloud Storage
Here’s an example of remote state with S3:
```hcl
terraform {
  backend "s3" {
    bucket         = "tf-state-bucket"
    key            = "project1/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-locks"
  }
}
```
This keeps state safe, centralized, and locked so your teammate doesn’t `terraform apply` over your work.
Don’t ever commit `terraform.tfstate` to Git.
It can contain secrets, and it will definitely ruin your day.
With state sorted, you now understand how Terraform keeps track of everything.
Let’s look at how to actually run your code the holy trinity of commands is up next.
Section 8: Terraform init, plan, apply - the holy trinity
the three commands that summon or doom your infrastructure
Welcome to the three sacred commands every Terraform user must learn to chant:
`terraform init`, `terraform plan`, and `terraform apply`.
These commands are like Pokémon evolutions. They start simple but unlock serious power.
Terraform init: the setup spell
This is the first thing you run when you start a Terraform project.
```
terraform init
```
It does four things:
- Downloads the provider plugins (like AWS or GCP)
- Sets up the backend (for remote state, if defined)
- Initializes modules (if any)
- Creates the `.terraform` directory
You only need to run this when you:
- Start a new project
- Add/change a provider
- Switch backends
Terraform plan: your dry-run preview
This command shows you what will happen without actually doing anything.
```
terraform plan
```
Think of it as a safety net. It compares:
- What you want (`.tf` files)
- What already exists (state)
Then gives you a summary like:
```
+ aws_s3_bucket.bucket
~ aws_instance.server
- aws_db_instance.db
```
💡 Pro tip: Save the plan to a file and apply it later:
```
terraform plan -out=tfplan
terraform apply tfplan
```
Terraform apply: pull the trigger
This is where the magic (and sometimes chaos) happens.
```
terraform apply
```
Terraform takes the plan and actually builds, changes, or destroys resources.
You’ll be asked to confirm unless you use `-auto-approve` or apply a saved plan file, both of which skip the prompt (dangerous unless you’re scripting).
Summary
| Command | What it does |
| --- | --- |
| `terraform init` | Sets up the environment and dependencies |
| `terraform plan` | Shows you what would happen |
| `terraform apply` | Actually makes the changes |
Use them in order, always read the plan, and never apply blind, especially on Friday afternoons.
Up next: you’ve built your infra. But what if you want to delete it cleanly?
Section 9: Destroy - because sometimes you need a clean slate
when it’s time to nuke your infra (on purpose)
Sometimes things go south.
Maybe your dev environment is a mess. Maybe you accidentally deployed 12 t3.large instances and your AWS bill looks like a ransom note.
Or maybe, you just want a clean slate.
That’s what `terraform destroy` is for: the self-destruct button.
What does it do?
```
terraform destroy
```
It reads your state file and deletes every resource it knows about. S3 buckets? Gone. EC2 instances? Obliterated. RDS databases? Bye.
Just like `terraform apply`, it gives you a plan first and asks for confirmation.
Use cases for `destroy`:
- Tearing down dev or test environments
- Cleaning up after demos
- Saving money
- Resetting your infra before a redeploy
Use with caution
Terraform destroy doesn’t play games. It won’t ask:
“Are you really sure you want to delete your production database?”
It just… does it.
You can target specific resources with:
```
terraform destroy -target=aws_instance.this_one_please
```
This helps when you only want to delete part of your infrastructure.
Also, be extra careful if you’re using remote state and multiple workspaces: you don’t want to destroy the wrong environment by mistake.
Storytime
One Friday, a dev forgot to switch workspaces before running `terraform destroy`.
Guess what went down?
Production.
Everything.
Moral of the story: Check your workspace, check your plan, check your sanity.
Terraform destroy is powerful, but with great power comes great `aws_s3_bucket.bucket: Destruction complete`.
Up next, let’s talk about workspaces, a misunderstood feature that might save your bacon if used correctly.
Section 10: Workspaces - not what you think
terraform workspaces aren’t what your IDE thinks they are
Let’s clear something up:
Terraform workspaces ≠ VS Code workspaces
If you thought they were the same thing, don’t worry we all did at first. (Yes, even the Terraform docs make it confusing.)
What are Terraform workspaces?
Workspaces are essentially parallel state files in the same configuration.
Each workspace maintains its own version of the state.
So you can:
- Use the same `.tf` code
- Spin up isolated environments like `dev`, `staging`, and `prod`
- Keep each environment’s resources separate
By default, you’re in the `default` workspace.
Creating and using workspaces:
```
terraform workspace new dev
terraform workspace select dev
terraform workspace list
```
Once selected, Terraform uses `terraform.tfstate.d/dev/terraform.tfstate` to store state, neatly isolating your environments.
Real-world use case:
Say you have an EC2 instance defined like this:
```hcl
resource "aws_instance" "web" {
  instance_type = "t3.micro"
  ami           = var.ami_id

  tags = {
    Name = "web-${terraform.workspace}"
  }
}
```
Run `apply` in the `dev` workspace → instance tagged as `web-dev`.
Switch to `prod` and apply → instance tagged as `web-prod`.
Same code, different environments. Magic.
When not to use workspaces
Workspaces are tempting, but they’re not a replacement for:
- Git branches
- Separate state backends (especially in big orgs)
- Full-blown environment isolation with different accounts
In many cases, teams prefer separate Terraform projects with separate backends for more explicit control.
TL;DR
Workspaces are great for:
- Quick environment switching
- Small projects
- Learning/demo purposes
Just don’t treat them like production-grade environment boundaries unless you really know what you’re doing.

Section 11: Data sources - getting info without creating it
when you need to read existing infra without touching a thing
Not everything in your Terraform project needs to be created from scratch. Sometimes, you just want to pull information from existing resources without recreating or managing them. That’s where data sources come in.
What is a data source?
A data block lets Terraform query existing infrastructure (like the latest AMI, a specific subnet ID, or a DNS zone) without managing or changing it.
Think of it like read-only mode for your infrastructure.
Real-world example: fetch the latest Amazon Linux AMI
```hcl
data "aws_ami" "latest_amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```
Now you can use that AMI ID like this:
```hcl
resource "aws_instance" "web" {
  ami           = data.aws_ami.latest_amazon_linux.id
  instance_type = "t3.micro"
}
```
Boom: always deploy with the most up-to-date base image, without having to manually look it up every time.
Common data sources:
- `aws_ami` → for the latest images
- `aws_subnet` / `aws_vpc` → for networking IDs
- `aws_secretsmanager_secret` → pulling secrets
- `aws_caller_identity` → useful in multi-account setups
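For example, `aws_caller_identity` takes no arguments at all; it just reports who Terraform is authenticated as:

```hcl
data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}
```

Handy for building ARNs dynamically, or for sanity-checking that you’re deploying into the account you think you are.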
Why use data sources?
- Dynamically reference existing infra
- Avoid hardcoding IDs
- Work better with shared infrastructure (e.g., when your VPC is managed by another team)
Pro tip: You can also use data sources in modules to keep them flexible and environment-aware.
In short, data blocks let you peek into the matrix without breaking anything. Perfect for scenarios where you’re not the infra overlord but still need to reference stuff.
Now let’s untangle how Terraform knows what needs to be built and when: it’s time to dive into dependencies.
Section 12: Dependencies - not just for JS
how terraform figures out what goes first, what depends on what, and how not to break everything
Terraform is smarter than it looks.
When you define multiple resources, it builds a dependency graph behind the scenes to determine the correct order of operations — no manual sequencing needed.
But sometimes, you still need to guide it. That’s where understanding dependencies comes in.
Implicit dependencies
Terraform automatically creates implicit dependencies when:
- One resource references another
Example:
```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "my-bucket"
}

resource "aws_s3_bucket_policy" "policy" {
  bucket = aws_s3_bucket.bucket.id
  policy = jsonencode({ ... })
}
```
Here, Terraform knows the policy depends on the bucket, so it won’t try to attach the policy until the bucket exists.
Simple. Clean. No extra work needed.
Explicit dependencies
Sometimes, you reference nothing but still need to control the order. That’s when you use `depends_on`:
```hcl
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  depends_on = [aws_security_group.web_sg]
}
```
This forces Terraform to wait for the security group before creating the instance, even if nothing is directly referenced.
Execution plan and dependency graph
Want to see what Terraform is actually doing?
```
terraform graph | dot -Tpng > graph.png
```
This gives you a visual map of the dependency tree, great for debugging or explaining infra spaghetti to your team.
When dependency hell strikes
- Circular dependencies = Terraform gets confused
- Wrong use of `depends_on` = unnecessary delays
- Missing references = resources may be created in the wrong order
Rule of thumb: Let Terraform do the heavy lifting with implicit dependencies, and only use `depends_on` when you absolutely must.
Dependencies are how Terraform builds with confidence, ensuring your S3 buckets exist before trying to shove files into them.
Section 13: Lifecycle rules - control freak mode
how to boss terraform around when it’s being too smart for its own good
Sometimes, Terraform tries to be helpful… and ends up deleting your production database because you renamed a resource.
That’s when you say: “Alright Terraform, sit down. We’re doing this my way.”
Enter: the `lifecycle` block.
What is `lifecycle`?
Terraform’s `lifecycle` settings give you manual control over how and when resources are created, destroyed, or replaced. It’s like giving Terraform rules of engagement.
Here’s the syntax:
```hcl
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  lifecycle {
    prevent_destroy       = true
    create_before_destroy = true
    ignore_changes        = [tags["Owner"]]
  }
}
```
Prevent destroy
This one’s a safety net. If you (or a teammate) try to delete this resource, Terraform will scream and refuse.
```hcl
lifecycle {
  prevent_destroy = true
}
```
Use this for:
- Databases
- Critical prod infrastructure
- Your mental health
Create before destroy
By default, Terraform destroys a resource before creating a new one — not ideal for things like network interfaces or load balancers.
This flips the behavior:
```hcl
lifecycle {
  create_before_destroy = true
}
```
Now, Terraform builds the replacement first, then tears down the old one — smoother transitions, no downtime.
Ignore changes
Sometimes, something outside Terraform (a human or another tool) changes a resource. But you don’t want Terraform to freak out.
```hcl
lifecycle {
  ignore_changes = [tags, instance_type]
}
```
This tells Terraform:
“Yes, I know the tags changed. Chill.”
Use this carefully: it can mask problems if you ignore too much.
Real-world pro tips:
- Use `prevent_destroy` for production-only resources
- Use `create_before_destroy` for zero-downtime upgrades
- Don’t go wild with `ignore_changes`; it’s not a get-out-of-jail-free card
The `lifecycle` block gives you guardrails and override switches, perfect when Terraform’s default behavior isn’t good enough.
Next up: let’s get a little creative with config generation. Time to explore files and templates.
Section 14: Files & Templates - inject custom configs
how to generate nginx configs, startup scripts, or anything else terraform doesn’t natively handle
Let’s say you’re spinning up an EC2 instance and you want it to launch with a pre-baked config: maybe an NGINX file, a cloud-init script, or even a weird bash script you hacked together at 2 AM.
Terraform can’t directly write files to your servers, but it can render templates and pass them as `user_data`, config maps, or parameters.
Enter: `templatefile()`.
The `templatefile()` function
Terraform lets you load external files and inject dynamic values:
Step 1: Create a template file (e.g. `nginx.tpl`):

```nginx
server {
  listen 80;
  server_name ${domain_name};

  location / {
    root /var/www/html;
  }
}
```
Step 2: Load and render it in Terraform:
```hcl
data "template_file" "nginx_config" {
  template = file("${path.module}/nginx.tpl")

  vars = {
    domain_name = "example.com"
  }
}
```
Or, even simpler in newer Terraform (the standalone template provider behind `template_file` is deprecated, so prefer the built-in function):
```hcl
templatefile("${path.module}/nginx.tpl", {
  domain_name = "example.com"
})
```
Real-world use cases:
- Cloud-init or startup scripts for EC2
- Helm chart values
- Kubernetes manifests
- NGINX, Apache, or app config files
- Any bash/powershell voodoo you don’t want inline
Pro tips:
- Keep templates small and modular
- Don’t write an entire Ansible playbook inside a `.tpl`; this is still Terraform
- Use `join()`, `for`, and `if` expressions for extra logic
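Those `for` and `if` directives use `%{ ... }` syntax inside the template itself. A sketch, assuming a hypothetical `upstreams.tpl` that renders one `server` line per backend IP:

```hcl
# Contents of upstreams.tpl (template directives, shown as comments here):
#   upstream app {
#   %{ for ip in backend_ips ~}
#     server ${ip}:8080;
#   %{ endfor ~}
#   }

# Rendering it with a list input:
locals {
  nginx_upstreams = templatefile("${path.module}/upstreams.tpl", {
    backend_ips = ["10.0.1.10", "10.0.1.11"]
  })
}
```

The `~` strips the newline after each directive, keeping the rendered output tidy.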
💡 You can even embed template output inside `user_data` to bootstrap instances on launch:
```hcl
resource "aws_instance" "web" {
  ami           = var.ami_id
  instance_type = "t3.micro"

  user_data = templatefile("${path.module}/cloud-init.tpl", {
    domain_name = var.domain_name
  })
}
```
Templates let you bridge the gap between Terraform and real-world provisioning, giving you flexibility without jumping into a whole new tool.
Now, to wrap up this chaos, let’s talk about protecting your secrets like a responsible adult.
Section 15: Sensitive data - don’t leak secrets on GitHub
how to keep your API keys safe and your repo free from shame
Let’s face it: we’ve all seen it.
Someone pushes `terraform.tfvars` to GitHub with `aws_secret_key = "lol123"`, and suddenly they’re the star of a Twitter thread about leaked credentials.
Terraform gives you built-in ways to protect sensitive data, but you need to use them properly.
Marking outputs as sensitive
If you’re outputting something like a password or secret ARN, make sure you mask it:
```hcl
output "db_password" {
  value     = aws_secretsmanager_secret.db_pass.secret_string
  sensitive = true
}
```
Terraform still tracks the value and exposes it to scripts (via `terraform output -json`, for example), but it won’t display it in the CLI. No more “oops” moments in screen-shared meetings.
Avoid hardcoding secrets
Don’t do this:
```hcl
access_key = "AKIAREALBADIDEA"
secret_key = "thiswillbeonleakcheckin3mins"
```
Instead, use environment variables or `.tfvars` files that are never committed:

```
export TF_VAR_access_key="..."
export TF_VAR_secret_key="..."
```
Or keep your secrets in a secure store like:
- AWS Secrets Manager
- Vault by HashiCorp
- Parameter Store (SSM)
- GitHub Actions secrets (for CI)
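Then pull the secret at plan time with a data source instead of pasting it into your config. For example, with SSM Parameter Store (the parameter name here is hypothetical):

```hcl
data "aws_ssm_parameter" "db_password" {
  name = "/myapp/prod/db_password"
}

# Note: the decrypted value still lands in your state file
resource "aws_db_instance" "db" {
  # ...
  password = data.aws_ssm_parameter.db_password.value
}
```

The trade-off: the value ends up in your state, which is one more reason remote, encrypted state matters.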
.gitignore like your life depends on it
Make sure these are in your `.gitignore`:

```
*.tfstate
*.tfvars
.terraform/
crash.log
```
Then triple check with `git status` before you push.
Bonus tool: tfsec
Want to automatically scan your Terraform code for security issues?
Check out tfsec:
```
brew install tfsec
tfsec .
```
It flags things like:
- Open security groups
- Public S3 buckets
- Unencrypted volumes
- Hardcoded secrets (👀)
TL;DR:
- Mark sensitive outputs
- Don’t hardcode secrets
- Use environment variables
- Secure your state
- Scan your code regularly
- Don’t be that developer on Reddit
Conclusion: You now speak Terraform. Deploy responsibly.
you’re officially dangerous in the best way
You made it. You’ve leveled up from “I think I ran `terraform init` once” to “I know how to build, destroy, and template cloud infrastructure like a boss.”
Here’s a quick recap of what you’ve mastered:
- Providers and resources?
- Variables, outputs, and modules?
- State files, dependencies, and lifecycle tricks?
- Not blowing up prod by accident? Hopefully
Terraform isn’t just about spinning up EC2s and buckets; it’s about infrastructure that lives in version control, scales with your team, and doesn’t freak out when someone renames a file.
And now you’ve got 15 rock-solid Terraform concepts to anchor your projects, from the basics to the stuff that actually saves your skin in production.
What to do next:
- Build your own mini project: something real, like a static site with S3 + CloudFront
- Try breaking and fixing your infra; that’s how you learn
- Explore more advanced features like `for_each`, `dynamic`, and custom providers
- Integrate Terraform into CI/CD for automated deployments
- Join the community: ask questions, share configs, and brag about your S3 bucket names
Helpful resources, hand-picked by devs who’ve been burned
- Terraform Docs (official): read it, love it, Ctrl+F it
- Learn Terraform by HashiCorp: beginner to intermediate
- Awesome Terraform GitHub: curated tools and modules
- tfsec: static analysis security scanner
- Terragrunt: if you want to go deeper down the rabbit hole
- Terraform Visualizer: see your plans as diagrams
- Infracost: see how much your changes will cost you before you apply
That’s a wrap!
Infrastructure as code is no longer optional; it’s table stakes.
But with Terraform and the knowledge you now have, you’re not just writing infra, you’re crafting it.
Now go forth and deploy, but maybe skip Friday afternoons. 😉