Terraform is an Infrastructure as Code (IaC) tool made by HashiCorp. Instead of clicking around in the AWS or Azure console to create servers, databases, and networks, you write code that describes the infrastructure you want, and Terraform makes it happen. Automatically. Repeatably. Safely.
When I started learning Terraform, I came across a lot of interesting and unexpected things that aren’t usually explained in most tutorials.
Most guides focus on the basics: how to create resources, run plan, and apply changes. But as I went deeper, I found concepts and behaviors that really changed how I understand and use Terraform.
So in this post, I’m sharing those less obvious but important things I learned along the way, the kind of knowledge that actually makes you more confident using Terraform in real-world scenarios.
Let's dive in.
1. Implicit vs Explicit Dependencies
When Terraform creates infrastructure, it needs to know the right order to create things. You can't create a subnet before the VPC it belongs to, for example. Terraform handles this through dependencies.
Implicit Dependencies (The Automatic Way)
When one resource references an attribute of another, Terraform automatically figures out the order. You don't need to write anything extra.
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "subnet1" {
vpc_id = aws_vpc.main.id # This reference tells Terraform: VPC first!
cidr_block = "10.0.1.0/24"
}
Terraform sees that subnet1 needs aws_vpc.main.id, so it automatically creates the VPC before the subnet. Clean, simple, and the preferred approach.
Explicit Dependencies (The Manual Way)
Sometimes there's no direct attribute reference between resources, but you still need one to be created before the other. In that case, use depends_on.
resource "aws_instance" "app" {
ami = "ami-123456"
instance_type = "t2.micro"
depends_on = [aws_security_group.sg]
}
resource "aws_security_group" "sg" {
name = "app-sg"
}
Even though the instance doesn't reference the security group directly, depends_on forces Terraform to create the security group first.
| | Implicit | Explicit |
|---|---|---|
| Setup | Automatic | Manual (depends_on) |
| When to use | Most of the time | Hidden or indirect dependency |
| Recommended | Yes | Only when needed |
Simple Rule: If Terraform can detect the relationship → use implicit. If it can't → use depends_on.
2. Managing State with terraform state
Terraform keeps a state file that tracks all the infrastructure it manages. The terraform state command lets you inspect and manipulate this file.
Here are the subcommands you'll use most often:
# See everything Terraform is currently managing
terraform state list
# Inspect detailed attributes of one resource
terraform state show aws_instance.my_ec2
# Rename a resource without recreating it (useful when refactoring code)
terraform state mv aws_instance.old aws_instance.new
# Stop Terraform from managing a resource (the real resource still exists!)
terraform state rm aws_instance.my_ec2
# Download the raw state file (useful for backups)
terraform state pull
# Upload a state file (use with extreme caution)
terraform state push terraform.tfstate
# Change provider references inside state
terraform state replace-provider \
registry.terraform.io/hashicorp/aws \
registry.terraform.io/custom/aws
| Command | What It Does |
|---|---|
| list | Show all tracked resources |
| show | Inspect one resource in detail |
| mv | Rename/move without recreating |
| rm | Remove from state (cloud resource stays!) |
| pull | Download state JSON |
| push | Upload state (risky — overwrites!) |
| replace-provider | Swap provider namespace |
⚠️ Always back up your state file before making manual changes. A corrupted state file is one of the worst things that can happen to a Terraform project.
3. Heredoc Syntax
Sometimes you need to pass a multi-line string into a resource — like a shell script for an EC2 instance's startup commands, or a JSON config block. That's where heredoc syntax comes in.
Basic Heredoc
resource "aws_instance" "example" {
user_data = <<EOF
#!/bin/bash
echo "Hello, World"
apt update
apt install -y nginx
EOF
}
Everything between <<EOF and EOF is treated as a single string.
Indented Heredoc (Recommended)
Using <<-EOF (note the dash) lets you indent the content for cleaner, more readable code:
resource "aws_instance" "example" {
user_data = <<-EOF
#!/bin/bash
echo "Cleaner indentation"
apt update
EOF
}
With Variable Interpolation
You can use Terraform variables inside a heredoc:
variable "app_name" {
default = "MyApp"
}
output "welcome_message" {
value = <<-EOF
Hello from ${var.app_name}
Terraform is managing this infrastructure.
EOF
}
💡 Tips: The closing delimiter must be on its own line with no trailing spaces. Use <<-EOF whenever you can — it keeps your code visually clean.
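For JSON specifically, a common alternative to a heredoc is the built-in jsonencode() function, which builds the string from an HCL object and guarantees valid JSON. A minimal sketch (the role name and policy here are illustrative):

```hcl
resource "aws_iam_role" "example" {
  name = "example-role"

  # jsonencode turns this HCL object into a JSON string,
  # avoiding the quoting mistakes that heredoc JSON invites
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}
```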
4. Provisioners
Provisioners let you run scripts or commands on a resource after it's created (or before it's destroyed). Think of them as a way to do last-mile setup that Terraform's declarative model doesn't cover.
⚠️ Important: Terraform itself recommends treating provisioners as a last resort. Prefer native options like user_data, cloud-init, or configuration management tools like Ansible whenever possible.
local-exec — Runs on Your Machine
resource "aws_instance" "example" {
ami = "ami-123456"
instance_type = "t2.micro"
provisioner "local-exec" {
command = "echo 'Instance created at ${self.public_ip}'"
}
}
Use this to trigger local scripts, send notifications, or log events.
remote-exec — Runs Inside the Resource
resource "aws_instance" "example" {
ami = "ami-123456"
instance_type = "t2.micro"
connection {
type = "ssh"
user = "ubuntu"
private_key = file("key.pem")
host = self.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo apt update",
"sudo apt install -y nginx"
]
}
}
file — Copies Files to the Resource
provisioner "file" {
source = "app.conf"
destination = "/tmp/app.conf"
}
Like remote-exec, the file provisioner needs a connection block so Terraform can reach the resource.
Destroy-Time Provisioner
provisioner "local-exec" {
when = destroy
command = "echo 'Cleaning up before destroy'"
}
| Provisioner | Runs Where | Common Use |
|---|---|---|
| local-exec | Your local machine | Notifications, logging, local scripts |
| remote-exec | Inside the created resource | Package installs, service config |
| file | Local → Remote | Upload config files or scripts |
5. Provisioner Behavior
Understanding when and how provisioners run is important — especially because they can behave in surprising ways.
They only run on creation (or recreation), not every apply.
If you change a tag on an EC2 instance, Terraform updates the tag — but it does not re-run any provisioners. Provisioners only re-run if the resource is destroyed and recreated.
They run in order.
If you define multiple provisioners on one resource, they execute sequentially from top to bottom.
Failure stops everything by default.
provisioner "remote-exec" {
inline = ["exit 1"]
on_failure = continue # use "continue" to ignore errors, "fail" to stop (default)
}
They are NOT tracked in state.
Terraform records that a resource exists, but it has no idea what your provisioner actually changed inside that resource. This makes provisioners hard to reason about over time.
Key insight: Provisioners break Terraform's clean declarative model. The more you rely on them, the harder your infrastructure becomes to maintain and reproduce.
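If you genuinely need a provisioner to re-run on demand, one pattern is to attach it to a terraform_data resource (built into Terraform 1.4+) and change its triggers_replace value; replacing the resource re-runs the provisioner. A sketch, where var.app_version is an assumed variable for illustration:

```hcl
resource "terraform_data" "bootstrap" {
  # Changing this value replaces the resource,
  # which re-runs the attached provisioner
  triggers_replace = [var.app_version]

  provisioner "local-exec" {
    command = "echo 'Re-running bootstrap for version ${var.app_version}'"
  }
}
```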
6. Taint and Replace
Sometimes a resource ends up in a bad state — a provisioner failed midway through, someone manually changed it, or it's just broken. You need to force Terraform to destroy and recreate it.
The Old Way: terraform taint (Deprecated)
terraform taint aws_instance.my_ec2 # Mark as "needs replacement"
terraform apply # Destroys + recreates on next apply
terraform untaint aws_instance.my_ec2 # Change your mind? Undo it.
The Modern Way: -replace Flag (Recommended)
terraform apply -replace="aws_instance.my_ec2"
This does the same thing — destroys and recreates — but in a single step, with no intermediate state change. It's cleaner and less error-prone.
| | taint (legacy) | -replace (modern) |
|---|---|---|
| Steps | 2 (taint + apply) | 1 |
| Recommended | No | Yes |
⚠️ Recreating a resource can cause downtime. Always run terraform plan first to understand the full impact before applying.
7. Debugging Terraform
When things go wrong (and they will), here's how to figure out what's happening.
Enable Detailed Logs
export TF_LOG=DEBUG
export TF_LOG_PATH=terraform.log
terraform apply
Log levels from least to most verbose: ERROR → WARN → INFO → DEBUG → TRACE
Use DEBUG for most issues. Only reach for TRACE when you're really stuck — it's extremely noisy.
Your Standard Debug Toolkit
terraform validate # Check for syntax errors
terraform fmt # Auto-fix formatting (easier to spot mistakes)
terraform plan # See what Terraform wants to do
terraform state list # What is Terraform managing?
terraform state show <resource> # What are the actual values?
The Interactive Console
terraform console
This opens a REPL where you can test expressions interactively:
> var.environment
"production"
> length(var.subnet_ids)
3
> aws_instance.web.public_ip
"54.23.11.100"
It's incredibly useful for debugging variables, expressions, and outputs without running a full apply.
Quick Debug Reference
| Problem | Where to Look |
|---|---|
| Resource not created | terraform plan + check dependencies |
| Wrong values | terraform console + check variable definitions |
| Provisioner fails | TF_LOG=DEBUG + check SSH connection |
| Unexpected changes | terraform state show vs real infra |
| Provider errors | Check authentication and permissions |
💡 Never commit your debug log files — they may contain API keys and other secrets.
8. Importing Existing Infrastructure
You've probably inherited some infrastructure that was created manually — either by clicking around in the AWS console or by a script. terraform import lets you bring that existing infrastructure under Terraform's management.
Important: Import only adds the resource to Terraform's state. It does NOT generate .tf code for you, and it doesn't touch the real infrastructure.
The Classic Import Workflow
# Step 1: Write the resource config in your .tf file
resource "aws_instance" "my_ec2" {
ami = "ami-xxxx"
instance_type = "t2.micro"
}
# Step 2: Import the real resource into state
terraform import aws_instance.my_ec2 i-1234567890abcdef0
# Step 3: Run plan — you'll probably see differences
terraform plan
# Update your .tf config to match until plan shows no changes
The Modern Way: Import Blocks (Terraform 1.5+)
import {
to = aws_instance.my_ec2
id = "i-1234567890abcdef0"
}
Then just run terraform apply. This approach is declarative, version-controlled, and much cleaner. Since Terraform 1.5 you can even have Terraform generate starter configuration for imported resources with terraform plan -generate-config-out=generated.tf.
| Feature | Classic Import | Import Block (1.5+) |
|---|---|---|
| Declarative | No | Yes |
| Version controlled | No | Yes |
| Recommended | For older setups | Preferred |
9. Modules
As your infrastructure grows, putting everything in one giant main.tf file becomes unmanageable. Modules are Terraform's solution — they let you organize, reuse, and share infrastructure code.
Think of a module like a function: it takes inputs (variables), does some work (creates resources), and returns outputs.
Module Directory Structure
modules/
└── webserver/
├── main.tf # The actual resources
├── variables.tf # Input variables
└── outputs.tf # Values exposed to the caller
Defining a Module
# modules/webserver/variables.tf
variable "instance_type" {
type = string
default = "t2.micro"
}
variable "ami" {
type = string
}
# modules/webserver/main.tf
resource "aws_instance" "web" {
instance_type = var.instance_type
ami = var.ami
}
# modules/webserver/outputs.tf
output "public_ip" {
value = aws_instance.web.public_ip
}
Calling a Module
# root main.tf
module "web" {
source = "./modules/webserver"
instance_type = "t2.micro"
ami = "ami-123456"
}
# Use the module's output
output "server_ip" {
value = module.web.public_ip
}
Module Sources
Modules can come from anywhere:
# Local folder
source = "./modules/webserver"
# Git repository
source = "git::https://github.com/your-org/tf-modules.git//webserver"
# Terraform Registry (public or private)
source = "hashicorp/consul/aws"
Best Practice: Keep modules small and focused on a single concern. Use variables for flexibility and outputs to expose only what callers need.
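Registry sources also accept a version constraint, which local paths and plain Git URLs do not. A sketch assuming the widely used public terraform-aws-modules VPC module (the inputs shown are that module's, not core Terraform):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0" # pin registry modules just like providers

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}
```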
10. plan -refresh=false
By default, when you run terraform plan, Terraform does two things:
- Refresh — queries your cloud provider (AWS, Azure, etc.) to get the current real state of all resources
- Compare — checks that against your code and state file, then shows what needs to change
The -refresh=false flag skips step 1. Terraform works only from its cached state file, without making any API calls to check live infrastructure.
terraform plan # Default: queries live infra
terraform plan -refresh=false # Faster: uses cached state only
When is this useful?
- In CI/CD pipelines where speed matters and you're confident the state matches reality
- When cloud API calls are slow or rate-limited
- When you're iterating quickly on code changes and don't need drift detection
When is it risky?
- If someone has made manual changes outside Terraform, you won't detect them
- The apply could produce unexpected results if the cached state is stale
💡 For day-to-day manual runs, always keep refresh enabled. Use -refresh=false only in controlled environments.
11. The file() Function
The file() function reads a local file and returns its contents as a string. It's one of the most commonly used functions in real Terraform projects.
# Pass a shell script to an EC2 instance's startup commands
resource "aws_instance" "web" {
ami = "ami-123456"
instance_type = "t2.micro"
user_data = file("scripts/setup.sh")
}
# Upload a config file to S3
resource "aws_s3_object" "config" {
bucket = aws_s3_bucket.my_bucket.id
key = "app.conf"
content = file("config/app.conf")
}
If you need dynamic content — where parts of the file change based on variables — use templatefile() instead:
user_data = templatefile("scripts/setup.sh.tpl", {
app_name = var.app_name
port = var.port
})
| Function | Input | Use When |
|---|---|---|
| file() | Static file | Content never changes |
| templatefile() | File + variables map | Content has dynamic parts |
12. Built-in Functions
Terraform includes a rich set of built-in functions for manipulating values. Here's a practical overview of each category.
Numeric Functions
abs(-5) # → 5 (absolute value)
ceil(2.3) # → 3 (round up)
floor(2.7) # → 2 (round down)
max(4, 7, 2) # → 7
min(4, 7, 2) # → 2
pow(2, 3) # → 8 (2 to the power of 3)
String Functions
upper("hello") # → "HELLO"
lower("WORLD") # → "world"
trimspace(" hello ") # → "hello" (trim() needs an explicit cutset argument)
replace("a-b-c", "-", "_") # → "a_b_c"
substr("Terraform", 0, 5) # → "Terra"
join("-", ["a", "b", "c"]) # → "a-b-c"
split("-", "a-b-c") # → ["a", "b", "c"]
length("hello") # → 5
Collection Functions
concat([1, 2], [3, 4]) # → [1, 2, 3, 4]
length([1, 2, 3]) # → 3
element(["a", "b", "c"], 1) # → "b"
contains([1, 2, 3], 2) # → true
distinct([1, 2, 2, 3]) # → [1, 2, 3]
flatten([[1, 2], [3, 4]]) # → [1, 2, 3, 4]
Map Functions
merge({a = 1}, {b = 2}) # → {a = 1, b = 2}
lookup({a = 1}, "b", 0) # → 0 (returns default if key missing)
keys({a = 1, b = 2}) # → ["a", "b"]
values({a = 1, b = 2}) # → [1, 2]
zipmap(["a", "b"], [1, 2]) # → {a = 1, b = 2}
Type Conversion Functions
tostring(10) # → "10"
tonumber("5") # → 5
tolist([1, 2]) # → [1, 2]
tomap({a = 1}) # → {a = 1}
Real-World Example Combining Functions
variable "names" {
default = ["alice", "bob", "alice", "carol"]
}
output "unique_upper_names" {
value = [for n in distinct(var.names) : upper(n)]
}
# Result: ["ALICE", "BOB", "CAROL"]
13. Operators and Conditional Expressions
Arithmetic Operators
5 + 3 # → 8
5 - 3 # → 2
5 * 3 # → 15
10 / 2 # → 5
10 % 3 # → 1 (remainder)
2 ** 3 # → 8 (exponent)
Comparison Operators
5 == 5 # → true
5 != 3 # → true
5 > 3 # → true
3 < 5 # → true
5 >= 5 # → true
3 <= 5 # → true
Logical Operators
true && false # → false (AND)
true || false # → true (OR)
!true # → false (NOT)
The Ternary (Conditional) Expression
Terraform uses the same ternary pattern as many programming languages:
condition ? value_if_true : value_if_false
variable "environment" {
default = "prod"
}
output "instance_type" {
value = var.environment == "prod" ? "t2.large" : "t2.micro"
}
# prod → "t2.large", anything else → "t2.micro"
Cleaner Pattern: Map Lookup
For more than two options, nested ternaries get messy fast. A map lookup is far more readable:
locals {
instance_sizes = {
prod = "t2.large"
stage = "t2.medium"
dev = "t2.micro"
}
}
output "instance_type" {
value = local.instance_sizes[var.environment]
}
💡 Prefer map lookups over nested ternaries. They are easier to read, test, and extend when you add new environments.
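If var.environment might hold a value that isn't in the map, the lookup() function (from the built-in functions covered earlier) adds a safe default instead of raising an error:

```hcl
output "instance_type" {
  # Falls back to "t2.micro" for any environment missing from the map
  value = lookup(local.instance_sizes, var.environment, "t2.micro")
}
```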
14. Workspaces
Workspaces let you use a single Terraform configuration to manage multiple separate environments — each with its own isolated state file.
Every Terraform project starts with one workspace called default. You can create more as needed.
terraform workspace list # * default (the * shows current)
terraform workspace new dev # Create and switch to "dev"
terraform workspace select staging # Switch to "staging"
terraform workspace show # Print current workspace name
terraform workspace delete dev # Delete (can't delete current)
Using Workspace Name in Resources
resource "aws_s3_bucket" "app" {
bucket = "myapp-${terraform.workspace}-data"
}
When the workspace is dev, this creates myapp-dev-data. When it's prod, it creates myapp-prod-data. Same code — separate, isolated infrastructure.
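You can also combine terraform.workspace with a conditional expression to vary sizing per environment. A small sketch:

```hcl
resource "aws_instance" "web" {
  ami = "ami-123456"

  # Bigger instances in prod, small ones everywhere else
  instance_type = terraform.workspace == "prod" ? "t2.large" : "t2.micro"
}
```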
When to Use Workspaces (and When Not To)
Good fit: Simple dev/staging/prod splits where all environments use almost the same config.
Not a good fit: Large, complex environments with significantly different configurations or many dependencies. For those, use separate directories with separate state backends.
15. Mutable vs Immutable Infrastructure
This is one of the most important conceptual distinctions in modern DevOps.
Mutable Infrastructure
You create a server once, and then change it in place over time — installing updates, patching software, modifying configs.
# Change instance_type → Terraform updates the existing instance
resource "aws_instance" "web" {
instance_type = "t2.small" # was t2.micro
}
Pros: Faster individual updates, less resource churn.
Cons: Over time, each server accumulates a unique history of changes. This is called configuration drift, and it makes environments hard to reproduce and debug.
Immutable Infrastructure
Instead of changing a server, you replace it entirely with a new one built from a fresh image.
# Change AMI → Terraform destroys old instance, creates new one
resource "aws_instance" "web" {
ami = "ami-new-version-456" # was ami-old-version-123
}
Pros: Environments are consistent and reproducible. Rollbacks are easy — just deploy the previous version. No drift.
Cons: Requires automation maturity (image baking with Packer, CI/CD pipelines).
Simple way to remember:
Mutable → "Fix the server"
Immutable → "Replace the server"
Modern DevOps strongly favors immutable infrastructure. Terraform makes this natural when combined with tools like Packer and Auto Scaling Groups.
16. Configuration Drift
Configuration drift is the gap between what your Terraform code says your infrastructure should look like, and what it actually looks like in the cloud.
It happens when changes are made outside of Terraform:
- Someone SSHes into a server and changes a config file
- A developer resizes an instance through the AWS console
- An automated script modifies a resource directly
- A hotfix is applied directly to production
Why it's a problem:
Drift means your infrastructure is no longer reproducible. If you need to rebuild, you'll get something different from what's running. It also means bugs that only exist in drifted environments, which are notoriously hard to track down.
How Terraform detects it:
terraform plan
Terraform compares your code (desired state) against live infrastructure (actual state) and shows you the differences. Run this regularly — it's your drift detector.
How to prevent it:
- Use immutable infrastructure (replace, don't patch)
- Route all changes through Terraform and CI/CD — no manual console edits
- Run terraform plan on a schedule to catch drift early
- Limit direct SSH access to servers
17. Lifecycle Rules
The lifecycle block gives you control over how Terraform handles a resource's creation, update, and deletion.
resource "aws_instance" "web" {
ami = "ami-123"
instance_type = "t2.micro"
lifecycle {
create_before_destroy = true
prevent_destroy = true
ignore_changes = [tags]
replace_triggered_by = [aws_ami.new_image]
}
}
create_before_destroy
By default, when Terraform needs to replace a resource, it destroys the old one first, then creates the new one. This causes downtime.
lifecycle {
create_before_destroy = true
}
With this flag, Terraform creates the new resource first, then destroys the old one. Much better for production.
prevent_destroy
Protect critical resources from accidental deletion:
lifecycle {
prevent_destroy = true
}
If anyone runs terraform destroy targeting this resource, Terraform throws an error instead of deleting it. Essential for databases and storage.
ignore_changes
Tell Terraform to ignore changes to specific attributes:
lifecycle {
ignore_changes = [tags, instance_type]
}
Useful when an external system modifies a resource (like an auto-scaler changing instance counts), and you don't want Terraform to fight it.
⚠️ Use ignore_changes carefully. It can hide real configuration problems by instructing Terraform to look away.
replace_triggered_by
Force a resource to be recreated when something else changes:
lifecycle {
replace_triggered_by = [aws_ami.new_image]
}
When the AMI changes, this instance will be rebuilt — even if the instance's own config didn't change.
| Rule | What It Does | Best Used For |
|---|---|---|
| create_before_destroy | New before old is deleted | Zero-downtime updates |
| prevent_destroy | Blocks accidental deletion | Databases, critical storage |
| ignore_changes | Ignores drift on listed attributes | Externally managed attributes |
| replace_triggered_by | Forces rebuild on dependency change | Immutable infra patterns |
18. Meta-Arguments
Meta-arguments are special Terraform arguments that control how Terraform manages a resource — not what the resource itself looks like. They work on any resource type.
count — Create Multiple Copies
resource "aws_instance" "web" {
count = 3
instance_type = "t2.micro"
ami = "ami-123456"
}
# Creates: web[0], web[1], web[2]
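Inside a counted resource, count.index (0, 1, 2, ...) distinguishes the copies, for example to give each one a unique Name tag:

```hcl
resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = {
    Name = "web-${count.index}" # web-0, web-1, web-2
  }
}
```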
for_each — Create Named Resources from a Map
resource "aws_instance" "web" {
for_each = {
frontend = "t2.micro"
backend = "t2.small"
worker = "t2.medium"
}
instance_type = each.value
ami = "ami-123456"
}
# Creates: web["frontend"], web["backend"], web["worker"]
depends_on — Force Explicit Ordering
resource "aws_instance" "app" {
ami = "ami-123456"
instance_type = "t2.micro"
depends_on = [aws_security_group.app_sg]
}
lifecycle — Customize Resource Behavior
(Covered in detail in the previous section.)
provider — Use a Specific Provider Configuration
resource "aws_instance" "eu_server" {
provider = aws.eu_west
ami = "ami-eu-123"
instance_type = "t2.micro"
}
Useful for multi-region setups where you have multiple provider aliases configured.
| Meta-Argument | Purpose |
|---|---|
| count | Create N copies (index-based) |
| for_each | Create named resources from map/set |
| depends_on | Force dependency order |
| lifecycle | Customize create/update/delete behavior |
| provider | Choose provider alias/configuration |
19. for_each vs count
Both count and for_each create multiple resources, but they behave very differently when you make changes — and the difference matters a lot in production.
The Problem with count
variable "servers" {
default = ["web", "api", "worker"]
}
resource "aws_instance" "servers" {
count = length(var.servers)
instance_type = "t2.micro"
ami = "ami-123456"
}
# Creates: servers[0] (web), servers[1] (api), servers[2] (worker)
Now imagine you need to remove "api" from the middle:
default = ["web", "worker"] # removed "api"
Terraform sees that servers[1] now should be "worker" (which used to be servers[2]). The result: it destroys and recreates both servers[1] and servers[2]. You wanted to delete one server, but you accidentally triggered a replacement of two.
The Solution: for_each
resource "aws_instance" "servers" {
for_each = {
web = "t2.micro"
api = "t2.small"
worker = "t2.medium"
}
instance_type = each.value
ami = "ami-123456"
}
# Creates: servers["web"], servers["api"], servers["worker"]
Remove api:
for_each = {
web = "t2.micro"
worker = "t2.medium"
}
Terraform deletes only servers["api"]. The others are untouched.
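for_each also works with a set of strings when you only need distinct names rather than per-item values; each.key and each.value are then identical:

```hcl
variable "server_names" {
  type    = set(string)
  default = ["web", "worker"]
}

resource "aws_instance" "servers" {
  for_each      = var.server_names
  ami           = "ami-123456"
  instance_type = "t2.micro"

  tags = {
    Name = each.key # "web", "worker"
  }
}
```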
| | count | for_each |
|---|---|---|
| Identifier | Index (0, 1, 2) | Key ("web", "api") |
| Remove middle item | Can shift and recreate others | Only that item deleted |
| Best for | Identical, interchangeable resources | Named, distinct resources |
| Production use | Use with caution | Strongly preferred |
🔥 Rule of thumb: Default to for_each. Use count only for truly identical resources where order doesn't matter.
20. Version Constraints
Terraform configurations can specify which versions of Terraform itself and its providers are allowed. This is crucial for keeping deployments stable and preventing breaking changes from sneaking in during upgrades.
terraform {
required_version = ">= 1.3.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
Constraint Operators Explained
version = "= 5.0.0" # Exactly this version — nothing else
version = "!= 5.0.0" # Anything except this version
version = ">= 5.0.0" # This version or newer (⚠️ can reach 6.x — risky!)
version = "< 6.0.0" # Must be below this version
version = "~> 5.0" # >= 5.0.0 and < 6.0.0 (safe minor upgrades)
version = "~> 5.2.1" # >= 5.2.1 and < 5.3.0 (safe patch upgrades only)
The ~> operator (called the pessimistic constraint operator) is the industry standard. It allows safe incremental upgrades while blocking potentially breaking major version changes.
Why This Matters
Without version constraints, a terraform init today and another six months from now might pull completely different provider versions. Your infrastructure code could start behaving differently without you changing a single line.
With proper constraints, everyone on the team — and every CI/CD run — uses compatible versions.
💡 Best Practice: Use ~> x.y for providers (allows patch and minor updates, blocks major). Pin your Terraform version with >= x.y.z, < (next major) for similar safety.
Wrapping Up
You've just covered the full breadth of core Terraform concepts — from how it tracks infrastructure with state, to how it handles dependencies, functions, modules, and deployments.
Here's a quick summary of the most important ideas to internalize:
- Use implicit dependencies whenever possible; reach for depends_on only when needed
- Treat state as sacred — back it up, don't edit it manually without care
- Provisioners are a last resort — prefer user_data and proper config management tools
- Modules are how you scale — small, reusable, single-purpose
- Prefer for_each over count in almost every production situation
- Immutable infrastructure + lifecycle rules = stable, reproducible deployments
- Version constraints protect you from surprise breaking changes
Terraform rewards consistent habits. The teams that get the most out of it are the ones that commit to: all changes through code, no manual console edits, modules for reuse, and regular terraform plan runs to catch drift early.
Happy building! 🚀
Okay, that’s it for this article.
Also, if you have any questions about this or anything else, please feel free to let me know in a comment below or on Instagram, Facebook, or Twitter.
Thank you for reading this article, and see you soon in the next one! ❤️