Most backend developers write code that runs on infrastructure someone
else built. They open a ticket, wait for an ops team to provision a
database, and get a connection string back two days later. I wanted to
understand what that ops team actually does, so I built it myself.
This is a complete local infrastructure setup that provisions everything
with one command and tears it all down just as cleanly.
What gets provisioned
Running terraform apply creates a PostgreSQL database in Docker with its
own network, a Kubernetes namespace with environment labels, a full
Deployment, Service and Ingress for the application, a ConfigMap for
configuration, Kubernetes Secrets for sensitive values, and a complete
Prometheus and Grafana monitoring stack.
Running terraform destroy removes all of it. No manual cleanup, no
orphaned containers, no forgotten resources sitting idle.
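The PostgreSQL piece, for example, boils down to a handful of Docker provider resources. This is a minimal sketch assuming the kreuzwerker/docker provider; the names, image tag, and variables are illustrative, not the repo's exact code:

```hcl
# Sketch: PostgreSQL in Docker via the kreuzwerker/docker provider.
resource "docker_network" "postgres" {
  name = "postgres-net"
}

resource "docker_image" "postgres" {
  name = "postgres:16-alpine" # illustrative tag
}

resource "docker_container" "postgres" {
  name  = "postgres"
  image = docker_image.postgres.image_id

  networks_advanced {
    name = docker_network.postgres.name
  }

  env = [
    "POSTGRES_DB=app",
    "POSTGRES_USER=app",
    "POSTGRES_PASSWORD=${var.postgres_password}", # sensitive, from tfvars
  ]

  ports {
    internal = 5432
    external = 5432
  }
}
```

Because the network, image and container are all Terraform resources, terraform destroy removes the lot in reverse dependency order.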
Why Terraform instead of kubectl or Docker Compose
Docker Compose is great for running services locally, but it has only a
thin notion of state. There is no plan or diff step: if something fails
halfway through, you have no reliable record of what was created and
what was not.
kubectl apply pushes manifests to the cluster but has no dependency
management. You need to know the correct order to apply things, and
rolling back means manually hunting down and deleting resources one by
one.
Terraform tracks state. It knows what exists, what needs creating, and
what needs updating. Run it ten times and it only makes changes when
something is actually different. That predictability is what makes it
safe to use in production.
The module structure
I split the infrastructure into three modules so each piece is
independently testable and reusable:
```hcl
module "postgres" {
  source = "./modules/postgres"
}

module "kubernetes" {
  source     = "./modules/kubernetes"
  depends_on = [module.postgres]
}

module "monitoring" {
  source     = "./modules/monitoring"
  depends_on = [module.kubernetes]
}
```
The depends_on blocks enforce ordering. Kubernetes waits for PostgreSQL,
monitoring waits for Kubernetes. Terraform handles the sequencing
automatically so you never have to think about it.
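An alternative worth knowing: when one module consumes another module's outputs, Terraform infers the same ordering implicitly, with no depends_on needed. The output and variable names here are illustrative assumptions, not the repo's actual interface:

```hcl
# Hypothetical wiring: referencing module.postgres outputs makes the
# kubernetes module depend on the postgres module automatically.
module "kubernetes" {
  source = "./modules/kubernetes"

  database_host = module.postgres.host # illustrative output names
  database_port = module.postgres.port
}
```

Explicit depends_on is still the right tool when there is an ordering requirement but no natural value to pass between the modules.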
Secrets management
One thing that catches people out with Kubernetes is putting secrets in
ConfigMaps. ConfigMaps are not encrypted. They are just base64 encoded
which is not the same thing at all.
I store sensitive values in Kubernetes Secrets and non-sensitive
configuration in ConfigMaps, then mount both into the container using
env_from. The application reads everything as environment variables
without needing to know where they came from. Swapping the secret source
from a Kubernetes Secret to HashiCorp Vault requires no application code
changes at all.
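In Terraform the split might look roughly like this; resource names, the namespace, and the keys are illustrative:

```hcl
# Non-sensitive configuration lives in a ConfigMap.
resource "kubernetes_config_map" "app" {
  metadata {
    name      = "app-config"
    namespace = "app" # illustrative namespace
  }

  data = {
    LOG_LEVEL = "info"
  }
}

# Sensitive values live in a Secret; the provider base64-encodes data.
resource "kubernetes_secret" "app" {
  metadata {
    name      = "app-secrets"
    namespace = "app"
  }

  data = {
    DATABASE_PASSWORD = var.postgres_password
  }
}

# Inside the Deployment's container block, both become env vars:
#
#   env_from {
#     config_map_ref { name = kubernetes_config_map.app.metadata[0].name }
#   }
#   env_from {
#     secret_ref { name = kubernetes_secret.app.metadata[0].name }
#   }
```

From the application's point of view both sources are just environment variables, which is what makes the secret backend swappable.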
Monitoring with Helm
Rather than writing Kubernetes manifests for Prometheus and Grafana from
scratch, I used the kube-prometheus-stack Helm chart. This is the same
chart used in production Kubernetes clusters at companies running
thousands of pods.
```hcl
resource "helm_release" "prometheus_stack" {
  name       = "prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = kubernetes_namespace.monitoring.metadata[0].name
}
```
Once applied, Grafana comes pre-loaded with dashboards for CPU usage,
memory consumption, pod restarts and network traffic without any manual
configuration. You open the browser and the data is already there.
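Chart defaults can be overridden through the values argument of helm_release. For instance, the release could be extended to pin the chart version and set the Grafana admin password; the version number and variable name here are illustrative assumptions:

```hcl
resource "helm_release" "prometheus_stack" {
  name       = "prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = kubernetes_namespace.monitoring.metadata[0].name
  version    = "58.0.0" # pin for reproducible applies; illustrative

  # Override chart defaults; yamlencode keeps the values in HCL.
  values = [
    yamlencode({
      grafana = {
        adminPassword = var.grafana_admin_password
      }
    })
  ]
}
```

Pinning the chart version matters: without it, a re-apply months later can silently pull a newer chart with different defaults.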
The plan before the apply
One of Terraform's most useful features is terraform plan. Before making
any changes it shows you exactly what will be created, modified or
destroyed. In a team environment this output goes into a pull request so
everyone can see what infrastructure is about to change before it
happens. No surprises, no rollbacks, no 2am incident calls because
someone applied changes directly to production.
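A workflow that makes the plan reviewable saves it to a file, so that what the team reviewed is exactly what gets applied:

```shell
# Write the plan to a file rather than applying straight away.
terraform plan -out=tfplan

# Render the saved plan in human-readable form for the pull request.
terraform show tfplan

# Apply exactly the reviewed plan, not a freshly computed one.
terraform apply tfplan
```

If the infrastructure drifts between plan and apply, Terraform refuses to apply the stale plan file instead of doing something unexpected.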
What I learned
Helm charts save hours. Writing Kubernetes manifests for Prometheus from
scratch would take days and require deep knowledge of how everything
wires together internally. The Helm chart encapsulates all of that and
exposes only the configuration you actually care about.
State is what makes infrastructure tools production ready. Docker Compose
and shell scripts are fine for quick local work but they have no memory.
Terraform remembers what it built which means it can update it safely and
destroy it completely without you having to track anything manually.
Dependency ordering matters more than you think. If you do not declare
dependencies explicitly with depends_on, Terraform may try to create
resources in parallel before their dependencies exist. The error messages
when this happens are confusing because the resource technically exists,
it just is not ready yet.
Running it yourself
Clone the repo, copy the example vars file, fill in your own values,
then run terraform init followed by terraform apply. Everything provisions
automatically. When you are done, terraform destroy removes it all.
You will need Docker Desktop with Kubernetes enabled, Terraform installed,
and Helm with the Prometheus community repo added. The README in the repo
covers all the steps in detail.
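Condensed, the steps look like this; the tfvars filename is an assumption, so check the repo's example file for the exact name:

```shell
git clone https://github.com/aftabkh4n/terraform-idp
cd terraform-idp

# Copy the example vars file and fill in your own values.
cp terraform.tfvars.example terraform.tfvars

terraform init
terraform apply

# When you are done:
terraform destroy
```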
Source code: https://github.com/aftabkh4n/terraform-idp
If you have questions or want to talk through any of the decisions, drop
a comment below.