Before diving into Terraform, we need to understand the mental shift that Infrastructure as Code (IaC) represents.
From Imperative to Declarative
Imperative Approach (what we did in Chapter 1):
# Step 1: Do this
kubectl create namespace ollama
# Step 2: Now do that
kubectl create secret generic credentials...
# Step 3: Then do this other thing
helm install ollama...
# Like a chef giving instructions: "First heat the oven,
# then mix the ingredients, then bake for 30 minutes"
Declarative Approach (Infrastructure as Code):
# Describe the desired end state
resource "kubernetes_namespace" "ollama" {
metadata {
name = "ollama"
}
}
resource "helm_release" "ollama" {
name = "ollama"
namespace = kubernetes_namespace.ollama.metadata[0].name
# ...
}
# Like a shopping list: "I need flour, eggs, sugar"
# The system figures out HOW to get it
The difference is subtle but profound:
- Imperative: You say how to do it
- Declarative: You say what you want
The Three Pillars of IaC
1. Versioning
git log infrastructure/
# Complete change history
# Who changed what, when, and why
2. Reproducibility
git clone repo
terraform apply
# Identical infrastructure anywhere
3. Auditing
git blame main.tf
# Each line traced to its author
# Pull requests = infrastructure review
Our First Step: Terraform + Kubernetes Provider
Let's start with the most direct approach, using Terraform to manage Kubernetes resources directly.
Project Structure
kubernetes-terraform/
├── main.tf # Main configuration
├── variables.tf # Input variables
├── outputs.tf # Output values
├── terraform.tfvars # Variable values (don't commit!)
└── .gitignore # Ignore secrets and state
Initial Configuration: Provider
# main.tf
terraform {
required_version = ">= 1.0"
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.23"
}
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "minikube"
}
What's happening here?
- terraform block: we declare requirements
  - Minimum Terraform version
  - Providers we'll use and their versions
- provider "kubernetes": we configure the connection
  - config_path: where the kubeconfig lives (credentials)
  - config_context: which cluster to use (you can have several)

Terraform will read your ~/.kube/config (the same file that kubectl uses) and authenticate to the cluster.
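If you juggle more than one cluster, the context is a natural candidate for a variable. A minimal sketch (the kube_context variable name is my own, not from the original project) that would replace the hardcoded provider block above:
# variables.tf
variable "kube_context" {
  description = "kubeconfig context to deploy into"
  type        = string
  default     = "minikube"
}

# main.tf
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = var.kube_context
}
Switching clusters then becomes terraform apply -var="kube_context=staging" instead of editing main.tf.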
Creating Namespaces: The Simplest Resource
resource "kubernetes_namespace" "ollama" {
metadata {
name = "ollama"
labels = {
managed-by = "terraform"
app = "ollama"
env = "development"
}
}
}
resource "kubernetes_namespace" "librechat" {
metadata {
name = "librechat"
labels = {
managed-by = "terraform"
app = "librechat"
env = "development"
}
}
}
Important concepts:
Resource: Terraform's basic unit
resource "TYPE" "LOCAL_NAME" {
# configuration
}
- TYPE: which resource to create (kubernetes_namespace)
- LOCAL_NAME: how you reference it elsewhere in the Terraform code
- The actual name in Kubernetes comes from metadata.name

Labels: Organizational metadata
- managed-by = "terraform": indicates who manages this resource
- Useful for filtering: kubectl get ns -l managed-by=terraform
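The project layout above also lists an outputs.tf that we haven't touched yet. As a minimal sketch (the output names are illustrative), it can expose the namespace names so other tools, or other Terraform configurations, can read them with terraform output:
# outputs.tf
output "ollama_namespace" {
  description = "Namespace managed by Terraform for Ollama"
  value       = kubernetes_namespace.ollama.metadata[0].name
}

output "librechat_namespace" {
  description = "Namespace managed by Terraform for LibreChat"
  value       = kubernetes_namespace.librechat.metadata[0].name
}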
Managing Secrets: Sensitive Variables
# variables.tf
variable "jwt_secret" {
description = "JWT secret for LibreChat"
type = string
sensitive = true
}
variable "jwt_refresh_secret" {
description = "JWT refresh secret for LibreChat"
type = string
sensitive = true
}
variable "creds_key" {
description = "Credentials encryption key"
type = string
sensitive = true
}
variable "creds_iv" {
description = "Credentials initialization vector"
type = string
sensitive = true
}
# main.tf
resource "kubernetes_secret" "librechat_credentials" {
metadata {
name = "librechat-credentials-env"
namespace = kubernetes_namespace.librechat.metadata[0].name
}
data = {
JWT_SECRET = var.jwt_secret
JWT_REFRESH_SECRET = var.jwt_refresh_secret
CREDS_KEY = var.creds_key
CREDS_IV = var.creds_iv
MONGO_URI = "mongodb://librechat-mongodb:27017/LibreChat"
MEILI_HOST = "http://librechat-meilisearch:7700"
OLLAMA_BASE_URL = "http://ollama.ollama.svc.cluster.local:11434"
}
type = "Opaque"
}
Security patterns:
- Sensitive variables:
sensitive = true
# Terraform won't show the values in logs
- Separate file (terraform.tfvars):
jwt_secret         = "abc123..."
jwt_refresh_secret = "def456..."
creds_key          = "ghi789..."
creds_iv           = "jkl012..."
CRITICAL: Add terraform.tfvars to your .gitignore!
- Dynamic references:
namespace = kubernetes_namespace.librechat.metadata[0].name
# Terraform creates the namespace FIRST, then uses its name
# Automatic dependency tracking!
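Since that warning is easy to forget, here is a minimal sketch of the .gitignore listed in the project structure, following the usual community conventions for Terraform repositories:
# .gitignore
.terraform/
*.tfstate
*.tfstate.backup
terraform.tfvars
*.auto.tfvars
crash.log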
ConfigMaps: Versioned Configuration
resource "kubernetes_config_map" "librechat_config" {
metadata {
name = "librechat-config"
namespace = kubernetes_namespace.librechat.metadata[0].name
}
data = {
"librechat.yaml" = <<-EOT
version: 1.1.5
cache: true
endpoints:
custom:
- name: "Ollama"
apiKey: "ollama"
baseURL: "http://ollama.ollama.svc.cluster.local:11434/v1"
models:
default:
- "llama2:latest"
fetch: true
titleConvo: true
titleModel: "llama2:latest"
summarize: false
forcePrompt: false
modelDisplayLabel: "Ollama"
addParams:
temperature: 0.7
max_tokens: 2000
EOT
}
}
Heredoc syntax (<<-EOT ... EOT):
- Allows multi-line strings
- The - variant strips the common leading whitespace, so the YAML can stay indented to match the HCL
- Perfect for YAML inside HCL
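If the heredoc grows unwieldy, the same ConfigMap can be written without it. The sketch below (trimmed to a few fields, and meant to replace the heredoc version above rather than coexist with it) uses Terraform's built-in yamlencode() to render native HCL into YAML; alternatively, file("${path.module}/librechat.yaml") loads the YAML from a separate file:
resource "kubernetes_config_map" "librechat_config" {
  metadata {
    name      = "librechat-config"
    namespace = kubernetes_namespace.librechat.metadata[0].name
  }

  data = {
    # yamlencode turns the HCL object into valid YAML at plan time
    "librechat.yaml" = yamlencode({
      version = "1.1.5"
      cache   = true
      endpoints = {
        custom = [{
          name    = "Ollama"
          apiKey  = "ollama"
          baseURL = "http://ollama.ollama.svc.cluster.local:11434/v1"
        }]
      }
    })
  }
}
The upside is that terraform fmt and terraform validate now check the structure for you, and indentation mistakes inside the string become impossible.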
Advantage over manual:
# Before:
kubectl create configmap librechat-config --from-file=config.yaml
# Now:
git diff librechat_config.tf
# See exactly what changed in the configuration
The Terraform Workflow
# 1. Initialize (first time)
terraform init
# Downloads providers, prepares backend
# 2. Validate syntax
terraform validate
# Checks if HCL is correct
# 3. Format code
terraform fmt
# Standardizes formatting
# 4. Plan changes
terraform plan
# Preview what will happen
# 5. Apply changes
terraform apply
# Creates/updates resources
# 6. View current state
terraform state list
# Lists all managed resources
What terraform plan shows:
Terraform will perform the following actions:
# kubernetes_namespace.ollama will be created
+ resource "kubernetes_namespace" "ollama" {
+ id = (known after apply)
+ metadata {
+ generation = (known after apply)
+ name = "ollama"
+ labels = {
+ "app" = "ollama"
+ "env" = "development"
+ "managed-by" = "terraform"
}
}
}
# kubernetes_secret.librechat_credentials will be created
+ resource "kubernetes_secret" "librechat_credentials" {
+ data = (sensitive value)
+ id = (known after apply)
+ type = "Opaque"
}
Plan: 2 to add, 0 to change, 0 to destroy.
Key points:
- + = Will be created
- ~ = Will be updated in place
- - = Will be destroyed
- (sensitive value) = Hidden for security
- (known after apply) = Terraform doesn't know the value yet (it will be generated)
Deploying a Complete Application
Here's where things get... verbose.
resource "kubernetes_deployment" "ollama" {
metadata {
name = "ollama"
namespace = kubernetes_namespace.ollama.metadata[0].name
labels = {
app = "ollama"
}
}
spec {
replicas = 1
selector {
match_labels = {
app = "ollama"
}
}
template {
metadata {
labels = {
app = "ollama"
}
}
spec {
container {
name = "ollama"
image = "ollama/ollama:latest"
port {
container_port = 11434
}
volume_mount {
name = "ollama-data"
mount_path = "/root/.ollama"
}
resources {
limits = {
"nvidia.com/gpu" = "1"
}
requests = {
cpu = "1"
memory = "4Gi"
}
}
env {
name = "OLLAMA_HOST"
value = "0.0.0.0"
}
}
volume {
name = "ollama-data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim.ollama_data.metadata[0].name
}
}
}
}
}
}
resource "kubernetes_persistent_volume_claim" "ollama_data" {
metadata {
name = "ollama-data"
namespace = kubernetes_namespace.ollama.metadata[0].name
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
resource "kubernetes_service" "ollama" {
metadata {
name = "ollama"
namespace = kubernetes_namespace.ollama.metadata[0].name
}
spec {
selector = {
app = "ollama"
}
port {
port = 11434
target_port = 11434
}
type = "ClusterIP"
}
}
resource "kubernetes_ingress_v1" "ollama" {
metadata {
name = "ollama-ingress"
namespace = kubernetes_namespace.ollama.metadata[0].name
annotations = {
"nginx.ingress.kubernetes.io/rewrite-target" = "/"
}
}
spec {
ingress_class_name = "nginx"
rule {
host = "ollama.glukas.space"
http {
path {
path = "/"
path_type = "Prefix"
backend {
service {
name = kubernetes_service.ollama.metadata[0].name
port {
number = 11434
}
}
}
}
}
}
}
}
That's 160+ lines for ONE application!
Compare:
- Helm (Chapter 1): 1 command + ~10 lines of values.yaml
- kubectl manual: ~200 lines of YAML
- Terraform + K8s Provider: ~160 lines of HCL
Terraform didn't buy us much simplicity here.
The Problems Emerge: Why This Doesn't Scale
Now comes the crucial part—the problems that only appear when you try to use this for real.
Problem 1: State Explosion
terraform apply
# Creates resources...
terraform state list
Output:
kubernetes_deployment.ollama
kubernetes_ingress_v1.ollama
kubernetes_namespace.ollama
kubernetes_persistent_volume_claim.ollama_data
kubernetes_service.ollama
The problem:
Kubernetes creates resources automatically:
- Deployment → creates a ReplicaSet
- ReplicaSet → creates Pods
- Service → creates Endpoints and EndpointSlices
Terraform only knows about the resources you declared, but for each of them it stores the full serialized spec in state, including every field the API server fills in for you, and it has to refresh all of them against the cluster on every run. The controller-created children, meanwhile, are invisible to it.
Consequences:
- Giant state file
ls -lh terraform.tfstate
# 2.3MB for just 2 applications
- Slow plans
terraform plan
# Refreshes the state of every managed resource against the API
# Takes 30+ seconds
- Fragility
  - Server-populated defaults and controller-managed fields show up as spurious diffs
  - terraform refresh re-syncs the state, but it's expensive
Problem 2: Inevitable Drift
# Deployed with 1 replica
terraform apply
# User scales manually (common in production)
kubectl scale deployment ollama --replicas=3 -n ollama
# At the next plan, Terraform refreshes and wants to undo the change
terraform plan
# ~ replicas = 3 -> 1
# Meanwhile kubectl shows 3 Pods running
kubectl get pods -n ollama
# ollama-xxx-aaa
# ollama-xxx-bbb
# ollama-xxx-ccc
Why is this a problem?
Terraform reconciles two things:
- State file (snapshot of what it created)
- Code (desired state)
Changes made directly with kubectl aren't in either of them. Terraform only notices them when it refreshes, at the start of a plan or apply, and its reaction is to revert them.
In dynamic environments:
- HPAs (Horizontal Pod Autoscalers) change replicas
- Teams do hotfixes via kubectl
- CI/CD pipelines update deployments
Terraform is either constantly out of date or constantly fighting the cluster, unless you scatter ignore_changes blocks across the code.
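Here's what that escape hatch looks like, as a sketch added to the kubernetes_deployment.ollama resource from earlier. The trade-off: Terraform stops enforcing the replica count at all.
resource "kubernetes_deployment" "ollama" {
  # ... metadata and spec exactly as before ...

  lifecycle {
    # Let an HPA (or a human with kubectl) own the replica count
    ignore_changes = [
      spec[0].replicas,
    ]
  }
}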
Problem 3: No Natural Rollback
With Helm:
# Current version works
helm list
# ollama 1 deployed
# Deploy new version
helm upgrade ollama ollama-helm/ollama -f new-values.yaml
# ollama 2 deployed
# New version broke!
helm rollback ollama
# ollama 3 deployed (back to rev 1 state)
Helm maintains release history. Rollback is instantaneous.
With Terraform:
# Initial deploy
terraform apply
# Change in code
vim main.tf
terraform apply
# It broke! How to go back?
# Option 1: Git revert
git revert HEAD
terraform apply
# Can take minutes to recreate resources
# Option 2: State manipulation (dangerous)
terraform state rm ...
terraform import ...
# Risky and manual
There's no native concept of "release" or "revision".
Problem 4: Complex Lifecycle
Kubernetes has resources that manage other resources:
- Deployment manages ReplicaSets
- ReplicaSet manages Pods
- Service manages Endpoints
Terraform wasn't designed for this. It expects to manage resources directly, not via controllers.
Practical example:
resource "kubernetes_deployment" "ollama" {
spec {
replicas = 2
}
}
# Terraform creates:
# 1. Deployment
# 2. ReplicaSet (created by Deployment controller)
# 3. Pods (created by ReplicaSet controller)
# If you destroy the Deployment:
terraform destroy -target=kubernetes_deployment.ollama
# Terraform only deletes the Deployment object; the ReplicaSet and Pods
# are removed by Kubernetes garbage collection, outside Terraform's control
Problem 5: In-Place Updates vs Replacements
# Simple change: update image
resource "kubernetes_deployment" "ollama" {
spec {
template {
spec {
container {
image = "ollama/ollama:v0.1.21" # was v0.1.20
}
}
}
}
}
terraform plan
For a mutable field like the image, the plan looks the way you'd expect:
~ kubernetes_deployment.ollama will be updated in-place
But touch a field Kubernetes treats as immutable (the selector's match_labels, for example) and the plan flips to:
-/+ kubernetes_deployment.ollama must be replaced
Terraform's answer to immutability is to destroy the Deployment and create a new one, taking the workload down in the process. You end up reading every plan carefully just to make sure a "small" change won't be turned into a replacement.
When to Use Each Approach
Comparison Table
| Aspect | kubectl Manual | Terraform + K8s | Helm | Verdict |
|---|---|---|---|---|
| Initial setup | Instant | Configuration | 1 command | Helm wins |
| Versioning | None | Git | Via Git | Terraform/Helm |
| Reproducibility | Low | High | High | Terraform/Helm |
| Rollback | Manual | Via Git | Native | Helm wins |
| State management | N/A | Complex | Simple | Helm wins |
| Drift detection | None | Limited | Good | Helm wins |
| Apply time | Fast | Slow | Fast | kubectl/Helm |
| Learning curve | Medium | High | Medium | kubectl/Helm |
When Terraform + K8s Provider Makes Sense
Use for:
- Base infrastructure resources:
# Namespaces
resource "kubernetes_namespace" "team_apps" { }
# RBAC
resource "kubernetes_role" "developer" { }
resource "kubernetes_role_binding" "dev_binding" { }
# Storage Classes
resource "kubernetes_storage_class" "fast_ssd" { }
- Resources that rarely change:
  - Network Policies
  - Resource Quotas (see the sketch after this list)
  - Limit Ranges
  - Priority Classes
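For instance, a ResourceQuota changes maybe a few times a year, which makes it a comfortable fit for Terraform. A sketch, with illustrative names and limits, building on the team_apps namespace stub above:
resource "kubernetes_resource_quota" "team_quota" {
  metadata {
    name      = "team-quota"
    namespace = kubernetes_namespace.team_apps.metadata[0].name
  }

  spec {
    hard = {
      "requests.cpu"    = "4"
      "requests.memory" = "8Gi"
      "pods"            = "20"
    }
  }
}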
Integration with other providers:
# Create AWS infra and configure K8s in one go
resource "aws_eks_cluster" "main" { }
resource "kubernetes_namespace" "app" {
depends_on = [aws_eks_cluster.main]
}
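In that scenario the Kubernetes provider is typically configured from the cluster's own outputs instead of a local kubeconfig, so a single apply can create the cluster and then populate it. A sketch, assuming the aws_eks_cluster.main resource above:
data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.main.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}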
Avoid for:
- Complete applications (use Helm)
- Frequently changing resources (use GitOps)
- Multiple interdependent components (use Helm charts)
- When an official chart already exists (don't reinvent)
Conclusion of Chapter 2: The Middle Ground Exists
Terraform + Kubernetes Provider isn't a bad approach—it's an incomplete approach.
What we learned:
Terraform is excellent for:
- Versioning and auditing
- Guaranteed reproducibility
- Multi-cloud integration
- Base resource management
Terraform is problematic for:
- Complex deployments (too verbose)
- Resources managed by controllers (state explosion)
- Frequently changing applications (drift)
- Rollbacks and lifecycle management
The natural question emerges:
"Is there a way to combine the best of both worlds?
Terraform's versioning + Helm's simplicity?"
Answer: YES!
And that's exactly what we'll explore in Chapter 3.
Next Chapter: A Better Abstraction
In Chapter 3, we'll discover that Terraform can manage Helm releases. Instead of describing each Kubernetes resource manually, we'll treat Helm charts as "deployable units" and use Terraform only to orchestrate.
Continue to:
Chapter 3: Terraform + Helm — The Right Abstraction →
Additional Resources
To dive deeper into Terraform:
- Terraform: Up & Running — The definitive book
- Terraform Best Practices — Community patterns
- HashiCorp Learn — Official tutorials
To better understand Kubernetes:
- Kubernetes Patterns — Design patterns
- Controllers and Operators — How K8s works internally