Series: From "Just Put It on a Server" to Production DevOps
Reading time: 15 minutes
Level: Intermediate
The YAML Duplication Problem
In Part 6, we deployed our SSPP platform to Kubernetes. It works! But look at your k8s/ directory:
```
k8s/
├── api-deployment.yaml
├── api-service.yaml
├── api-configmap.yaml
├── worker-deployment.yaml
├── worker-configmap.yaml
├── redis-deployment.yaml
├── redis-service.yaml
├── postgres-statefulset.yaml
├── postgres-service.yaml
└── ...
```
Now your manager says:
"We need dev, staging, and prod environments."
Your first thought: Copy-paste all YAML files three times:
```
k8s/
├── dev/
│   ├── api-deployment.yaml   # replicas: 1, resources: small
│   ├── api-service.yaml
│   └── ...
├── staging/
│   ├── api-deployment.yaml   # replicas: 2, resources: medium
│   ├── api-service.yaml
│   └── ...
└── prod/
    ├── api-deployment.yaml   # replicas: 5, resources: large
    ├── api-service.yaml
    └── ...
```
What changes between environments?
- Replica counts
- Resource limits
- Image tags
- Database URLs
- Domain names
- Storage sizes
What stays the same?
- Container ports
- Health check paths
- Service types
- Label selectors
- Volume mount paths
You're copying 80% identical YAML and changing 20%.
Then a bug is found: the API health check path should be /health, not /healthz.
Now you need to update it in three places. And you miss one. Staging is broken. Users are angry.
This is the YAML duplication problem.
What is Helm?
Helm is a package manager for Kubernetes applications.
Beginner mental model:
Helm is like apt-get (Linux) or Homebrew (Mac) for Kubernetes apps.
Instead of managing 20+ YAML files per environment, you create a Helm Chart—a template with variables:
```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: api
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          resources:
            limits:
              cpu: {{ .Values.resources.cpu }}
              memory: {{ .Values.resources.memory }}
```
Then you have different values files per environment:
```yaml
# values-dev.yaml
name: sspp-api
replicas: 1
image:
  repository: davidbrown77/sspp-api
  tag: dev-latest
resources:
  cpu: "500m"
  memory: "512Mi"
```

```yaml
# values-prod.yaml
name: sspp-api
replicas: 5
image:
  repository: davidbrown77/sspp-api
  tag: v1.2.3
resources:
  cpu: "2000m"
  memory: "4Gi"
```
Deploy with:
```bash
# Dev
helm install sspp-api ./charts/api -f values-dev.yaml

# Prod
helm install sspp-api ./charts/api -f values-prod.yaml
```
Same template, different values. DRY (Don't Repeat Yourself) for Kubernetes.
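You can preview what each environment will actually receive with `helm template`, which renders the chart locally without touching a cluster. Against the values files above, the rendered manifests differ only in the variable fields, sketched here as excerpts:

```yaml
# helm template ./charts/api -f values-dev.yaml   (excerpt)
replicas: 1
image: davidbrown77/sspp-api:dev-latest

# helm template ./charts/api -f values-prod.yaml  (excerpt)
replicas: 5
image: davidbrown77/sspp-api:v1.2.3
```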
Helm Concepts
Charts
A Helm Chart is a package of Kubernetes manifests.
Structure:
```
charts/api/
├── Chart.yaml        # Metadata (name, version)
├── values.yaml       # Default values
├── templates/        # Templated Kubernetes manifests
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── configmap.yaml
│   └── ingress.yaml
└── charts/           # Dependencies (sub-charts)
```
Values
values.yaml defines default configuration:
```yaml
replicaCount: 3
image:
  repository: davidbrown77/sspp-api
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 3000
resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi
```
Override with:
```bash
helm install sspp-api ./charts/api \
  --set replicaCount=5 \
  --set image.tag=v1.2.3
```
Or use a values file:
```bash
helm install sspp-api ./charts/api -f prod-values.yaml
```
Templates
Templates use Go templating syntax:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "api.fullname" . }}
  labels:
    {{- include "api.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "api.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "api.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
```
Template functions:
- `{{ .Values.replicaCount }}` - access values
- `{{ include "helper" . }}` - reuse templates
- `{{- if .Values.enabled }}` - conditionals
- `{{- range .Values.items }}` - loops
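These combine naturally. Here is a hypothetical fragment that only renders extra container ports when a feature is enabled (`metrics.enabled` and `extraPorts` are illustrative values keys, not part of the chart built below):

```yaml
{{- if .Values.metrics.enabled }}
ports:
  {{- range .Values.extraPorts }}
  - containerPort: {{ . }}
  {{- end }}
{{- end }}
```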
Creating a Helm Chart for SSPP API
Initialize Chart
```bash
cd infrastructure
helm create charts/api
```
This generates a basic chart structure.
Define Chart.yaml
```yaml
apiVersion: v2
name: sspp-api
description: Sales Signal Processing Platform API
type: application
version: 1.0.0
appVersion: "1.0.0"
```
Create Templates
templates/deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: api
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              value: {{ .Values.database.url }}
            - name: REDIS_URL
              value: {{ .Values.redis.url }}
            {{- if .Values.env }}
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
            {{- end }}
          resources:
            limits:
              cpu: {{ .Values.resources.limits.cpu }}
              memory: {{ .Values.resources.limits.memory }}
            requests:
              cpu: {{ .Values.resources.requests.cpu }}
              memory: {{ .Values.resources.requests.memory }}
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
```
templates/service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 3000
      protocol: TCP
      name: http
  selector:
    app: {{ .Values.name }}
```
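The chart structure shown earlier also lists `templates/ingress.yaml`. A minimal sketch, gated behind a flag so dev environments can leave it off (`ingress.enabled` and `ingress.host` are assumed values keys you would add to `values.yaml`):

```yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}
spec:
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.name }}
                port:
                  number: {{ .Values.service.port }}
{{- end }}
```

With `ingress.enabled: false` (or the key absent), Helm renders nothing for this file.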
Define Values
values.yaml:
```yaml
name: sspp-api
replicaCount: 3
image:
  repository: davidbrown77/sspp-api
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 3000
database:
  url: "postgresql://user:pass@postgres:5432/sspp"
redis:
  url: "redis://redis:6379"
resources:
  limits:
    cpu: "1000m"
    memory: "1Gi"
  requests:
    cpu: "500m"
    memory: "512Mi"
env: {}
```
values-dev.yaml:
```yaml
name: sspp-api
replicaCount: 1
image:
  tag: dev-latest
resources:
  limits:
    cpu: "500m"
    memory: "512Mi"
  requests:
    cpu: "250m"
    memory: "256Mi"
```
values-prod.yaml:
```yaml
name: sspp-api
replicaCount: 5
image:
  tag: v1.2.3
  pullPolicy: Always
resources:
  limits:
    cpu: "2000m"
    memory: "4Gi"
  requests:
    cpu: "1000m"
    memory: "2Gi"
env:
  LOG_LEVEL: "info"
  NODE_ENV: "production"
```
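The earlier directory tree sized staging between the two tiers (replicas: 2, medium resources), so it gets its own overrides file. A sketch; the `staging-latest` tag is an assumption:

```yaml
# values-staging.yaml
name: sspp-api
replicaCount: 2
image:
  tag: staging-latest
resources:
  limits:
    cpu: "1000m"
    memory: "1Gi"
  requests:
    cpu: "500m"
    memory: "512Mi"
```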
Deploying with Helm
Install
```bash
# Dev environment
helm install sspp-api ./charts/api -f values-dev.yaml

# Prod environment (different namespace)
helm install sspp-api ./charts/api \
  -f values-prod.yaml \
  -n production \
  --create-namespace
```
Upgrade
```bash
# Update image tag
helm upgrade sspp-api ./charts/api \
  --set image.tag=v1.3.0 \
  --reuse-values
```
Rollback
```bash
# Rollback to previous release
helm rollback sspp-api

# Rollback to specific revision
helm rollback sspp-api 3
```
List Releases
```bash
helm list
helm list -n production
```
Get Values
```bash
# See user-supplied values
helm get values sspp-api

# See all values, including chart defaults
helm get values sspp-api --all
```
Helm for Worker Service
Create a similar chart for the worker:
```bash
helm create charts/worker
```
Key differences from API:
- No Service (worker doesn't expose HTTP)
- Different environment variables
- Different resource requirements
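If you copy the API chart as a starting point instead of running `helm create`, you can gate the Service behind a flag rather than deleting the file (a sketch; `service.enabled` is an assumed key, set to `false` in the worker's values):

```yaml
# templates/service.yaml in the worker chart
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}
spec:
  selector:
    app: {{ .Values.name }}
  ports:
    - port: {{ .Values.service.port }}
{{- end }}
```

When the flag is false, the template renders empty and Helm creates no Service.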
values.yaml:
```yaml
name: sspp-worker
replicaCount: 2
image:
  repository: davidbrown77/sspp-worker
  tag: latest
redis:
  url: "redis://redis:6379"
database:
  url: "postgresql://user:pass@postgres:5432/sspp"
resources:
  limits:
    cpu: "1000m"
    memory: "2Gi"
  requests:
    cpu: "500m"
    memory: "1Gi"
```
Benefits of Helm
✅ DRY Principle:
- One template, multiple environments
- Change once, deploy everywhere
✅ Version Control:
- Track changes to chart versions
- Rollback to any previous release
✅ Parameterization:
- Override any value at deployment
- No hardcoded configuration
✅ Package Management:
- Share charts via Helm repositories
- Reuse community charts (PostgreSQL, Redis, etc.)
✅ Release Management:
- Track deployment history
- Easy rollbacks
What's Next?
Helm solved packaging and templating, but we still have deployment problems:
❌ Manual deployments - Someone must run helm upgrade
❌ No Git sync - Cluster state can drift from Git
❌ No automation - Still need CI/CD triggers
❌ Configuration drift - Manual kubectl changes go untracked
In Part 9, we'll add GitOps with ArgoCD.
You'll learn:
- Git as single source of truth (not your laptop)
- Automatic sync from Git → Cluster
- Self-healing (ArgoCD reverts manual changes)
- One-click rollbacks through UI
- Deployment history and audit trails
- Progressive delivery (blue/green, canary)
Spoiler: Push to Git → ArgoCD deploys automatically. This is GitOps.
Try It Yourself
Create Helm charts for all SSPP services:
- API chart - Deployment + Service + Ingress
- Worker chart - Deployment only
- Redis chart - Or use bitnami/redis from Artifact Hub
- PostgreSQL chart - Or use bitnami/postgresql
Test multi-environment setup:
# Deploy dev
helm install sspp ./charts/api -f values-dev.yaml -n dev --create-namespace
# Deploy prod
helm install sspp ./charts/api -f values-prod.yaml -n production --create-namespace
# Compare
kubectl get pods -n dev
kubectl get pods -n production
Bonus: Package your chart and push to a Helm repository (GitHub Pages, ChartMuseum, or OCI registry).
Next: Automating Deployments
In Part 9, we'll solve the manual deployment problem:
"How do we automatically deploy when code is merged to Git?"
We'll use Argo CD to implement GitOps:
- Git becomes the source of truth
- Changes are automatically synced to cluster
- Drift is detected and corrected
- Deployments are auditable and traceable
But spoiler: GitOps solves deployment automation, but not the full picture. We'll still need observability and production hardening.
Previous: Part 7: Terraform - Making Infrastructure Repeatable
Next: Part 9: GitOps with Argo CD
About the Author
Building this series to demonstrate real DevOps thinking for my Proton.ai application. If you're hiring for platform engineering roles, let's connect.
- GitHub: @daviesbrown
- LinkedIn: David Nwosu