## Introduction
Your docker-compose.yml got you from zero to production fast. A single file, one command, and your entire stack was running. But now you're hitting the limits: you need rolling deployments, auto-scaling, health checks that actually restart failed services, and the ability to run across multiple nodes. It's time to move to Kubernetes.
The migration doesn't have to be painful. Most Docker Compose concepts map directly to Kubernetes objects, and tools like Kompose can automate the initial conversion. But there are important differences in how networking, storage, configuration, and service discovery work that you need to understand to avoid debugging mysterious failures.
This guide walks through migrating a real-world Docker Compose application to Kubernetes, covering the conceptual mapping, manual conversion, Kompose automation, Helm charts, and the gotchas that trip up everyone on their first migration.
## Concept Mapping: Compose to Kubernetes
Before touching any files, understand how Compose concepts translate:
| Docker Compose | Kubernetes | Notes |
|---|---|---|
| `services:` | Deployment + Service | One Deployment per service, one Service for networking |
| `image:` | `spec.containers[].image` | Same container images work unchanged |
| `ports:` | Service (ClusterIP, NodePort, LoadBalancer) | Networking is fundamentally different |
| `volumes:` (named) | PersistentVolumeClaim (PVC) | Must define storage class |
| `volumes:` (bind mount) | ConfigMap or hostPath | ConfigMaps for config files; avoid hostPath |
| `environment:` | `env:` or ConfigMap/Secret | Secrets should use K8s Secrets |
| `depends_on:` | No direct equivalent | Use init containers or readiness probes |
| `restart: always` | Default behavior | K8s restarts crashed containers automatically |
| `networks:` | NetworkPolicy (optional) | All pods can talk by default |
| `deploy.replicas:` | `spec.replicas:` | Direct mapping |
The biggest conceptual shift: in Compose, services talk to each other by service name on a Docker network. In Kubernetes, services talk via DNS names like `my-service.my-namespace.svc.cluster.local`, but the short name `my-service` works within the same namespace, so most connection strings don't need to change.
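For example, if the database later moves to its own namespace (a hypothetical `data` namespace here), only the hostname in the connection string needs qualifying; same-namespace lookups keep the Compose-era short name:

```yaml
env:
  # Same namespace: the short name from docker-compose.yml still resolves
  - name: REDIS_URL
    value: "redis://redis:6379"
  # Different namespace (hypothetical `data` namespace): qualify the hostname
  - name: DATABASE_URL
    value: "postgres://app:secret@postgres.data.svc.cluster.local:5432/myapp"
```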
## Starting Point: A Real Docker Compose Application
Let's migrate a typical web application stack:
```yaml
# docker-compose.yml
services:
  api:
    image: myapp/api:1.2.0
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
      NODE_ENV: production
      JWT_SECRET: my-jwt-secret-key
    depends_on:
      - postgres
      - redis
    restart: always
    deploy:
      replicas: 2
  worker:
    image: myapp/worker:1.2.0
    environment:
      DATABASE_URL: postgres://app:secret@postgres:5432/myapp
      REDIS_URL: redis://redis:6379
    depends_on:
      - postgres
      - redis
    restart: always
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"
    restart: always
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redisdata:/data
    restart: always
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./certs:/etc/nginx/certs:ro
    depends_on:
      - api
    restart: always
volumes:
  pgdata:
  redisdata:
```
## Manual Conversion: The Right Way
Let's convert each service. I'll explain what changes and why.
**Secrets first.** Never put passwords in plain YAML:
```yaml
# k8s/secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
stringData:
  database-url: "postgres://app:secret@postgres:5432/myapp"
  jwt-secret: "my-jwt-secret-key"
  postgres-password: "secret"
```
In production, use a secrets manager (AWS Secrets Manager, HashiCorp Vault) with the External Secrets Operator instead of storing secrets in YAML files.
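With the External Secrets Operator, the cluster pulls secret values from the external store and materializes them as a regular K8s Secret. A minimal sketch; the store name (`aws-sm`) and the remote key path are assumptions for illustration:

```yaml
# Sketch: External Secrets Operator pulling from AWS Secrets Manager.
# `aws-sm` and `myapp/database-url` are placeholder names.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-sm
    kind: ClusterSecretStore
  target:
    name: app-secrets        # the K8s Secret it creates/keeps in sync
  data:
    - secretKey: database-url
      remoteRef:
        key: myapp/database-url
```

Your Deployments keep referencing `app-secrets` exactly as before; only the source of truth changes.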
**ConfigMap for non-sensitive configuration:**
```yaml
# k8s/configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  NODE_ENV: "production"
  REDIS_URL: "redis://redis:6379"
```
**API Deployment and Service:**
```yaml
# k8s/api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myapp/api:1.2.0
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: jwt-secret
          envFrom:
            - configMapRef:
                name: app-config
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 20
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 3000
      targetPort: 3000
  type: ClusterIP
```
Key differences from Compose:

- **Health probes replace `depends_on`.** Kubernetes checks readiness before sending traffic and restarts containers that fail liveness checks.
- **Resource limits** prevent a single service from consuming all node resources. Compose has this via `deploy.resources`, but few teams use it.
- **Selector labels** (`app: api`) link the Deployment to its Service.
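The worker converts the same way, minus the Service and ports, since it consumes jobs rather than serving traffic. A minimal sketch following the api pattern; the resource sizes are illustrative, and probes are omitted because a queue worker may not expose an HTTP endpoint (add them if yours does):

```yaml
# k8s/worker-deployment.yml (sketch; resource sizes are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  labels:
    app: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: myapp/worker:1.2.0
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-url
          envFrom:
            - configMapRef:
                name: app-config
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```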
**PostgreSQL StatefulSet:**
```yaml
# k8s/postgres-statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: "myapp"
            - name: POSTGRES_USER
              value: "app"
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: postgres-password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3
        resources:
          requests:
            storage: 20Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
  clusterIP: None  # Headless service for StatefulSet
```
Why StatefulSet instead of Deployment? Databases need stable network identities and persistent storage that survives pod restarts. StatefulSets guarantee this; Deployments don't.
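Redis gets the same treatment, since its append-only file also needs to survive restarts. A minimal sketch (the storage size is illustrative); note that the Compose `command:` becomes `args:` here, because the Redis image's entrypoint already invokes the server:

```yaml
# k8s/redis-statefulset.yml (sketch; storage size is illustrative)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          args: ["redis-server", "--appendonly", "yes"]  # Compose `command:`
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redisdata
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: redisdata
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3
        resources:
          requests:
            storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
  clusterIP: None
```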
**Ingress instead of Nginx:**
```yaml
# k8s/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-connections: "10"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3000
```
You no longer need your own Nginx container. The Ingress controller (installed cluster-wide) handles TLS termination, routing, and load balancing. cert-manager automatically provisions Let's Encrypt certificates.
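The `letsencrypt-prod` issuer referenced by the annotation above is a cluster-wide cert-manager resource you define once. A sketch, assuming the HTTP-01 challenge and the nginx ingress class; the email address is a placeholder:

```yaml
# k8s/cluster-issuer.yml (sketch; email is a placeholder)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # ACME account key storage
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```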
## Using Kompose for Quick Conversion
Kompose can generate a starting point from your docker-compose.yml:
```bash
# Install Kompose
curl -L https://github.com/kubernetes/kompose/releases/download/v1.34.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/

# Convert docker-compose.yml to K8s manifests
kompose convert

# Or output to a specific directory
kompose convert -o k8s/

# Generate a Helm chart instead
kompose convert --chart
```
Kompose generates working manifests, but they'll need adjustments:
- It won't create StatefulSets for databases (you'll get Deployments with PVCs)
- Health probes won't be configured
- Resource limits won't be set
- Secrets will be plain ConfigMaps
- No Ingress resources will be generated
Use Kompose as a starting point, then manually refine each resource.
## Packaging with Helm
Once your manifests work, package them as a Helm chart for easier deployment across environments:
```bash
helm create myapp
```
This creates a chart structure. Replace the template files with your manifests and parameterize environment-specific values:
```yaml
# myapp/values.yaml
api:
  replicas: 2
  image:
    repository: myapp/api
    tag: "1.2.0"
  resources:
    requests:
      memory: "256Mi"
      cpu: "250m"
    limits:
      memory: "512Mi"
      cpu: "500m"
worker:
  replicas: 1
  image:
    repository: myapp/worker
    tag: "1.2.0"
postgres:
  storage: 20Gi
  storageClass: gp3
ingress:
  enabled: true
  host: api.example.com
  tls: true
```
```yaml
# myapp/templates/api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "myapp.fullname" . }}-api
spec:
  replicas: {{ .Values.api.replicas }}
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api   # must match the selector above
    spec:
      containers:
        - name: api
          image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag }}"
          resources:
            {{- toYaml .Values.api.resources | nindent 12 }}
```
Deploy to different environments by overriding values:
```bash
# Staging
helm install myapp ./myapp -f values-staging.yaml -n staging

# Production
helm install myapp ./myapp -f values-production.yaml -n production

# Upgrade
helm upgrade myapp ./myapp -f values-production.yaml -n production
```
## What Changes and What Doesn't
**Things that stay the same:**
- Your Docker images work unchanged
- Application code doesn't need modification
- Service-to-service communication via DNS names (if using the same service names)
- Environment variables for configuration
**Things that change:**

- **Networking:** No more `ports:` mapping to the host. Services communicate internally; Ingress handles external traffic.
- **Storage:** Named volumes become PersistentVolumeClaims backed by cloud storage providers. Bind mounts become ConfigMaps or Secrets.
- **Configuration:** Environment files become ConfigMaps and Secrets. The `env_file:` directive doesn't exist.
- **Startup ordering:** `depends_on` doesn't exist. Use init containers to wait for dependencies, or better yet, make your application retry connections on startup.
- **Logging:** `docker-compose logs` becomes `kubectl logs`. Consider deploying a log aggregator (Loki, EFK stack).
- **Scaling:** `docker-compose up --scale api=5` becomes `kubectl scale deployment api --replicas=5`, or a HorizontalPodAutoscaler for automatic scaling.
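For the automatic route, a HorizontalPodAutoscaler targeting the api Deployment might look like this (the replica bounds and CPU threshold are illustrative). Note that CPU-based autoscaling only works if the container declares resource requests, which the api Deployment above does:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% of requested CPU
```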
## Common Migration Pitfalls
**1. Forgetting to set resource limits.** Without limits, a memory leak in one pod can trigger OOM kills across the entire node. Always set both requests and limits.
**2. Using Deployments for databases.** Deployment replicas are interchangeable: they either contend for a single PVC (which fails outright with ReadWriteOnce volumes) or end up with data scattered across separate volumes. Use StatefulSets or, better yet, a managed database service (RDS, Cloud SQL).
**3. Hardcoded localhost references.** In Compose, services on the same host can sometimes use `localhost`. In Kubernetes, each pod has its own network namespace. Change `localhost` to the service name.
**4. Not implementing health probes.** Without readiness probes, Kubernetes sends traffic to pods that aren't ready. Without liveness probes, crashed containers aren't restarted. Both are critical.
**5. Ignoring the init container pattern.** If your API needs the database to be ready before starting:
```yaml
initContainers:
  - name: wait-for-postgres
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z postgres 5432; do echo waiting for postgres; sleep 2; done']
```
**6. Using `:latest` tags.** In Compose this is common; in Kubernetes it's dangerous. Kubernetes caches images and won't pull a new `:latest` unless you set `imagePullPolicy: Always`, which slows down every pod start. Use specific version tags.
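Concretely, pin the tag and rely on the default pull policy; for any tag other than `:latest`, Kubernetes defaults `imagePullPolicy` to `IfNotPresent`:

```yaml
containers:
  - name: api
    image: myapp/api:1.2.0          # pinned tag, never :latest
    imagePullPolicy: IfNotPresent   # the default for non-:latest tags
```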
## Migration Checklist
Before cutting over to Kubernetes:
- [ ] All Docker images are in a registry accessible from the cluster
- [ ] Secrets are stored in K8s Secrets (not plain ConfigMaps or environment variables in YAML)
- [ ] Resource requests and limits are set for every container
- [ ] Readiness and liveness probes are configured
- [ ] PersistentVolumeClaims are created for stateful services
- [ ] Ingress is configured with TLS
- [ ] Application retries database/cache connections on startup
- [ ] Logging is accessible via `kubectl logs` or a log aggregator
- [ ] CI/CD pipeline deploys to K8s (no manual `kubectl apply`)
- [ ] Monitoring is in place (Prometheus + Grafana or cloud equivalent)
- [ ] Run in staging for at least one week before production cutover
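The "retries database/cache connections on startup" item is the cheapest way to make the loss of `depends_on` painless. A language-agnostic sketch in Python of exponential-backoff retry; `connect` is a stand-in for your real client factory (e.g. a `psycopg2.connect` call):

```python
import time

def connect_with_retry(connect, attempts=10, base_delay=0.5):
    """Call `connect()` until it succeeds, with exponential backoff.

    `connect` is a stand-in for your real client factory, e.g.
    `lambda: psycopg2.connect(DATABASE_URL)`.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:
            if attempt == attempts:
                raise  # give up; let Kubernetes restart the pod
            delay = base_delay * 2 ** (attempt - 1)
            print(f"connect failed ({exc}); retry {attempt}/{attempts} in {delay:.1f}s")
            time.sleep(delay)

# Demo with a flaky fake connection that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("not ready")
    return "connection"

print(connect_with_retry(flaky, base_delay=0.01))  # → connection
```

If the dependency never comes up, the final raise crashes the pod, and Kubernetes' restart backoff takes over, which is exactly the behavior you want.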
## Need Help with Your DevOps?
Migrating from Docker Compose to Kubernetes is a significant step - and getting the architecture right from the start saves months of rework. At InstaDevOps, we handle Kubernetes migrations, cluster management, CI/CD pipelines, and ongoing infrastructure operations starting at $2,999/month.
Book a free 15-minute consultation to plan your migration: https://calendly.com/instadevops/15min