David Nwosu

Part 6: Kubernetes from First Principles (No Magic)

Series: From "Just Put It on a Server" to Production DevOps

Reading time: 15 minutes

Level: Intermediate


The Kubernetes Mindset Shift

In Part 5, we broke Docker Compose in creative ways and felt the pain of manual orchestration. Now it's time to fix it.

But first, we need a mindset shift.

Before Kubernetes:

"I have a server. I'll SSH in and run containers on it."

With Kubernetes:

"I have a cluster. I'll declare what I want running. Kubernetes makes it happen."

This is the key insight: Kubernetes is declarative, not imperative.

You don't tell it how to do things. You tell it what you want, and it figures out the how.

# You write this
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 5

# Kubernetes does this:
# ✓ Schedules 5 pods across nodes
# ✓ Monitors them continuously
# ✓ Restarts if they crash
# ✓ Redistributes if a node dies
# ✓ Rolls out updates gracefully

You describe desired state. Kubernetes maintains it.


Core Concepts (No Jargon)

Pods: The Smallest Unit

A Pod is one or more containers that run together on the same machine.

apiVersion: v1
kind: Pod
metadata:
  name: my-api
spec:
  containers:
  - name: api
    image: davidbrown77/sspp-api:latest
    ports:
    - containerPort: 3000

Think of a Pod as a wrapper around containers that share:

  • Network (same IP address)
  • Storage (can mount same volumes)
  • Lifecycle (started/stopped together)

Why not just use containers directly? Because you need more control:

  • Sidecars: Add a logging container next to your app
  • Init containers: Run setup before main container starts
  • Shared volumes: Multiple containers access same files

Real-world analogy: A Pod is like a house. Containers are the rooms. They share the address (IP), utilities (network), and exist together.
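
To make the sidecar, init-container, and shared-volume ideas concrete, here is a minimal sketch you can apply inline. The busybox image, Pod name, and /shared path are illustrative placeholders, not part of SSPP:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-sidecar
spec:
  # Runs to completion before the main containers start
  initContainers:
  - name: init-setup
    image: busybox:1.36
    command: ["sh", "-c", "echo 'setup done' > /shared/init.log"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  # Main "app" container writes to the shared volume
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /shared/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  # Sidecar: tails the log the app container writes
  - name: log-tailer
    image: busybox:1.36
    command: ["sh", "-c", "touch /shared/app.log; tail -f /shared/app.log"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  volumes:
  - name: shared
    emptyDir: {}
EOF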

Deployments: Managing Replicas

You almost never create Pods directly. You create a Deployment, which manages Pods for you.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: davidbrown77/sspp-api:latest

What this creates:

  • 3 identical Pods running your API
  • ReplicaSet managing them
  • Automatic recreation if Pods die
  • Rolling update strategy built-in

The magic: kill a Pod and Kubernetes immediately creates a new one, keeping the count at 3 replicas.
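
If you want to peek under the hood, the Deployment creates a ReplicaSet, and the ReplicaSet owns the Pods (this assumes the Deployment above has been applied):

# The Deployment creates a ReplicaSet, which in turn owns the Pods
kubectl get replicaset -l app=api

# Desired vs. current replica counts live on the Deployment itself
kubectl get deployment api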

Services: Stable Networking

Pods are ephemeral—they die and get recreated with new IP addresses.

Problem: How do other Pods find your API if its IP keeps changing?

Solution: A Service provides a stable DNS name and IP.

apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer

What this does:

  • Creates a stable DNS name: api.sspp-dev.svc.cluster.local
  • Load balances requests across all Pods with label app: api
  • Exposes externally (type: LoadBalancer) via cloud provider's load balancer

Types of Services:

  • ClusterIP: Internal only (default)
  • NodePort: Exposes on each node's IP at a static port
  • LoadBalancer: Creates external load balancer (Linode NodeBalancer)
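
You can check the stable DNS name from inside the cluster with a throwaway Pod (the dns-test name and busybox image are just placeholders):

# Run a temporary Pod and resolve the Service name
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -n sspp-dev \
  -- nslookup api.sspp-dev.svc.cluster.local

# From Pods in the same namespace, the short name "api" also resolves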

ConfigMaps & Secrets: Configuration Management

Problem: Hardcoding environment variables in Deployment YAML is bad:

  • ❌ Can't reuse across environments (dev, staging, prod)
  • ❌ Secrets visible in YAML files
  • ❌ Config changes require redeploying entire app

Solution: Separate configuration from application code.

ConfigMaps: Non-Sensitive Configuration

A ConfigMap stores configuration data as key-value pairs.

apiVersion: v1
kind: ConfigMap
metadata:
  name: sspp-config
  namespace: sspp-dev
data:
  # Simple values
  NODE_ENV: "production"
  REDIS_PORT: "6379"
  ELASTICSEARCH_URL: "http://elasticsearch:9200"
  QUEUE_NAME: "sales-events"

  # Multi-line config files
  app.conf: |
    log_level=info
    max_connections=100
    timeout=30s

Use in Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
      - name: api
        image: davidbrown77/sspp-api:latest
        envFrom:
        - configMapRef:
            name: sspp-config  # Import ALL keys as env vars
        # Or selectively:
        env:
        - name: NODE_ENV
          valueFrom:
            configMapKeyRef:
              name: sspp-config
              key: NODE_ENV

Secrets: Sensitive Data

A Secret stores sensitive data (passwords, tokens, keys), base64-encoded. Note that base64 is an encoding, not encryption: anyone who can read the Secret can decode it.

Create from command line:

# From literal values
kubectl create secret generic sspp-secrets \
  --from-literal=DB_PASSWORD=sspp_password \
  --from-literal=REDIS_PASSWORD=redis_secret \
  -n sspp-dev

# From files
kubectl create secret generic api-tls \
  --from-file=tls.crt=./cert.pem \
  --from-file=tls.key=./key.pem \
  -n sspp-dev

Or define in YAML (base64-encode first):

# Encode secrets
echo -n 'sspp_password' | base64
# Output: c3NwcF9wYXNzd29yZA==

Then reference the encoded values in the Secret manifest:

apiVersion: v1
kind: Secret
metadata:
  name: sspp-secrets
  namespace: sspp-dev
type: Opaque
data:
  DB_PASSWORD: c3NwcF9wYXNzd29yZA==  # base64 encoded
  REDIS_PASSWORD: cmVkaXNfc2VjcmV0

Use in Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  template:
    spec:
      containers:
      - name: api
        image: davidbrown77/sspp-api:latest
        env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sspp-secrets
              key: DB_PASSWORD
        # Or mount as files:
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secrets
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: sspp-secrets

Access mounted secret:

// In your app
const password = fs.readFileSync('/etc/secrets/DB_PASSWORD', 'utf8');
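
Because base64 is reversible, anyone with read access to the Secret can recover the values; a quick way to inspect them:

# Read one key and decode it
kubectl get secret sspp-secrets -n sspp-dev \
  -o jsonpath='{.data.DB_PASSWORD}' | base64 -d

# List the keys without showing values
kubectl describe secret sspp-secrets -n sspp-dev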

Key differences:

Feature     | ConfigMap              | Secret
Use case    | Non-sensitive config   | Passwords, tokens, keys
Encoding    | Plain text             | Base64
Storage     | etcd (plain)           | etcd (can be encrypted at rest)
Size limit  | 1MB                    | 1MB
Best for    | URLs, ports, flags     | Credentials, certificates

Setting Up Kubernetes on Linode

We'll use Linode Kubernetes Engine (LKE) for production-grade Kubernetes without the setup pain.

Create a Cluster

  1. Log into Linode Cloud Manager
  2. Click "Kubernetes" → "Create Cluster"
  3. Configure:
    • Cluster Label: sspp-cluster
    • Region: Choose closest to you
    • Kubernetes Version: 1.28 (or latest stable)
    • Node Pools:
      • Pool: standard-4gb (2 CPU, 4GB RAM)
      • Count: 3 nodes
  4. Click "Create Cluster"

Wait 5-10 minutes for cluster provisioning.

Configure kubectl

Download your kubeconfig:

# Install kubectl (if not already installed)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Download kubeconfig from Linode dashboard
# Or use Linode CLI:
linode-cli lke kubeconfig-view <cluster-id> > ~/.kube/config

# Verify connection
kubectl get nodes

Output:

NAME                        STATUS   ROLES    AGE   VERSION
lke12345-67890-abcdef1234   Ready    <none>   5m    v1.28.0
lke12345-67890-abcdef5678   Ready    <none>   5m    v1.28.0
lke12345-67890-abcdef9012   Ready    <none>   5m    v1.28.0

You now have a 3-node Kubernetes cluster. 🎉
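
A few optional sanity checks before deploying anything:

# API server endpoint and core cluster services
kubectl cluster-info

# Node details: internal/external IPs, OS image, kubelet version
kubectl get nodes -o wide

# Which cluster/context kubectl is currently pointed at
kubectl config current-context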


Deploying SSPP to Kubernetes

Let's deploy our entire stack: PostgreSQL, Redis, Elasticsearch, API, and Worker.

Step 1: Create Namespace

Namespaces isolate resources (dev, staging, prod).

kubectl create namespace sspp-dev

# Set as default
kubectl config set-context --current --namespace=sspp-dev

Step 2: Deploy PostgreSQL

# infrastructure/k8s/postgres.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: sspp-dev
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: linode-block-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: sspp-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15-alpine
        env:
        - name: POSTGRES_DB
          value: sales_signals
        - name: POSTGRES_USER
          value: sspp_user
        - name: POSTGRES_PASSWORD
          value: sspp_password
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: sspp-dev
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
  type: ClusterIP

Key points:

  • PersistentVolumeClaim: Requests 10GB storage (data survives Pod restarts)
  • Deployment: 1 replica (databases don't horizontally scale easily)
  • Service: ClusterIP (internal only, not exposed externally)

Deploy it:

kubectl apply -f infrastructure/k8s/postgres.yaml

# Check status
kubectl get pods
kubectl get pvc
kubectl get svc
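
Optionally, confirm PostgreSQL is up and the database exists (psql connects over the local socket inside the container, so there is no password prompt):

# Open psql inside the postgres Pod and list databases
kubectl exec -it deploy/postgres -n sspp-dev -- \
  psql -U sspp_user -d sales_signals -c '\l'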

Step 3: Deploy Redis

# infrastructure/k8s/redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: sspp-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: sspp-dev
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
  type: ClusterIP

Deploy:

kubectl apply -f infrastructure/k8s/redis.yaml
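
Quick check that Redis is answering:

# Should print PONG
kubectl exec deploy/redis -n sspp-dev -- redis-cli ping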

Step 4: Deploy Elasticsearch

# infrastructure/k8s/elasticsearch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: sspp-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        env:
        - name: discovery.type
          value: single-node
        - name: xpack.security.enabled
          value: "false"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        ports:
        - containerPort: 9200
        - containerPort: 9300
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: sspp-dev
spec:
  selector:
    app: elasticsearch
  ports:
  - name: http
    port: 9200
    targetPort: 9200
  - name: transport
    port: 9300
    targetPort: 9300
  type: ClusterIP

Deploy:

kubectl apply -f infrastructure/k8s/elasticsearch.yaml
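
Elasticsearch can take a minute to come up; one way to check cluster health is through a temporary port-forward:

# Forward local port 9200 to the Service, then query health
kubectl port-forward svc/elasticsearch 9200:9200 -n sspp-dev &
PF_PID=$!
sleep 2
curl -s 'http://localhost:9200/_cluster/health?pretty'
# Expect "status": "green" or "yellow" for a single-node setup

# Stop the port-forward when done
kill $PF_PID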

Step 5: Deploy API Service

# infrastructure/k8s/api.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: sspp-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: davidbrown77/sspp-api:latest
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
        - name: PORT
          value: "3000"
        - name: DB_HOST
          value: postgres
        - name: DB_PORT
          value: "5432"
        - name: DB_NAME
          value: sales_signals
        - name: DB_USER
          value: sspp_user
        - name: DB_PASSWORD
          value: sspp_password
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /api/v1/health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/v1/health
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: sspp-dev
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer

Key features:

  • 2 replicas: Load balanced automatically
  • Resource limits: Prevents resource hogging
  • Health checks: Kubernetes won't route traffic to unhealthy Pods
  • LoadBalancer: Exposes API publicly via Linode NodeBalancer

Deploy:

kubectl apply -f infrastructure/k8s/api.yaml

# Get external IP (takes 1-2 minutes)
kubectl get svc api

Output:

NAME   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
api    LoadBalancer   10.128.45.123   45.79.123.45     80:30123/TCP   2m

Your API is now publicly accessible at http://45.79.123.45!
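
You can hit the same health endpoint the probes use to confirm the API answers through the NodeBalancer:

# Grab the external IP and call the health endpoint
EXTERNAL_IP=$(kubectl get svc api -n sspp-dev -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s http://$EXTERNAL_IP/api/v1/health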

Step 6: Deploy Worker Service

# infrastructure/k8s/worker.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: sspp-dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: davidbrown77/sspp-worker:latest
        env:
        - name: NODE_ENV
          value: production
        - name: DB_HOST
          value: postgres
        - name: DB_PORT
          value: "5432"
        - name: DB_NAME
          value: sales_signals
        - name: DB_USER
          value: sspp_user
        - name: DB_PASSWORD
          value: sspp_password
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        - name: QUEUE_NAME
          value: sales-events
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"

Deploy:

kubectl apply -f infrastructure/k8s/worker.yaml
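
Confirm all three workers are Running and skim their recent logs:

# Three Pods, all Running
kubectl get pods -l app=worker -n sspp-dev

# Tail recent logs from one of the worker Pods
kubectl logs deploy/worker -n sspp-dev --tail=5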

Managing Multiple Environments

Real production setup: You need dev, staging, and production environments.

Strategy 1: Namespaces (Simple)

Separate by namespace:

# Create namespaces
kubectl create namespace sspp-dev
kubectl create namespace sspp-staging
kubectl create namespace sspp-prod

Deploy to each:

# Dev
kubectl apply -f infrastructure/k8s/ -n sspp-dev

# Staging
kubectl apply -f infrastructure/k8s/ -n sspp-staging

# Production
kubectl apply -f infrastructure/k8s/ -n sspp-prod

# Note: for the -n flag to take effect, the manifests must not hardcode
# metadata.namespace; strip the namespace field and let -n choose it.

Create environment-specific ConfigMaps:

# config-dev.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sspp-config
  namespace: sspp-dev
data:
  NODE_ENV: "development"
  LOG_LEVEL: "debug"
  REDIS_HOST: "redis.sspp-dev"
---
# config-prod.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sspp-config
  namespace: sspp-prod
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  REDIS_HOST: "redis.sspp-prod"

Apply:

kubectl apply -f config-dev.yaml
kubectl apply -f config-prod.yaml

Strategy 2: Kustomize (Better)

Kustomize is built into kubectl and lets you customize YAML without templates.

Directory structure:

infrastructure/k8s/
├── base/
│   ├── kustomization.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   └── configmap.yaml
├── overlays/
│   ├── dev/
│   │   ├── kustomization.yaml
│   │   └── patch-replicas.yaml
│   ├── staging/
│   │   └── kustomization.yaml
│   └── prod/
│       ├── kustomization.yaml
│       └── patch-replicas.yaml

base/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml
- configmap.yaml

commonLabels:
  app: sspp-api

overlays/dev/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: sspp-dev

resources:
- ../../base

patchesStrategicMerge:
- patch-replicas.yaml

configMapGenerator:
- name: sspp-config
  literals:
  - NODE_ENV=development
  - LOG_LEVEL=debug

overlays/dev/patch-replicas.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1  # Dev only needs 1 replica

overlays/prod/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: sspp-prod

resources:
- ../../base

patchesStrategicMerge:
- patch-replicas.yaml

configMapGenerator:
- name: sspp-config
  literals:
  - NODE_ENV=production
  - LOG_LEVEL=info

secretGenerator:
- name: sspp-secrets
  literals:
  - DB_PASSWORD=actual_prod_password

overlays/prod/patch-replicas.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 5  # Production needs 5 replicas

Deploy with Kustomize:

# Dev
kubectl apply -k infrastructure/k8s/overlays/dev

# Staging
kubectl apply -k infrastructure/k8s/overlays/staging

# Production
kubectl apply -k infrastructure/k8s/overlays/prod
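
You can also render what an overlay produces without applying it, or diff it against what is live in the cluster:

# Render the final YAML an overlay generates
kubectl kustomize infrastructure/k8s/overlays/dev

# Show what would change in the cluster before applying
kubectl diff -k infrastructure/k8s/overlays/prod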

Benefits:

  • ✅ DRY (Don't Repeat Yourself) - base shared across environments
  • ✅ Environment-specific overrides
  • ✅ No templating language to learn
  • ✅ Native kubectl support

Strategy 3: Separate Clusters (Production-Grade)

For serious production:

┌─────────────────┐
│  Dev Cluster    │  - 1-2 nodes, shared resources
│  sspp-dev       │  - Developers can break things
└─────────────────┘

┌─────────────────┐
│ Staging Cluster │  - Mirrors production size
│  sspp-staging   │  - Pre-release testing
└─────────────────┘

┌─────────────────┐
│  Prod Cluster   │  - High availability (3+ nodes)
│  sspp-prod      │  - Strict access control
└─────────────────┘

Why separate clusters?

  • 🔒 Security: Compromised dev can't affect prod
  • 💰 Cost: Dev uses cheaper spot instances
  • 🎯 Blast radius: Experiments stay contained
  • 📊 Resource isolation: Dev load doesn't impact prod

Deploy to multiple clusters:

# Switch context
kubectl config use-context lke-dev
kubectl apply -k overlays/dev

kubectl config use-context lke-staging  
kubectl apply -k overlays/staging

kubectl config use-context lke-prod
kubectl apply -k overlays/prod
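
To see which contexts exist and which one kubectl is currently pointed at:

# All configured contexts, with the active one marked
kubectl config get-contexts

# Just the active context
kubectl config current-context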

Best Practices for Environment Management

Never hardcode secrets in YAML

# Bad
env:
- name: DB_PASSWORD
  value: "password123"  # ❌ Visible in Git

# Good
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-secrets
      key: password  # ✅ Stored separately

Use external secret managers for production

# Use AWS Secrets Manager, Vault, or Sealed Secrets
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: sspp-secrets
spec:
  secretStoreRef:
    name: aws-secrets-manager
  target:
    name: sspp-secrets
  data:
  - secretKey: DB_PASSWORD
    remoteRef:
      key: prod/sspp/db-password

Namespace quotas prevent resource hogging

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: sspp-dev
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    persistentvolumeclaims: "5"
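
To check how much of the quota is currently consumed:

# Shows Used vs. Hard limits for each resource in the quota
kubectl describe resourcequota dev-quota -n sspp-dev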

RBAC for access control

# Developers can deploy to dev, but not prod
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers
  namespace: sspp-dev
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit  # Can create/update resources
  apiGroup: rbac.authorization.k8s.io
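
You can test what a user or group is allowed to do with kubectl auth can-i (the user jane is a hypothetical example):

# Can members of the developers group create Deployments in dev?
kubectl auth can-i create deployments -n sspp-dev \
  --as=jane --as-group=developers

# And in prod? (should be "no" if RBAC is set up correctly)
kubectl auth can-i create deployments -n sspp-prod \
  --as=jane --as-group=developers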

Refactoring API Deployment with ConfigMaps & Secrets

Let's refactor our API deployment to use proper configuration management:

1. Create ConfigMap:

# infrastructure/k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sspp-config
  namespace: sspp-dev
data:
  NODE_ENV: "production"
  PORT: "3000"
  DB_HOST: "postgres"
  DB_PORT: "5432"
  DB_NAME: "sales_signals"
  REDIS_HOST: "redis"
  REDIS_PORT: "6379"
  ELASTICSEARCH_URL: "http://elasticsearch:9200"

2. Create Secret:

kubectl create secret generic sspp-secrets \
  --from-literal=DB_USER=sspp_user \
  --from-literal=DB_PASSWORD=sspp_password \
  -n sspp-dev

3. Update API Deployment:

# infrastructure/k8s/api.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: sspp-dev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: davidbrown77/sspp-api:latest
        ports:
        - containerPort: 3000
        envFrom:
        - configMapRef:
            name: sspp-config  # Import all non-sensitive config
        env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: sspp-secrets
              key: DB_USER
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: sspp-secrets
              key: DB_PASSWORD
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /api/v1/health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /api/v1/health
            port: 3000
          initialDelaySeconds: 10
          periodSeconds: 5

4. Deploy:

kubectl apply -f infrastructure/k8s/configmap.yaml
kubectl apply -f infrastructure/k8s/api.yaml

# Verify
kubectl get configmap sspp-config -n sspp-dev -o yaml
kubectl get secret sspp-secrets -n sspp-dev

Benefits of this approach:

  • ✅ Configuration separate from code
  • ✅ Easy to update config without rebuilding the image (Pods pick up changes on restart; see the note below)
  • ✅ Secrets not visible in YAML files
  • ✅ Same deployment YAML works across environments (just swap ConfigMap/Secret)
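
One caveat: environment variables injected from a ConfigMap are read when a Pod starts, so after changing the ConfigMap you still need to restart the Pods:

# Apply the updated ConfigMap
kubectl apply -f infrastructure/k8s/configmap.yaml

# Trigger a rolling restart so Pods pick up the new values
kubectl rollout restart deployment/api -n sspp-dev
kubectl rollout status deployment/api -n sspp-dev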

Testing the Deployment

Check Everything is Running

kubectl get pods

NAME                             READY   STATUS    RESTARTS   AGE
api-7d8f9c6b5-abc12              1/1     Running   0          5m
api-7d8f9c6b5-def34              1/1     Running   0          5m
postgres-6c8d7f5b4-ghi56         1/1     Running   0          8m
redis-5b7c6d4a3-jkl78            1/1     Running   0          7m
elasticsearch-4a6b5c3d2-mno90    1/1     Running   0          7m
worker-3c5d4e2f1-pqr12           1/1     Running   0          4m
worker-3c5d4e2f1-stu34           1/1     Running   0          4m
worker-3c5d4e2f1-vwx56           1/1     Running   0          4m

Perfect! All Pods are Running.

Send an Event

# Get external IP
EXTERNAL_IP=$(kubectl get svc api -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send event
curl -X POST http://$EXTERNAL_IP/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "accountId": "acct_k8s_001",
    "userId": "user_k8s_001",
    "eventType": "email_sent",
    "timestamp": "2025-12-22T15:00:00Z",
    "metadata": {
      "campaign": "Kubernetes_Launch"
    }
  }'

Response:

{
  "status": "accepted",
  "jobId": "1",
  "message": "Event queued for processing"
}

Verify Processing

# Check worker logs
kubectl logs -l app=worker --tail=20

# Expected output:
# info: Processing job 1 {"accountId":"acct_k8s_001",...}
# info: Signal stored in PostgreSQL {"signalId":1}
# info: Signal indexed in Elasticsearch {"signalId":1}
# info: Job 1 completed successfully

End-to-end flow working! 🎊


Kubernetes Superpowers

Now let's see what Kubernetes can do that Docker Compose can't.

1. Self-Healing

Kill a Pod:

# List pods
kubectl get pods

# Delete one
kubectl delete pod api-7d8f9c6b5-abc12

# Check again immediately
kubectl get pods

Output:

NAME                    READY   STATUS              RESTARTS   AGE
api-7d8f9c6b5-abc12     1/1     Terminating         0          10m
api-7d8f9c6b5-xyz99     0/1     ContainerCreating   0          1s
api-7d8f9c6b5-def34     1/1     Running             0          10m

Kubernetes immediately created a replacement! No manual intervention.

2. Horizontal Scaling

Scale up:

kubectl scale deployment api --replicas=5

# Check
kubectl get pods -l app=api

Output:

NAME                    READY   STATUS    RESTARTS   AGE
api-7d8f9c6b5-abc12     1/1     Running   0          1m
api-7d8f9c6b5-def34     1/1     Running   0          1m
api-7d8f9c6b5-ghi56     1/1     Running   0          1m
api-7d8f9c6b5-jkl78     1/1     Running   0          10s
api-7d8f9c6b5-mno90     1/1     Running   0          10s

5 API instances in 10 seconds! Try doing that with Docker Compose.

Scale down:

kubectl scale deployment api --replicas=2

Kubernetes gracefully terminates 3 Pods.

3. Rolling Updates (Zero Downtime)

Update the image:

kubectl set image deployment/api api=davidbrown77/sspp-api:v2.0.0

# Watch the rollout
kubectl rollout status deployment/api

What happens:

  1. Kubernetes creates 1 new Pod with v2.0.0
  2. Waits for it to be healthy (readiness probe)
  3. Terminates 1 old Pod
  4. Repeats until all Pods are updated

No downtime. Traffic always routes to healthy Pods.

4. Rollback

New version has a bug? Rollback instantly:

kubectl rollout undo deployment/api

# Or rollback to specific revision
kubectl rollout undo deployment/api --to-revision=2

Rollback time: 30-60 seconds.

Compare to manual deployment: 5-10 minutes (SSH, git pull, rebuild, restart, pray).
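
To see which revisions exist before rolling back to one:

# List recorded revisions of the Deployment
kubectl rollout history deployment/api

# Inspect what a specific revision contained
kubectl rollout history deployment/api --revision=2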

5. Auto-Scaling (HPA)

Create a Horizontal Pod Autoscaler:

kubectl autoscale deployment api \
  --cpu-percent=70 \
  --min=2 \
  --max=10

What this does:

  • Monitors CPU usage (requires the metrics-server add-on, or another metrics API provider, running in the cluster)
  • If average CPU > 70%, scales up
  • If average CPU stays well below 70%, scales down after a stabilization window
  • Min 2 Pods, max 10 Pods

Load test it:

# Generate traffic
for i in {1..1000}; do
  curl -X POST http://$EXTERNAL_IP/api/v1/events \
    -H "Content-Type: application/json" \
    -d '{"accountId":"load_test","userId":"user_'$i'","eventType":"email_sent","timestamp":"2025-12-22T15:00:00Z","metadata":{}}' &
done

# Watch scaling
kubectl get hpa -w

Output:

NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
api    Deployment/api   85%/70%   2         10        5          2m

Kubernetes scaled to 5 Pods automatically!


Essential kubectl Commands

Pods

# List pods
kubectl get pods

# Describe pod (detailed info)
kubectl describe pod <pod-name>

# Logs
kubectl logs <pod-name>
kubectl logs -f <pod-name>  # Follow

# Execute command
kubectl exec -it <pod-name> -- sh

# Delete pod
kubectl delete pod <pod-name>

Deployments

# List deployments
kubectl get deployments

# Describe deployment
kubectl describe deployment <name>

# Scale
kubectl scale deployment <name> --replicas=5

# Update image
kubectl set image deployment/<name> <container>=<image>

# Rollout status
kubectl rollout status deployment/<name>

# Rollback
kubectl rollout undo deployment/<name>

# Delete deployment
kubectl delete deployment <name>

Services

# List services
kubectl get svc

# Describe service
kubectl describe svc <name>

# Delete service
kubectl delete svc <name>

Everything

# List all resources
kubectl get all

# Delete common workload resources in the current namespace
# (note: "all" does not include ConfigMaps, Secrets, or PVCs)
kubectl delete all --all

What We Solved

  • Self-healing - Pods restart automatically
  • Multi-server - a cluster of nodes, not a single server
  • Load balancing - built-in Service resource
  • Horizontal scaling - kubectl scale or HPA
  • Rolling updates - zero-downtime deployments
  • Rollback - one-command revert
  • Health checks - traffic only to healthy Pods
  • Declarative - YAML manifests = desired state


What's Next?

Our Kubernetes deployment works, but it has a critical weakness:

  • Manual infrastructure setup - we created the cluster by clicking buttons
  • Not reproducible - we can't recreate this infrastructure reliably
  • No version control - infrastructure changes aren't tracked
  • Drift risk - production and dev environments diverge over time

In Part 7, we'll fix this with Terraform (Infrastructure as Code).

You'll learn how to:

  • Define infrastructure in code (not clicks)
  • Version control your entire stack
  • Recreate environments identically
  • Manage multiple environments (dev, staging, prod)
  • Treat infrastructure like application code

Spoiler: Delete your entire cluster, run terraform apply, and it rebuilds perfectly.


Try It Yourself

Challenge: Deploy SSPP to Linode Kubernetes Engine:

  1. Create LKE cluster (3 nodes)
  2. Deploy all services (postgres, redis, elasticsearch, api, worker)
  3. Send 100 events
  4. Kill Pods and watch them resurrect
  5. Scale API to 10 replicas
  6. Roll out a new image version

Bonus: Set up HPA and load test it.


Discussion

What was your "aha!" moment with Kubernetes?

Share on GitHub Discussions.


Previous: Part 5: Why Containers Still Fail in Production

Next: Part 7: Terraform - Making Infrastructure Repeatable

About the Author

Building this series for my Proton.ai application to demonstrate real DevOps thinking.

Top comments (0)