Adil Khan

I Built a Multi-Service Kubernetes App and Here's What Actually Broke

The Goal

This wasn't a "follow the tutorial" project. The goal was simple: understand how real distributed systems actually work inside Kubernetes.

Not just "deploy containers," but truly understand:

  • How services discover each other
  • How internal networking routes traffic
  • How ingress exposes applications externally
  • How TLS termination works at the edge
  • How secrets and configs propagate
  • How rolling updates affect uptime
  • The difference between stateful and stateless workloads
  • How DNS resolution works across namespaces
  • How to debug when things inevitably break

The result is a multi-service voting application that mirrors a real production microservices architecture.


The Architecture

Five independent services, each with a specific role:

  1. Voting Frontend - Stateless web UI where users cast votes
  2. Results Frontend - Stateless web UI displaying real-time results
  3. Redis - Message queue for asynchronous processing
  4. PostgreSQL - Persistent database storing vote data
  5. Worker Service - Background processor consuming from queue and writing to database

The traffic flow follows a typical queue-based distributed pipeline:

User → Frontend → Queue → Worker → Database → Results

System Architecture Diagram

[Diagram: Kubernetes Voting App Architecture]

Architecture Overview:

The diagram above illustrates the complete system architecture showing:

  • External Layer: User traffic entering via HTTPS (port 8443)
  • Ingress Layer: TLS termination and path-based routing
  • Application Layer: Stateless frontend services (voting and results)
  • Data Layer: Redis (message queue) and PostgreSQL (persistent storage)
  • Processing Layer: Worker service connecting queue to database

Key Design Decisions:

  • All internal communication uses ClusterIP Services
  • External access is controlled through Ingress with host-based routing
  • Services are isolated and communicate only through defined interfaces
  • No hardcoded IPs anywhere - everything uses service discovery

Kubernetes Components Deep Dive

Deployments

Deployments manage the stateless workloads in this system:

  • Voting frontend
  • Results frontend
  • Redis
  • Worker
  • PostgreSQL (initially)

What Deployments Enable:

  • Rolling updates without downtime
  • Declarative replica scaling
  • Self-healing when pods crash
  • Controlled rollout strategies

Every time I updated a deployment, Kubernetes created new pods, waited for them to be ready, then terminated old ones. Zero downtime.

StatefulSets (The Deep Learning)

StatefulSets were explored separately to understand how stateful workloads differ fundamentally from stateless ones.

Key StatefulSet Characteristics:

  • Stable, persistent pod identities (pod-0, pod-1, etc.)
  • Ordered, graceful deployment and scaling
  • Stable network identifiers via headless services
  • Per-pod persistent storage that survives rescheduling

PostgreSQL was initially deployed as a Deployment. Then I migrated it to a StatefulSet to understand:

  • How DNS works with headless services
  • How PersistentVolumeClaims attach to specific pods
  • Why ordered startup matters for clustered databases
  • How rollout behavior changes with stateful workloads
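To make the differences concrete, here is a minimal sketch of what that migration looks like — a headless Service plus a StatefulSet. The names, image tag, and storage size are illustrative, not this project's actual manifests:

```yaml
# Headless Service: no virtual IP; DNS returns the pod IPs directly,
# and each pod gets a stable name like db-0.db.default.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # must reference the headless Service above
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: postgres
        image: postgres:15
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:  # one PVC per pod; survives pod rescheduling
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

The two fields a plain Deployment doesn't have — serviceName and volumeClaimTemplates — are exactly what provide the stable identity and per-pod storage described above.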

Services: The Networking Glue

Services provide stable networking in an environment where pod IPs are ephemeral.

ClusterIP Services (Internal):

apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379

This creates a stable endpoint at redis.default.svc.cluster.local that load-balances across Redis pods.

Key Learning: Services are not pods. Services are stable DNS names that route to pods. When pods die and are recreated with new IPs, the Service continues working.

Ingress: External Traffic Routing

Ingress defines HTTP routing rules, but requires an Ingress Controller to actually process traffic.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: oggy.local
    http:
      paths:
      - path: /vote
        pathType: Prefix
        backend:
          service:
            name: voting-service
            port:
              number: 80
      - path: /result
        pathType: Prefix
        backend:
          service:
            name: result-service
            port:
              number: 80

The Ingress Flow:

  1. User makes request to oggy.local/vote
  2. Request hits Ingress Controller
  3. Controller evaluates Ingress rules
  4. Traffic forwards to voting-service on port 80
  5. Service load-balances to backend pods

TLS Termination

This project implements TLS termination at the Ingress level, enabling HTTPS access.

Creating TLS certificates:

# Generate self-signed certificate for local development
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout oggy.key \
  -out oggy.crt \
  -subj "/CN=oggy.local/O=oggy.local"

Creating Kubernetes TLS Secret:

apiVersion: v1
kind: Secret
metadata:
  name: oggy-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-cert>
  tls.key: <base64-encoded-key>

Or create it directly from files:

kubectl create secret tls oggy-tls \
  --cert=oggy.crt \
  --key=oggy.key
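It's worth inspecting the certificate before loading it into a Secret — a quick sanity check that the CN and expiry are what you intended (assumes openssl is on your PATH):

```shell
# Generate the self-signed pair (same command as earlier in this post)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout oggy.key -out oggy.crt \
  -subj "/CN=oggy.local/O=oggy.local" 2>/dev/null

# Inspect the subject and expiry date; the subject should contain
# CN = oggy.local, matching the Ingress host
openssl x509 -in oggy.crt -noout -subject -enddate
```

Catching a typo in the CN here is much cheaper than debugging a host mismatch at the Ingress later.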

Ingress with TLS:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
  - hosts:
    - oggy.local
    secretName: oggy-tls
  rules:
  - host: oggy.local
    http:
      paths:
      - path: /vote
        pathType: Prefix
        backend:
          service:
            name: voting-service
            port:
              number: 80

Now the application is accessible via HTTPS:

  • Voting: https://oggy.local:8443/vote
  • Results: https://oggy.local:8443/result

Key Learning: TLS termination at the Ingress means:

  • Traffic from user to Ingress Controller is encrypted (HTTPS)
  • Traffic from Ingress to backend Services is unencrypted (HTTP)
  • Certificates are managed centrally, not per-service
  • Backend services don't need to handle TLS

Traffic Flow: Internal vs External

Internal Communication

All services communicate using DNS-based service discovery:

voting-frontend → redis.default.svc.cluster.local
worker → redis.default.svc.cluster.local  
worker → db.default.svc.cluster.local
result-frontend → db.default.svc.cluster.local

No pod IPs. No hardcoded addresses. Pure service discovery.

External Access

Users access the application through HTTPS with TLS termination:

  • Voting: https://oggy.local:8443/vote
  • Results: https://oggy.local:8443/result

The Ingress Controller handles:

  • TLS termination (decrypting HTTPS traffic)
  • Path-based routing to appropriate Services
  • Load balancing across backend pods

What Broke and How I Fixed It

Problem 1: Pod IPs Keep Changing

What happened: I initially tried connecting services using pod IPs. Pods got rescheduled, IPs changed, everything broke.

Root cause: Pods are ephemeral. Their IPs are not stable.

Solution: Use Services as stable endpoints. Services maintain consistent DNS names regardless of pod lifecycle.

# ❌ Wrong: Hardcoding pod IP
REDIS_HOST: "10.244.0.5"

# ✅ Right: Using service name
REDIS_HOST: "redis"

Problem 2: Ingress Resources Did Nothing

What happened: Created Ingress resources. Nothing worked. Traffic never reached the apps.

Root cause: Ingress resources are just configuration. They require an Ingress Controller to actually process traffic.

Solution: Installed nginx-ingress controller separately:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

Think of it like this:

  • Ingress Resource = routing rules (the "what")
  • Ingress Controller = traffic processor (the "how")

Problem 3: Service Names Didn't Resolve Across Namespaces

What happened: Services in different namespaces couldn't find each other using short names.

Root cause: DNS resolution in Kubernetes is namespace-scoped by default.

Solution: Use fully qualified domain names (FQDN):

# Within same namespace
REDIS_HOST: "redis"

# Across namespaces
REDIS_HOST: "redis.production.svc.cluster.local"

DNS resolution follows this pattern:

  1. redis → searches current namespace
  2. redis.production → searches production namespace
  3. redis.production.svc.cluster.local → explicit FQDN
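This search order comes from the resolv.conf Kubernetes injects into every pod. A typical one looks like this (the nameserver IP varies by cluster):

```text
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
```

Because of ndots:5, a short name like redis has fewer than five dots, so the resolver tries each search suffix in order before giving up — which is exactly why redis resolves within the same namespace but silently fails across namespaces.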

Problem 4: Ingress Controller Wouldn't Schedule

What happened: Ingress Controller pod stuck in Pending state. Never scheduled to a node.

Root cause: Local cluster had node taints and labels that prevented scheduling.

Solution: Added tolerations and adjusted node selector:

tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"

Learning: Cloud clusters and local clusters (kind, minikube) have different default configurations. Local clusters often taint control-plane nodes to prevent workload scheduling.


Configuration Management: Secrets and ConfigMaps

ConfigMaps for Non-Sensitive Data

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  REDIS_HOST: "redis"
  DB_HOST: "db"
  DB_NAME: "votes"

ConfigMaps store configuration as key-value pairs that can be injected into pods.

Secrets for Sensitive Data

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DB_PASSWORD: cG9zdGdyZXM=  # base64 encoded

Critical: Secrets are base64-encoded, not encrypted by default. For production, use encryption at rest or external secret managers (Vault, AWS Secrets Manager, etc.).
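You can verify this yourself — base64 is an encoding, not encryption, and anyone who can read the Secret can decode it:

```shell
# Encode (this is what goes in the Secret's data field);
# -n avoids including a trailing newline in the encoded value
echo -n "postgres" | base64
# cG9zdGdyZXM=

# Decode — no key required, which is exactly the problem
echo "cG9zdGdyZXM=" | base64 -d
# postgres
```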

Injecting Configuration into Pods

env:
- name: REDIS_HOST
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: REDIS_HOST
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: DB_PASSWORD
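Environment variables aren't the only option: ConfigMaps and Secrets can also be mounted as files, which means the app can pick up updated values without a pod restart (env vars are fixed at container start). A sketch of the volume approach, reusing the same names:

```yaml
# Pod spec fragment: mount app-config as a directory of files
volumes:
- name: config-volume
  configMap:
    name: app-config
containers:
- name: app
  image: voting-app:v1   # illustrative image name
  volumeMounts:
  - name: config-volume
    mountPath: /etc/app-config
    readOnly: true
    # each key becomes a file, e.g. /etc/app-config/REDIS_HOST
```

One caveat worth knowing: mounted ConfigMap files are updated eventually (not instantly), and subPath mounts don't receive updates at all.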

Rolling Updates: Zero-Downtime Deployments

Kubernetes Deployments support rolling updates out of the box.

Update Strategy:

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1

What happens during an update:

  1. Kubernetes creates 1 new pod (maxSurge: 1)
  2. Waits for new pod to be ready
  3. Terminates 1 old pod (maxUnavailable: 1)
  4. Repeats until all pods are updated
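Step 2 only works if Kubernetes knows what "ready" means — that's the readiness probe. Without one, a pod counts as ready the moment its container starts, and a rolling update can happily route traffic to an app that hasn't finished booting. A sketch (the endpoint path and port are illustrative, not this project's actual config):

```yaml
containers:
- name: voting-app
  image: voting-app:v1
  readinessProbe:          # gates Service traffic and rollout progress
    httpGet:
      path: /healthz       # your app's health endpoint may differ
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:           # restarts the container if it hangs
    httpGet:
      path: /healthz
      port: 80
    periodSeconds: 15
```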

Testing rolling updates:

# Update image version
kubectl set image deployment/voting-app voting-app=voting-app:v2

# Watch the rollout
kubectl rollout status deployment/voting-app

# Rollback if needed
kubectl rollout undo deployment/voting-app

Zero downtime. Zero manual intervention.


Debugging Distributed Systems

Essential Debugging Commands

Check pod status:

kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs <pod-name> --previous  # logs from crashed container

Test service connectivity:

# Create debug pod
kubectl run debug --image=nicolaka/netshoot -it --rm -- /bin/bash

# Inside debug pod
nslookup redis
nslookup redis.default.svc.cluster.local
curl http://voting-service/vote

Check ingress:

kubectl get ingress
kubectl describe ingress app-ingress

Verify service endpoints:

kubectl get endpoints redis

This shows which pod IPs the service is routing to. If empty, your selector doesn't match any pods.
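The most common cause of empty endpoints is a selector/label mismatch: the Service's selector must exactly match the labels on the Deployment's pod template. Illustrative fragments of the two sides:

```yaml
# Service side
spec:
  selector:
    app: redis      # must match...

# Deployment pod template side
template:
  metadata:
    labels:
      app: redis    # ...these labels exactly, or endpoints stay empty
```

Note the selector matches the pod template's labels, not the Deployment's own metadata labels — an easy mistake to make.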


Local Cluster Setup

Creating a kind cluster:

# Create cluster with ingress port mappings
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
EOF

Install nginx-ingress for kind:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

Configure local DNS:

# Add hostname to /etc/hosts
echo "127.0.0.1 oggy.local" | sudo tee -a /etc/hosts

Deploy the application:

kubectl apply -f namespace.yaml
kubectl apply -f configMap.yaml
kubectl apply -f secrets.yaml
kubectl apply -f tls-secret.yaml
kubectl apply -f .

Access the application:

  • HTTP: http://oggy.local:8080/vote
  • HTTPS: https://oggy.local:8443/vote (with TLS)
  • Results: https://oggy.local:8443/result

Note: Your browser will show a security warning for the self-signed certificate. This is expected for local development.


Project Structure

.
├── README.md
├── namespace.yaml              # Namespace definition
├── configMap.yaml              # ConfigMap for app configuration
├── secrets.yaml                # Secrets for sensitive data
├── tls-secret.yaml            # TLS certificates for HTTPS
├── oggy.crt                   # TLS certificate file
├── oggy.key                   # TLS private key file
├── deployment-postgres.yaml   # PostgreSQL Deployment
├── deployment-redis.yaml      # Redis Deployment
├── deployment-result.yaml     # Results Frontend Deployment
├── deployment-voting.yaml     # Voting Frontend Deployment
├── deployment-worker.yaml     # Worker Deployment
├── service-postgres.yaml      # PostgreSQL Service
├── service-redis.yaml         # Redis Service
├── service-results.yaml       # Results Service
├── service-voting.yaml        # Voting Service
└── ingress.yaml               # Ingress with TLS configuration

Key Takeaways

This project isn't about running containers in Kubernetes. It's about understanding how Kubernetes actually works.

Mental Models That Clicked:

  1. Kubernetes networking is service-driven, not pod-driven

    Pods are ephemeral. Services are stable. Always route through Services.

  2. Ingress requires both rules and a controller

    Rules define routing logic. Controllers implement the logic.

  3. DNS resolution is namespace-scoped

    Short names work within namespaces. Cross-namespace requires FQDNs.

  4. Local clusters behave differently than cloud clusters

    Taints, tolerations, and storage classes vary significantly.

  5. StatefulSets are fundamentally different from Deployments

    Stable identities, ordered operations, and per-pod storage make stateful workloads possible.

Once these mental models clicked, advanced Kubernetes concepts (NetworkPolicies, PodDisruptionBudgets, HorizontalPodAutoscalers) started making sense.


What's Next

This project covers the fundamentals plus TLS termination. Real production systems add even more:

  • Automated certificate management with cert-manager (vs manual certificates)
  • Persistent volumes with storage classes for stateful workloads
  • Horizontal Pod Autoscalers for dynamic scaling based on metrics
  • Network Policies for pod-to-pod traffic control and security
  • Resource limits and requests for scheduling and QoS
  • Health checks (liveness and readiness probes)
  • Monitoring with Prometheus and Grafana
  • Log aggregation with ELK or Loki

But you can't build those without understanding the fundamentals first.


Setup and Deployment

Clone the repository:

git clone https://github.com/yourusername/kubernetes-voting-app.git
cd kubernetes-voting-app

Deploy everything:

kubectl apply -f .

Verify deployment:

kubectl get pods
kubectl get svc
kubectl get ingress

Cleanup:

kubectl delete -f .

Resources

Alternative local clusters: minikube, k3s, Docker Desktop


Source Code
GitHub

Questions or feedback? Drop a comment below. Happy to discuss Kubernetes architecture, debugging strategies, or anything else related to distributed systems.


#Kubernetes #DevOps #Microservices #Docker #CloudNative #DistributedSystems
