I Built a Multi-Service Kubernetes App and Here's What Actually Broke
The Goal
This wasn't a "follow the tutorial" project. The goal was simple: understand how real distributed systems actually work inside Kubernetes.
Not just "deploy containers," but truly understand:
- How services discover each other
- How internal networking routes traffic
- How ingress exposes applications externally
- How TLS termination works at the edge
- How secrets and configs propagate
- How rolling updates affect uptime
- The difference between stateful and stateless workloads
- How DNS resolution works across namespaces
- How to debug when things inevitably break
The result is a multi-service voting application that mirrors real production microservices architecture.
The Architecture
Five independent services, each with a specific role:
- Voting Frontend - Stateless web UI where users cast votes
- Results Frontend - Stateless web UI displaying real-time results
- Redis - Message queue for asynchronous processing
- PostgreSQL - Persistent database storing vote data
- Worker Service - Background processor consuming from queue and writing to database
The traffic flow follows a typical 3-tier distributed pattern:
User → Frontend → Queue → Worker → Database → Results
System Architecture Diagram
Architecture Overview:
The diagram above illustrates the complete system architecture showing:
- External Layer: User traffic entering via HTTPS (port 8443)
- Ingress Layer: TLS termination and path-based routing
- Application Layer: Stateless frontend services (voting and results)
- Data Layer: Redis (message queue) and PostgreSQL (persistent storage)
- Processing Layer: Worker service connecting queue to database
Key Design Decisions:
- All internal communication uses ClusterIP Services
- External access is controlled through Ingress with host-based routing
- Services are isolated and communicate only through defined interfaces
- No hardcoded IPs anywhere - everything uses service discovery
Kubernetes Components Deep Dive
Deployments
Deployments manage the stateless workloads in this system:
- Voting frontend
- Results frontend
- Redis
- Worker
- PostgreSQL (initially)
What Deployments Enable:
- Rolling updates without downtime
- Declarative replica scaling
- Self-healing when pods crash
- Controlled rollout strategies
Every time I updated a deployment, Kubernetes created new pods, waited for them to be ready, then terminated old ones. Zero downtime.
StatefulSets (The Deep Learning)
StatefulSets were explored separately to understand how stateful workloads differ fundamentally from stateless ones.
Key StatefulSet Characteristics:
- Stable, persistent pod identities (pod-0, pod-1, etc.)
- Ordered, graceful deployment and scaling
- Stable network identifiers via headless services
- Per-pod persistent storage that survives rescheduling
PostgreSQL was initially deployed as a Deployment. Then I migrated it to a StatefulSet to understand:
- How DNS works with headless services
- How PersistentVolumeClaims attach to specific pods
- Why ordered startup matters for clustered databases
- How rollout behavior changes with stateful workloads
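A minimal sketch of what that migration involves (the names and sizes here are illustrative, not the exact manifests from this repo): a headless Service, plus a StatefulSet that references it and claims per-pod storage.

```yaml
# Headless Service: clusterIP None gives each pod a stable DNS name
# (db-0.db.default.svc.cluster.local, db-1.db..., etc.)
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  selector:
    app: db
  ports:
    - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db            # must reference the headless Service
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:15
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PVC per pod, reattached on reschedule
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

The volumeClaimTemplates section is the key difference from a Deployment: pod db-0 always gets the same PersistentVolumeClaim back, even after being rescheduled to another node.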
Services: The Networking Glue
Services provide stable networking in an environment where pod IPs are ephemeral.
ClusterIP Services (Internal):
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379
This creates a stable endpoint at redis.default.svc.cluster.local that load-balances across Redis pods.
Key Learning: Services are not pods. Services are stable DNS names that route to pods. When pods die and are recreated with new IPs, the Service continues working.
Ingress: External Traffic Routing
Ingress defines HTTP routing rules, but requires an Ingress Controller to actually process traffic.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: oggy.local
      http:
        paths:
          - path: /vote
            pathType: Prefix
            backend:
              service:
                name: voting-service
                port:
                  number: 80
          - path: /result
            pathType: Prefix
            backend:
              service:
                name: result-service
                port:
                  number: 80
The Ingress Flow:
- User makes a request to oggy.local/vote
- The request hits the Ingress Controller
- The controller evaluates the Ingress rules
- Traffic forwards to voting-service on port 80
- The Service load-balances to backend pods
TLS Termination
This project implements TLS termination at the Ingress level, enabling HTTPS access.
Creating TLS certificates:
# Generate self-signed certificate for local development
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout oggy.key \
  -out oggy.crt \
  -subj "/CN=oggy.local/O=oggy.local"
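Before loading the certificate into the cluster, it's worth sanity-checking that the subject matches the Ingress host. A quick way to do that with openssl (using /tmp paths here for illustration):

```shell
# Generate the same style of self-signed cert into /tmp
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/oggy.key \
  -out /tmp/oggy.crt \
  -subj "/CN=oggy.local/O=oggy.local"

# Inspect the subject to confirm the CN matches the Ingress host
openssl x509 -in /tmp/oggy.crt -noout -subject
```

If the CN doesn't match the host in the Ingress rule, browsers will reject the certificate even after accepting the self-signed warning.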
Creating Kubernetes TLS Secret:
apiVersion: v1
kind: Secret
metadata:
  name: oggy-tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded-cert>
  tls.key: <base64-encoded-key>
Or create it directly from files:
kubectl create secret tls oggy-tls \
  --cert=oggy.crt \
  --key=oggy.key
Ingress with TLS:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  tls:
    - hosts:
        - oggy.local
      secretName: oggy-tls
  rules:
    - host: oggy.local
      http:
        paths:
          - path: /vote
            pathType: Prefix
            backend:
              service:
                name: voting-service
                port:
                  number: 80
Now the application is accessible via HTTPS:
- Voting: https://oggy.local:8443/vote
- Results: https://oggy.local:8443/result
Key Learning: TLS termination at the Ingress means:
- Traffic from user to Ingress Controller is encrypted (HTTPS)
- Traffic from Ingress to backend Services is unencrypted (HTTP)
- Certificates are managed centrally, not per-service
- Backend services don't need to handle TLS
Traffic Flow: Internal vs External
Internal Communication
All services communicate using DNS-based service discovery:
voting-frontend → redis.default.svc.cluster.local
worker → redis.default.svc.cluster.local
worker → db.default.svc.cluster.local
result-frontend → db.default.svc.cluster.local
No pod IPs. No hardcoded addresses. Pure service discovery.
External Access
Users access the application through HTTPS with TLS termination:
- Voting: https://oggy.local:8443/vote
- Results: https://oggy.local:8443/result
The Ingress Controller handles:
- TLS termination (decrypting HTTPS traffic)
- Path-based routing to appropriate Services
- Load balancing across backend pods
What Broke and How I Fixed It
Problem 1: Pod IPs Keep Changing
What happened: I initially tried connecting services using pod IPs. Pods got rescheduled, IPs changed, everything broke.
Root cause: Pods are ephemeral. Their IPs are not stable.
Solution: Use Services as stable endpoints. Services maintain consistent DNS names regardless of pod lifecycle.
# ❌ Wrong: Hardcoding pod IP
REDIS_HOST: "10.244.0.5"
# ✅ Right: Using service name
REDIS_HOST: "redis"
Problem 2: Ingress Resources Did Nothing
What happened: Created Ingress resources. Nothing worked. Traffic never reached the apps.
Root cause: Ingress resources are just configuration. They require an Ingress Controller to actually process traffic.
Solution: Installed nginx-ingress controller separately:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
Think of it like this:
- Ingress Resource = routing rules (the "what")
- Ingress Controller = traffic processor (the "how")
Problem 3: Service Names Didn't Resolve Across Namespaces
What happened: Services in different namespaces couldn't find each other using short names.
Root cause: DNS resolution in Kubernetes is namespace-scoped by default.
Solution: Use fully qualified domain names (FQDN):
# Within same namespace
REDIS_HOST: "redis"
# Across namespaces
REDIS_HOST: "redis.production.svc.cluster.local"
DNS resolution follows this pattern:
- redis → searches the current namespace
- redis.production → searches the production namespace
- redis.production.svc.cluster.local → explicit FQDN
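The short-name behavior comes from the DNS search path that kubelet writes into each pod. A typical /etc/resolv.conf for a pod in the default namespace looks roughly like this (the nameserver IP varies by cluster):

```
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
```

Because default.svc.cluster.local comes first in the search list, a lookup for redis expands to redis.default.svc.cluster.local automatically, which is why short names only work inside the pod's own namespace.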
Problem 4: Ingress Controller Wouldn't Schedule
What happened: Ingress Controller pod stuck in Pending state. Never scheduled to a node.
Root cause: Local cluster had node taints and labels that prevented scheduling.
Solution: Added tolerations and adjusted node selector:
tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
Learning: Cloud clusters and local clusters (kind, minikube) have different default configurations. Local clusters often taint control-plane nodes to prevent workload scheduling.
Configuration Management: Secrets and ConfigMaps
ConfigMaps for Non-Sensitive Data
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  REDIS_HOST: "redis"
  DB_HOST: "db"
  DB_NAME: "votes"
ConfigMaps store configuration as key-value pairs that can be injected into pods.
Secrets for Sensitive Data
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  DB_PASSWORD: cG9zdGdyZXM=  # base64 encoded
Critical: Secrets are base64-encoded, not encrypted by default. For production, use encryption at rest or external secret managers (Vault, AWS Secrets Manager, etc.).
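The value above is just the base64 encoding of the literal string, and anyone with read access to the Secret can reverse it. Easy to demonstrate:

```shell
# Encode (use -n so a trailing newline doesn't sneak into the secret)
echo -n "postgres" | base64
# → cG9zdGdyZXM=

# Decode it right back — base64 is an encoding, not encryption
echo -n "cG9zdGdyZXM=" | base64 -d
# → postgres
```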
Injecting Configuration into Pods
env:
  - name: REDIS_HOST
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: REDIS_HOST
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-secrets
        key: DB_PASSWORD
Rolling Updates: Zero-Downtime Deployments
Kubernetes Deployments support rolling updates out of the box.
Update Strategy:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1
What happens during an update:
- Kubernetes creates 1 new pod (maxSurge: 1)
- Waits for new pod to be ready
- Terminates 1 old pod (maxUnavailable: 1)
- Repeats until all pods are updated
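"Ready" in that sequence is defined by the pod's readiness probe; without one, Kubernetes considers a container ready the moment it starts, which can let a rollout proceed before the app can actually serve traffic. A hypothetical probe for the voting frontend (the path and port here are assumptions, not taken from this repo's manifests) might look like:

```yaml
containers:
  - name: voting-app
    image: voting-app:v2
    ports:
      - containerPort: 80
    readinessProbe:        # gates both rollout progress and Service traffic
      httpGet:
        path: /vote
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```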
Testing rolling updates:
# Update image version
kubectl set image deployment/voting-app voting-app=voting-app:v2
# Watch the rollout
kubectl rollout status deployment/voting-app
# Rollback if needed
kubectl rollout undo deployment/voting-app
Zero downtime. Zero manual intervention.
Debugging Distributed Systems
Essential Debugging Commands
Check pod status:
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
kubectl logs <pod-name> --previous # logs from crashed container
Test service connectivity:
# Create debug pod
kubectl run debug --image=nicolaka/netshoot -it --rm -- /bin/bash
# Inside debug pod
nslookup redis
nslookup redis.default.svc.cluster.local
curl http://voting-service/vote
Check ingress:
kubectl get ingress
kubectl describe ingress app-ingress
Verify service endpoints:
kubectl get endpoints redis
This shows which pod IPs the service is routing to. If empty, your selector doesn't match any pods.
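The selector-to-label match is the entire mechanism behind endpoints. A minimal matching pair (illustrative names) looks like this; if the Service's `spec.selector` and the Deployment's pod template labels drift apart, `kubectl get endpoints` comes back empty:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis        # must match the pod template labels below
  ports:
    - port: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis    # if this drifts from the Service selector, endpoints go empty
    spec:
      containers:
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
```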
Local Cluster Setup
Creating a kind cluster:
# Create cluster with ingress port mappings
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 8080
        protocol: TCP
      - containerPort: 443
        hostPort: 8443
        protocol: TCP
EOF
Install nginx-ingress for kind:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
Configure local DNS:
# Add hostname to /etc/hosts
echo "127.0.0.1 oggy.local" | sudo tee -a /etc/hosts
Deploy the application:
kubectl apply -f namespace.yaml
kubectl apply -f configMap.yaml
kubectl apply -f secrets.yaml
kubectl apply -f tls-secret.yaml
kubectl apply -f .
Access the application:
- HTTP: http://oggy.local:8080/vote
- HTTPS: https://oggy.local:8443/vote (with TLS)
- Results: https://oggy.local:8443/result
Note: Your browser will show a security warning for the self-signed certificate. This is expected for local development.
Project Structure
.
├── README.md
├── namespace.yaml # Namespace definition
├── configMap.yaml # ConfigMap for app configuration
├── secrets.yaml # Secrets for sensitive data
├── tls-secret.yaml # TLS certificates for HTTPS
├── oggy.crt # TLS certificate file
├── oggy.key # TLS private key file
├── deployment-postgres.yaml # PostgreSQL Deployment
├── deployment-redis.yaml # Redis Deployment
├── deployment-result.yaml # Results Frontend Deployment
├── deployment-voting.yaml # Voting Frontend Deployment
├── deployment-worker.yaml # Worker Deployment
├── service-postgres.yaml # PostgreSQL Service
├── service-redis.yaml # Redis Service
├── service-results.yaml # Results Service
├── service-voting.yaml # Voting Service
└── ingress.yaml # Ingress with TLS configuration
Key Takeaways
This project isn't about running containers in Kubernetes. It's about understanding how Kubernetes actually works.
Mental Models That Clicked:
- Kubernetes networking is service-driven, not pod-driven. Pods are ephemeral; Services are stable. Always route through Services.
- Ingress requires both rules and a controller. Rules define the routing logic; controllers implement it.
- DNS resolution is namespace-scoped. Short names work within a namespace; cross-namespace lookups require FQDNs.
- Local clusters behave differently than cloud clusters. Taints, tolerations, and storage classes vary significantly.
- StatefulSets are fundamentally different from Deployments. Stable identities, ordered operations, and per-pod storage make stateful workloads possible.
Once these mental models clicked, advanced Kubernetes concepts (NetworkPolicies, PodDisruptionBudgets, HorizontalPodAutoscalers) started making sense.
What's Next
This project covers the fundamentals plus TLS termination. Real production systems add even more:
- Automated certificate management with cert-manager (vs manual certificates)
- Persistent volumes with storage classes for stateful workloads
- Horizontal Pod Autoscalers for dynamic scaling based on metrics
- Network Policies for pod-to-pod traffic control and security
- Resource limits and requests for scheduling and QoS
- Health checks (liveness and readiness probes)
- Monitoring with Prometheus and Grafana
- Log aggregation with ELK or Loki
But you can't build those without understanding the fundamentals first.
Setup and Deployment
Clone the repository:
git clone https://github.com/yourusername/kubernetes-voting-app.git
cd kubernetes-voting-app
Deploy everything:
kubectl apply -f .
Verify deployment:
kubectl get pods
kubectl get svc
kubectl get ingress
Cleanup:
kubectl delete -f .
Resources
Tools Used:
- kind - Kubernetes IN Docker (used for this project)
- nginx-ingress - Ingress Controller
- kubectl - Kubernetes CLI
Alternative local clusters: minikube, k3s, Docker Desktop
Source Code
GitHub
Questions or feedback? Drop a comment below. Happy to discuss Kubernetes architecture, debugging strategies, or anything else related to distributed systems.
