
Adil Khan


I Built a Multi-Service Kubernetes App and Here's What Actually Broke

I spent the last few weeks deploying a multi-service voting application on Kubernetes.

Not because I needed a voting app. Because I needed to understand how Kubernetes actually handles real application traffic.

There's a gap between running a single container in a pod and understanding how multiple services discover each other, how traffic flows internally, and how external requests actually reach your application.

This project closed that gap for me.

What I Built

A voting system with five independent components:

  • Voting frontend (where users vote)
  • Results frontend (where users see results)
  • Redis (acting as a queue)
  • PostgreSQL (persistent storage)
  • Worker service (processes votes asynchronously)

Each component runs in its own container. Each is managed independently by Kubernetes. None of them knows about pod IPs. Everything communicates through service discovery.

This mirrors how real microservices work in production.

The Architecture Isn't Random

I didn't pick this setup arbitrarily. This is what actual distributed systems look like:

  • Frontend services are stateless and can scale horizontally
  • Data services are isolated for persistence
  • Communication happens via stable network abstractions
  • External traffic enters through a controlled entry point

Kubernetes handles the orchestration. I needed to understand how.

Kubernetes Objects I Actually Used

Deployments
These manage the workloads. They define replica counts and ensure pods get recreated if they fail. Every major component runs as a Deployment.
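
Here's roughly what one of those looks like. This is a minimal sketch for the voting frontend, not my exact manifest; the image and label names are stand-ins:

```yaml
# Minimal Deployment sketch for the voting frontend.
# The image and label names are illustrative, not the exact ones from my setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vote
spec:
  replicas: 2                # Kubernetes keeps two pods running at all times
  selector:
    matchLabels:
      app: vote
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
        - name: vote
          image: dockersamples/examplevotingapp_vote   # assumed image
          ports:
            - containerPort: 80
```

If a pod dies, the Deployment's ReplicaSet replaces it. That's the "recreated if they fail" part.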

Pods
The smallest unit Kubernetes schedules. They're ephemeral. They die and get recreated. You never access them directly.

Services
This is where it clicked for me. Services provide stable DNS names and IPs. Pods can change IPs constantly. Services don't. All internal communication goes through Services.

I used two types:

  • ClusterIP for internal-only communication (Redis, PostgreSQL)
  • NodePort temporarily for testing frontends before I understood Ingress
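
The Redis Service, for example, only needs to be reachable from inside the cluster, so ClusterIP (the default) is enough. A minimal sketch, with illustrative selector and port values:

```yaml
# ClusterIP Service sketch for Redis (internal-only).
# Selector and port values are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: redis            # this name becomes the stable DNS name other pods use
spec:
  type: ClusterIP        # the default type; never exposed outside the cluster
  selector:
    app: redis           # must match the labels on the Redis pods
  ports:
    - port: 6379         # port the Service listens on
      targetPort: 6379   # port the Redis container listens on
```

Anything in the cluster can now reach Redis at redis:6379, no matter how many times the underlying pod is recreated.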

Ingress
Defines HTTP routing rules for external traffic. Host-based and path-based routing through a single entry point.
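
Conceptually, mine looked something like this sketch. The hostnames, service names, and ingress class are assumptions for illustration, not my exact rules:

```yaml
# Ingress sketch: host-based routing to the two frontends.
# Hostnames, service names, and the ingress class are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: voting-app
spec:
  ingressClassName: nginx        # assumes an NGINX-based Ingress Controller
  rules:
    - host: vote.local           # requests for this host...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vote       # ...go to the voting frontend Service
                port:
                  number: 80
    - host: result.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: result     # results frontend Service
                port:
                  number: 80
```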

Here's what tripped me up: Ingress resources don't do anything by themselves.

Ingress Controller
This is the actual component that receives and processes traffic. It runs as a pod and dynamically configures itself based on Ingress rules.

Without an Ingress Controller installed, your Ingress rules are useless. I learned this the hard way.

How Traffic Actually Flows

Internal Traffic

Inside the cluster:

  1. Voting frontend sends votes to Redis using the Redis Service name
  2. Worker reads from Redis using the Redis Service name
  3. Worker writes results to PostgreSQL using the database Service name
  4. Results frontend reads from PostgreSQL using the database Service name

No pod IPs anywhere. Service names resolve automatically through the cluster's internal DNS.
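
In practice that just means application config points at Service names. A hypothetical fragment of the worker's container spec (the env var names and image are illustrative, not the app's real configuration):

```yaml
# Container spec fragment: the worker finds Redis and PostgreSQL by Service name.
# REDIS_HOST, DB_HOST, and the image are hypothetical, for illustration only.
containers:
  - name: worker
    image: dockersamples/examplevotingapp_worker   # assumed image
    env:
      - name: REDIS_HOST
        value: redis     # resolves to the Redis Service, never a pod IP
      - name: DB_HOST
        value: db        # resolves to the PostgreSQL Service
```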

External Traffic

From the browser to the application:

  1. User sends HTTP request
  2. Request hits the Ingress Controller
  3. Ingress rules get evaluated
  4. Traffic forwards to the correct Service
  5. Service load-balances to backend pods

Ingress operates at the HTTP level. It's the production-grade way to expose applications.

What Actually Broke (and What I Learned)

Pod IPs Keep Changing

Pods were getting recreated automatically. Their IPs changed every time. Hardcoding IPs didn't work.

Solution: Use Services. Always. Services provide stable endpoints. This is what they're designed for.

Service Types Confused Me

I didn't understand why there were multiple Service types or when to use which one.

Solution: ClusterIP is for internal communication only. NodePort exposes services on node IPs (useful for testing, not for production). Ingress is the right way to handle external HTTP traffic.
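
For comparison, this is roughly what a temporary NodePort Service looks like. The port value is just an example inside the default 30000-32767 NodePort range:

```yaml
# NodePort Service sketch, useful only for quick testing from outside the cluster.
# Selector and port values are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: vote-nodeport
spec:
  type: NodePort
  selector:
    app: vote
  ports:
    - port: 80           # Service port inside the cluster
      targetPort: 80     # container port
      nodePort: 30080    # exposed on every node's IP at this port
```

Once Ingress was working, I didn't need this anymore.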

Ingress Didn't Work

I created Ingress resources. Traffic still wasn't reaching my apps.

Solution: You need an Ingress Controller installed separately. The Ingress resource is just configuration. The controller is what actually processes traffic. Once I installed the controller, everything worked.

Ingress Controller Wouldn't Schedule

The controller pod was stuck in the Pending state.

Solution: In my local cluster, I needed to fix node labels and tolerations so the controller could schedule on the control-plane node. This doesn't happen in cloud environments, but it matters in local setups.
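
The moving parts were a node label the controller's manifest selects on and a toleration for the control-plane taint. A sketch of that pod-spec fragment follows; the ingress-ready label key is just an example, since the exact key depends on the controller manifest you deploy:

```yaml
# Pod spec fragment: let the controller schedule on a control-plane node.
# The "ingress-ready" label key is an example; your controller manifest may differ.
spec:
  nodeSelector:
    ingress-ready: "true"                      # label added to the target node
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule                       # the standard control-plane taint
```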

Local Networking Doesn't Work Like Cloud

External access from my browser didn't work directly in my container-based local cluster.

Solution: Port forwarding. I forwarded the Ingress Controller's port to my local machine, which stands in for the load balancer a cloud environment would put in front of the controller.

Service Names Didn't Resolve Everywhere

Service names weren't resolving across namespaces or from outside the cluster.

Solution: Kubernetes Service DNS is namespace-scoped by default. A short name only resolves from pods in the same namespace; from another namespace you have to qualify it with the namespace, and from outside the cluster the internal DNS isn't reachable at all.
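
The fully qualified form follows a fixed pattern: service.namespace.svc.cluster-domain. A hypothetical example, assuming the Redis Service lives in a namespace called vote-app:

```yaml
# Env fragment: reaching a Service in another namespace needs the qualified name.
# The "vote-app" namespace is hypothetical, used only for illustration.
env:
  - name: REDIS_HOST
    value: redis.vote-app.svc.cluster.local   # <service>.<namespace>.svc.<cluster-domain>
```

From a pod in the same namespace, plain "redis" still resolves; the short name is a convenience that only works locally.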

What I Actually Understand Now

Before this project, I could write Kubernetes manifests. But I didn't really get how the pieces connected.

Now I understand:

  • Kubernetes networking is service-driven, not pod-driven
  • Ingress needs both rules and a controller to function
  • Local clusters behave differently than cloud clusters
  • Service discovery happens through DNS, not hardcoded IPs
  • Debugging requires understanding both the platform and the application

Why This Matters

This isn't about running containers. It's about understanding how Kubernetes:

  • Routes traffic between services
  • Discovers services dynamically
  • Separates internal and external networking
  • Enforces declarative state

Once this mental model clicked, advanced topics started making sense.

Real Takeaway

Build it once to make it work.

Break it to understand why it works.

I could have just deployed this app using a tutorial and called it done. But I wouldn't have learned how service discovery actually functions, or why Ingress controllers exist, or what happens when pods get recreated.

The debugging forced me to understand the platform, not just the syntax.

If you're learning Kubernetes, pick a multi-service application and deploy it. Then break it. Then fix it. That's where the understanding comes from.

What's been the hardest part of Kubernetes for you? Drop a comment.


Full code and setup instructions:
kubernetes-sample-voting-app-project1

Architecture diagram and detailed breakdown:

#kubernetes #devops #learning #microservices
