
Kubernetes Networking Finally Explained (From User → Pod → User)

A Complete Real-World Mental Model for Microservices on EKS

This article exists because most Kubernetes tutorials explain components, but never explain the flow.

If you’ve ever thought:

  • “Why do we need a Load Balancer if Kubernetes already balances?”
  • “Where does NodePort actually fit?”
  • “What does Ingress really do?”
  • “How does one URL reach the correct microservice?”
  • “Why does this feel overcomplicated?”

Then this article is for you.

This is not a YAML guide.
This is not a beginner glossary.

This is the complete real-world picture, explained the way senior engineers actually reason about it.


The Promise of This Article

By the end, you will be able to mentally trace a single HTTP request from:

User → Internet → AWS → Kubernetes → Pod → back to User

…and explain every hop without confusion.


Assumptions (Important)

We will keep things intentionally simple and realistic:

  • One AWS region
  • One Availability Zone
  • One EKS cluster
  • Multiple microservices
  • One Ingress Controller
  • One AWS Load Balancer
  • No service mesh
  • No multi-region complexity

This is how most real systems actually start.


The Core Idea (Read This First)

Kubernetes does not expose your applications individually.
You expose a single entry point, and everything else stays internal.

Once you understand that, everything else becomes obvious.


The Components (High-Level)

Let’s list the main actors before connecting them:

  1. User (browser / mobile app)
  2. DNS
  3. AWS Load Balancer (ALB)
  4. NodePort
  5. Ingress Controller (NGINX / Envoy)
  6. Kubernetes Services (ClusterIP)
  7. Application Pods

Each of these exists for a very specific reason.


Step 1: Your Microservices (The Inside of the Cluster)

Let’s say you have three microservices:

  • user-service
  • order-service
  • payment-service

Each microservice has:

  • a Deployment
  • its own Service of type ClusterIP

Why ClusterIP?

Because:

  • These services are internal
  • They should never be accessed directly by users
  • They need stable DNS names

Example (conceptually):

user-service.default.svc.cluster.local
order-service.default.svc.cluster.local
payment-service.default.svc.cluster.local
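
To make this concrete, here is a minimal sketch of what one microservice's manifests might look like. The image name, labels, and ports are illustrative assumptions, not part of the original setup:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: example/order-service:1.0   # illustrative image name
          ports:
            - containerPort: 8080            # assumed application port
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: ClusterIP            # the default; internal-only
  selector:
    app: order-service       # matches the Deployment's pod labels
  ports:
    - port: 80               # port other services call
      targetPort: 8080       # port the container listens on

The Service's ClusterIP is only routable inside the cluster, which is exactly the point.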

Important rule:

Microservices NEVER use NodePort. EVER.

If you remember only one rule from this article, make it that.


Step 2: The Ingress Controller (The Only Public Door)

Ingress is where most confusion happens, so let’s be precise.

What Ingress Is NOT

  • Not a Service
  • Not a Load Balancer
  • Not magic

What Ingress Actually Is

Ingress Controller = a Pod running a reverse proxy

Usually:

  • NGINX
  • Envoy
  • HAProxy

This pod:

  • receives HTTP requests
  • looks at hostname + path
  • forwards requests to the correct ClusterIP Service

Ingress does routing, not exposure.
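
As a rough sketch, the Ingress resource that the controller reads might look like this. The hostname, path, and ingress class are assumptions chosen to match the request we trace later:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is installed
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service    # the ClusterIP Service from Step 1
                port:
                  number: 80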


Step 3: The Ingress Needs to Be Exposed (NodePort’s Job)

The Ingress Controller is just a Pod.
Pods are not directly reachable from the internet.

So how does traffic reach it?

👉 This is where NodePort comes in.

Critical Clarification

NodePort exists ONLY to expose the Ingress Controller — not your apps.

You create one Service:

ingress-controller-service
type: NodePort

This does the following:

  • Opens a high port (30000–32767)
  • On every worker node
  • Forwards traffic to Ingress Controller pods

That’s it.
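
If you wrote that Service out in full, it might look roughly like this. The namespace, labels, and nodePort values are assumptions; most ingress controller installations create an equivalent Service for you:

apiVersion: v1
kind: Service
metadata:
  name: ingress-controller-service
  namespace: ingress-nginx                     # illustrative namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx      # assumed labels on the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080                          # illustrative; must fall in 30000–32767
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443                          # illustrative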

NodePort is:

  • a door
  • a bridge
  • not smart
  • not user-facing

Step 4: AWS Load Balancer (The Front Door)

Now comes the AWS Load Balancer (ALB).

What ALB Does

  • Accepts traffic from the internet
  • Terminates TLS (HTTPS)
  • Performs health checks
  • Chooses a healthy target

What ALB Does NOT Do

  • It does not know about Kubernetes Services
  • It does not know about Pods
  • It does not do /x → service-x

What Does ALB Target?

In a simple EKS setup:

ALB → EC2 Node IP : NodePort

This means:

  • ALB balances across nodes
  • Each node forwards traffic to Ingress pods

👉 THIS is what the Load Balancer is balancing.

Even in one AZ.


Step 5: The Complete Request Flow (End to End)

Let’s trace a real request:

GET https://api.myapp.com/orders

Step-by-Step Flow

1. User sends HTTPS request
2. DNS resolves api.myapp.com → ALB
3. ALB accepts request, terminates TLS
4. ALB selects a healthy node
5. ALB forwards request to NodeIP:NodePort
6. Node forwards request to Ingress Controller pod
7. Ingress inspects:
   - Host: api.myapp.com
   - Path: /orders
8. Ingress matches rule:
/orders → order-service
9. Ingress forwards request to order-service (ClusterIP)
10. Service load-balances to an order-service pod
11. Application processes request
12. Response flows back the same path

Reverse path:

Pod → Service → Ingress → Node → ALB → User

Where Does /x → x-service Actually Happen?

This is extremely important:

Component            Does URL routing?
AWS ALB              ❌ No
NodePort             ❌ No
Service              ❌ No
Ingress Controller   ✅ YES

Ingress is the only place that understands:

  • paths
  • hostnames
  • routing rules
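
Extending the earlier sketch, a single Ingress can fan every path out to its own ClusterIP Service. The /users and /payments paths are assumed for illustration, reusing the service names from Step 1:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-routes
spec:
  ingressClassName: nginx
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payment-service
                port:
                  number: 80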

So Why Not Expose Each Service with NodePort?

Because that would be terrible.

Problems:

  • Multiple open ports
  • Harder DNS
  • No centralized routing
  • No TLS management
  • Security nightmare

Ingress exists to give you:

  • one public entry point
  • one port (443)
  • many internal services

Final Correct Mental Model (Lock This In)

User
 ↓
DNS
 ↓
AWS Load Balancer
 ↓
NodePort (Ingress Service)
 ↓
Ingress Controller Pod (reverse proxy)
 ↓
ClusterIP Service (specific microservice)
 ↓
Application Pod

One Sentence That Explains Everything

Microservices use ClusterIP, Ingress does routing, NodePort exposes Ingress, and the Load Balancer balances entry points.

If you can say that confidently, you understand Kubernetes networking.


Common Myths (Debunked)

❌ “Services create NodePorts by default”

No. Only NodePort and LoadBalancer services do.

❌ “Ingress replaces Services”

No. Ingress routes to Services.

❌ “Load Balancer balances pods”

No. Services balance pods. ALB balances nodes or ingress pods.

❌ “Ingress is between two Services”

No. The Ingress resource is just routing rules; the Ingress Controller that enforces them is a Pod.


Why This Architecture Scales Naturally

What changes as traffic grows?

  • More nodes
  • More ingress pods
  • More app pods

What does NOT change?

  • URLs
  • architecture
  • flow
  • mental model

That’s good design.


Why Kubernetes Feels Hard (But Isn’t)

Kubernetes is not complex — it is layered.

Each layer solves exactly one problem:

Layer            Problem
Load Balancer    Internet entry
NodePort         Cluster bridge
Ingress          Routing
Service          Pod balancing
Pod              Execution

Once you see the layers, the confusion disappears.


Final Takeaway

If you ever forget the details, remember this:

Users never talk to pods.
Pods never talk to the internet.
Ingress is the translator.


Closing

This article exists so you never have to rewatch a dozen half-explanations again.

Bookmark it.
Share it.
Come back to it when Kubernetes starts feeling “magical” again.

Because now — it isn’t.

Happy shipping 🚀
