DEV Community

Otobong Edoho

Kubernetes Networking Deep Dive — Part 2: MetalLB, Nginx Ingress, and NetworkPolicy

Part 2 of 2 — Replacing NodePort with real LoadBalancer IPs, routing traffic through an Ingress controller with clean hostnames, and locking down pod-to-pod communication with NetworkPolicy. All on bare metal. All verified with real tests.

Series navigation:

  • Part 1: Cluster setup with kubeadm, foundational workloads, deploying a full-stack app with ConfigMaps, Secrets, StatefulSets, and CI pipelines → Read Part 1
  • Part 2 (you are here): MetalLB, Nginx Ingress Controller, clean hostnames, and NetworkPolicy enforcement with real tests

Full source code: All Kubernetes manifests referenced in this article are available at
github.com/otie16/k8s-homelab-vm-project


If you followed Part 1, you have a working two-node Kubernetes cluster running a full-stack application — Next.js frontend, Django REST API, and PostgreSQL. You can reach it at 192.168.1.100:30001 and 192.168.1.100:30000.

That works. But it's not how production Kubernetes is meant to work.

In production, you don't tell people "hit port 30247 on any node IP." You give them https://app.yourcompany.com. Traffic enters through a single controlled gateway, gets routed to the right service, and never exposes internal cluster topology.

This is what Part 2 builds. By the end you'll have:

  • MetalLB giving your bare metal cluster real LoadBalancer IPs
  • Nginx Ingress Controller as the single entry point for all HTTP traffic
  • Clean hostnames: app.oty-k8s.local and api.oty-k8s.local on port 80
  • NetworkPolicy enforcing that only the right pods can talk to each other — verified with real tests

Why NodePort Is Not Enough

NodePort opens a high port (from the 30000-32767 range, allocated automatically unless you pin one) on every node in the cluster and forwards traffic to your service. It works, but:

  • Random high ports are ugly and hard to remember
  • You're exposing every node's IP — if a node changes, external config breaks
  • There's no hostname-based routing — you can't serve two services on port 80
  • No TLS termination
  • No single entry point to apply rate limiting, auth, or observability

The production pattern is:

Internet / Local Network
        ↓
   LoadBalancer IP (single IP, port 80/443)
        ↓
   Ingress Controller (Nginx)
        ↓ routes by hostname
   ┌──────────────────────────────────┐
   │ app.oty-k8s.local → frontend   │
   │ api.oty-k8s.local → backend    │
   └──────────────────────────────────┘

One IP. One entry point. Clean hostnames. That's what we're building.


The Problem with LoadBalancer on Bare Metal

On cloud Kubernetes (EKS, GKE, AKS), when you create a Service with type: LoadBalancer, the cloud provider automatically provisions a real load balancer and assigns it an external IP.

On bare metal with kubeadm, there's no cloud provider. Create a LoadBalancer service and you'll see this forever:

NAME                      TYPE           EXTERNAL-IP
ingress-nginx-controller  LoadBalancer   <pending>

<pending> means Kubernetes is waiting for an external controller to assign the IP. On bare metal, nothing does that by default.

MetalLB solves this. It's a load balancer implementation designed specifically for bare metal clusters. It watches for LoadBalancer services and assigns real IPs from a pool you define. Your network learns about these IPs via ARP (Layer 2 mode) and routes traffic to the right node automatically.
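
As a sketch of what MetalLB acts on: any Service of this shape gets an external IP from the configured pool once MetalLB is running. The name and selector here are illustrative placeholders, not part of the article's manifests:

```yaml
# Illustrative only: name and selector are placeholders.
# With MetalLB installed and a pool configured, any type: LoadBalancer
# Service like this is assigned a real external IP from that pool.
apiVersion: v1
kind: Service
metadata:
  name: example-lb
spec:
  type: LoadBalancer
  selector:
    app: example
  ports:
  - port: 80
    targetPort: 8080
```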


Step 1 — Install MetalLB

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml

Wait for MetalLB pods to be ready:

kubectl wait --for=condition=ready pod \
  -l app=metallb \
  -n metallb-system \
  --timeout=90s

kubectl get pods -n metallb-system

Expected:

NAME                          READY   STATUS
controller-xxx                1/1     Running
speaker-xxx (on master)       1/1     Running
speaker-yyy (on worker)       1/1     Running

The controller manages IP address assignments. The speaker pods run on each node and announce IP ownership via ARP — when your laptop asks "who has 192.168.1.200?", the MetalLB speaker on the node holding that service responds.

Configure the IP Address Pool

Pick a range on your local network that's outside your DHCP range so your router doesn't assign those IPs to other devices. Most home routers use .100-.199 for DHCP, so .200-.220 is typically safe:

cat <<EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: local-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.200-192.168.1.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: local-advert
  namespace: metallb-system
spec:
  ipAddressPools:
  - local-pool
EOF

The L2Advertisement tells MetalLB to use Layer 2 mode — simple ARP-based announcement that works on any network without router configuration. No BGP setup needed.

Verify:

kubectl get ipaddresspool -n metallb-system
kubectl get l2advertisement -n metallb-system

Step 2 — Install Nginx Ingress Controller

The Ingress Controller is what actually reads your Ingress resources and configures itself to route traffic. We're using the community Nginx Ingress Controller.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.5/deploy/static/provider/cloud/deploy.yaml

Wait for it:

kubectl wait --for=condition=ready pod \
  -l app.kubernetes.io/component=controller \
  -n ingress-nginx \
  --timeout=90s

Check if MetalLB assigned a real IP:

kubectl get svc -n ingress-nginx

Expected:

NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
ingress-nginx-controller  LoadBalancer   10.96.x.x      192.168.1.200   80:3xxxx/TCP,443:3xxxx/TCP

That 192.168.1.200 in EXTERNAL-IP is MetalLB working. That single IP is now your cluster's front door for all HTTP/HTTPS traffic.


Step 3 — Update Services to ClusterIP

Your Django and Next.js services are currently NodePort. With Ingress in place, external traffic flows: client → MetalLB IP → Ingress Controller → ClusterIP Service → Pod. NodePort is no longer needed.

Update both services to type: ClusterIP:

kubectl apply -f /home/oty-k8s/k8s/backend-service.yaml
kubectl apply -f /home/oty-k8s/k8s/frontend-service.yaml

See → k8s/backend-service.yaml | k8s/frontend-service.yaml
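
The exact manifests live in the repo, but the backend service after the change would look roughly like this. The service name, namespace, and port come from the article's setup; the label selector is an assumption based on the pod labels shown later:

```yaml
# Sketch of the backend service as ClusterIP; selector assumed to
# match the Deployment's pod labels from Part 1.
apiVersion: v1
kind: Service
metadata:
  name: django-backend
  namespace: k8s-vm-app
spec:
  type: ClusterIP        # was NodePort; no nodePort field needed anymore
  selector:
    app: django-backend
  ports:
  - port: 8000
    targetPort: 8000
```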


Step 4 — Create the Ingress Resource

An Ingress resource is a routing table — it tells the Ingress controller which hostnames map to which services.

kubectl apply -f /home/oty-k8s/k8s/ingress.yaml

See the full manifest → k8s/ingress.yaml

The manifest routes:

  • app.oty-k8s.local → nextjs-frontend service on port 3000
  • api.oty-k8s.local → django-backend service on port 8000
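
A minimal sketch of that routing, using the service names and ports above. The pathType and class shown are the usual defaults; the repo manifest is the authoritative version:

```yaml
apiVersion: networking.k8s.io/v1   # note the s in k8s
kind: Ingress
metadata:
  name: k8s-vm-app-ingress
  namespace: k8s-vm-app
spec:
  ingressClassName: nginx
  rules:
  - host: app.oty-k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nextjs-frontend
            port:
              number: 3000
  - host: api.oty-k8s.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: django-backend
            port:
              number: 8000
```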

Common mistake worth knowing about: the apiVersion must be networking.k8s.io/v1 — note the s in k8s. Writing networking.k8.io/v1 (missing the s) causes no matches for kind "Ingress" in version "networking.k8.io/v1". Kubernetes error messages don't highlight the typo, so this one wastes time.

Verify the Ingress has the MetalLB IP assigned:

kubectl get ingress -n k8s-vm-app

Expected:

NAME                 CLASS   HOSTS                                    ADDRESS         PORT(S)
k8s-vm-app-ingress   nginx   app.oty-k8s.local,api.oty-k8s.local  192.168.1.200   80

Step 5 — Configure Local DNS

Since oty-k8s.local isn't a real domain, you need to tell your machine to resolve it to the MetalLB IP.

On Windows (open Notepad as Administrator):

C:\Windows\System32\drivers\etc\hosts

On macOS/Linux:

sudo nano /etc/hosts

Add:

192.168.1.200   app.oty-k8s.local
192.168.1.200   api.oty-k8s.local

Now open your browser:

http://app.oty-k8s.local              → Next.js task manager
http://api.oty-k8s.local/api/tasks/   → Django REST API
http://api.oty-k8s.local/health/      → Health check

No port numbers. Clean hostnames. Traffic flows through the Ingress controller on port 80.


Step 6 — NetworkPolicy

Your application is accessible via clean URLs. But at this point any pod in the cluster can reach any other pod — including your PostgreSQL database directly. That's not acceptable.

NetworkPolicy is Kubernetes' built-in firewall for pod-to-pod traffic. By default all pods communicate freely. Once you apply a NetworkPolicy to a pod, it becomes deny-all for the specified traffic direction, and only connections you explicitly allow get through.
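
For illustration (this is not one of the article's manifests), the smallest possible policy selects every pod in a namespace and lists no allow rules, which is how deny-by-default is usually switched on:

```yaml
# Blanket deny: podSelector {} matches every pod in the namespace,
# and no ingress rules are listed, so all inbound traffic is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: k8s-vm-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```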

The Rules We Want

Ingress Controller → Frontend  (port 3000) ✅
Ingress Controller → Backend   (port 8000) ✅
Frontend           → Backend   (port 8000) ✅
Backend            → Postgres  (port 5432) ✅
Anything else      → Postgres  (port 5432) ❌
Anything else      → Backend   (port 8000) ❌
Anything else      → Frontend  (port 3000) ❌

Apply the policies:

kubectl apply -f /home/oty-k8s/k8s/networkpolicy.yaml

kubectl get networkpolicy -n k8s-vm-app

Expected:

NAME              POD-SELECTOR
postgres-policy   app=postgres
backend-policy    app=django-backend
frontend-policy   app=nextjs-frontend

See the full manifest → k8s/networkpolicy.yaml
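
As a sketch, assuming the pod labels shown in the table above, the postgres policy would look roughly like this; check the repo manifest for the authoritative version:

```yaml
# Only pods labeled app=django-backend in the same namespace may
# reach postgres on 5432; all other ingress is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgres-policy
  namespace: k8s-vm-app
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: django-backend
    ports:
    - protocol: TCP
      port: 5432
```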

Critical YAML Detail — AND vs OR Logic

NetworkPolicy has a subtle YAML structure that's easy to get wrong. The indentation determines whether multiple conditions are ANDed or ORed:

# AND — pod must match BOTH selectors (namespace AND pod label)
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: ingress-nginx
    podSelector:
      matchLabels:
        app: controller

# OR — pod matches EITHER selector (namespace OR pod label)
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: ingress-nginx
- from:
  - podSelector:
      matchLabels:
        app: nextjs-frontend

Whether the selectors are under the same - from: list item (AND) or separate - from: items (OR) is determined by indentation. Getting this wrong silently blocks legitimate traffic and is the most common NetworkPolicy mistake.


Step 7 — Testing That NetworkPolicy Actually Works

This is the most important step. Applying the manifests without errors doesn't mean they do what you intended. You need to verify both directions — what's blocked and what's allowed.

Create a Test Pod

Spin up a busybox pod with no labels matching any policy:

kubectl run nettest \
  --image=busybox \
  --restart=Never \
  -n k8s-vm-app \
  -- sleep 3600

This pod represents anything that shouldn't have access — a compromised container, a misconfigured service, or an attacker who somehow got a shell inside the cluster.


Test 1 — Postgres BLOCKED from random pod ❌

kubectl exec -n k8s-vm-app nettest \
  -- nc -zv -w 3 postgres 5432

(Busybox's nc expects its options before the host and port, so -w 3 goes first.)

Expected result:

nc: postgres (10.x.x.x:5432): Connection timed out
command terminated with exit code 1

The postgres NetworkPolicy denies all ingress except from app=django-backend. The nettest pod has no such label — blocked.


Test 2 — Postgres REACHABLE from backend pod ✅

kubectl exec -n k8s-vm-app \
  $(kubectl get pod -n k8s-vm-app -l app=django-backend \
    -o jsonpath='{.items[0].metadata.name}') \
  -- nc -zv -w 3 postgres 5432

Expected result:

postgres (10.x.x.x:5432) open

The backend pod has label app=django-backend which matches the postgres policy's allow rule.


Test 3 — Backend BLOCKED from random pod ❌

kubectl exec -n k8s-vm-app nettest \
  -- nc -zv -w 3 django-backend 8000

Expected result:

nc: django-backend (10.x.x.x:8000): Connection timed out
command terminated with exit code 1

Test 4 — Backend REACHABLE from frontend pod ✅

kubectl exec -n k8s-vm-app \
  $(kubectl get pod -n k8s-vm-app -l app=nextjs-frontend \
    -o jsonpath='{.items[0].metadata.name}') \
  -- nc -zv -w 3 django-backend 8000

Expected result:

django-backend (10.x.x.x:8000) open

The frontend pod has label app=nextjs-frontend which matches the backend policy's allow rule.


Test 5 — End-to-end through Ingress ✅

curl http://api.oty-k8s.local/health/

Expected:

{"status": "ok"}
curl http://api.oty-k8s.local/api/tasks/

Expected:

[]

Or open http://app.oty-k8s.local in your browser — full task manager UI loading through the Ingress controller.


Clean Up

kubectl delete pod nettest -n k8s-vm-app

Test Results Summary

Test  From             To                  Port  Expected       What it proves
1     random pod       postgres            5432  ❌ timed out   DB unreachable from untrusted sources
2     django-backend   postgres            5432  ✅ open        App can reach its DB
3     random pod       django-backend      8000  ❌ timed out   API unreachable from untrusted sources
4     nextjs-frontend  django-backend      8000  ✅ open        Frontend can call the API
5     laptop browser   app.oty-k8s.local   80    ✅ 200 OK      Full stack works end to end

All five passing means your NetworkPolicy enforces exactly what you intended. The cluster is now network-isolated at the pod level.


Understanding the Full Traffic Flow

With everything in place, here's what happens when you open http://app.oty-k8s.local:

Browser (your laptop)
    │
    │ DNS resolves app.oty-k8s.local → 192.168.1.200
    │ HTTP GET / on port 80
    ↓
192.168.1.200 — MetalLB
(assigned to ingress-nginx LoadBalancer service,
 announced via ARP to your local network)
    │
    │ Host header: app.oty-k8s.local
    ↓
Nginx Ingress Controller Pod (ingress-nginx namespace)
    │
    │ Matches rule: app.oty-k8s.local → nextjs-frontend:3000
    │ NetworkPolicy allows ingress-nginx → nextjs-frontend
    ↓
Next.js Pod (k8s-vm-app namespace)
    │
    │ API call to django-backend:8000
    │ NetworkPolicy allows nextjs-frontend → django-backend
    ↓
Django Backend Pod
    │
    │ Query to postgres:5432
    │ NetworkPolicy allows django-backend → postgres
    ↓
PostgreSQL Pod (StatefulSet, PersistentVolume on node disk)
Enter fullscreen mode Exit fullscreen mode

Every hop is intentional. Every connection is explicitly allowed. Anything not on this list is blocked at the network level.


Common Issues and Fixes

no matches for kind "Ingress" in version "networking.k8.io/v1"
Typo — networking.k8.io should be networking.k8s.io. Fix:

sed -i 's/networking.k8.io/networking.k8s.io/g' ingress.yaml

MetalLB IP stays <pending>
Check MetalLB pods are running and the IPAddressPool is configured. Also verify no other service already claimed the IP range.

kubectl get pods -n metallb-system
kubectl get ipaddresspool -n metallb-system

Ingress returns 404 for all paths
The ingressClassName: nginx doesn't match the installed controller class, or the controller isn't ready yet.

kubectl get ingressclass
# Should show: nginx

NetworkPolicy blocking allowed connections
Check that pod labels match exactly what the policy expects:

kubectl get pods -n k8s-vm-app --show-labels

Then review the AND vs OR indentation in your NetworkPolicy YAML.

503 Service Temporarily Unavailable from Ingress
The Ingress controller can reach the service but no pods are passing their readiness probe.

kubectl get endpoints -n k8s-vm-app
# All services should show pod IPs, not <none>

What's Next

You now have a production-style Kubernetes networking setup. Both services are accessible via clean hostnames through a single Ingress entry point, and NetworkPolicy enforces traffic isolation at the network level with verified tests.

The remaining phases of this homelab roadmap:

Phase 3 — Storage and Stateful Systems
PostgreSQL HA with streaming replication as a StatefulSet, backup strategies, and Longhorn for distributed storage across nodes.

Phase 4 — Observability and Security
Full kube-prometheus-stack deployment, custom Grafana dashboards for application metrics, RBAC, ServiceAccounts, PodSecurityAdmission, and encrypting Secrets at rest.

Phase 5 — GitOps with ArgoCD
Helm charts for the application, ArgoCD for continuous deployment with Git as the single source of truth, and Horizontal Pod Autoscaler for automatic scaling under load.


Key Takeaways

NodePort is for learning, not production. It exposes every node's IP on random high ports with no hostname routing. The production pattern is LoadBalancer → Ingress → ClusterIP.

MetalLB is essential for bare metal. Without it, LoadBalancer services pend forever. With it, you get the same experience as cloud Kubernetes — services get real IPs automatically from a pool you control.

NetworkPolicy is deny-by-default once applied. As soon as you add a NetworkPolicy to a pod, all traffic not explicitly allowed is blocked. This is exactly the right security posture — allowlist, not blocklist.

Always test your NetworkPolicy — don't assume it works. The manifest applying without errors doesn't mean it does what you intended. The AND vs OR YAML structure is subtle and easy to get wrong silently. Test both blocked and allowed paths every time.

The Ingress controller is just Nginx. Demystify it: it's a regular nginx reverse proxy that watches the Kubernetes API and updates its config automatically. When routing breaks, you can kubectl exec into the controller pod and inspect the nginx config directly.


Source code: github.com/otie16/k8s-homelab-vm-project

Part 1: Cluster Setup, Workloads, and a Full-Stack App

Oty is a Lead DevOps/Cloud Engineer and DevOps mentor. Follow for more hands-on infrastructure content.


Tags: Kubernetes DevOps Networking MetalLB Nginx Ingress NetworkPolicy Platform Engineering Homelab Cloud Native
