jobayer ahmed

Why I Chose Cilium Instead of kube-proxy

What is kube-proxy?

kube-proxy is Kubernetes' default network proxy that runs on each node and maintains network rules (via iptables/ipvs) to route traffic to the correct pods. It's been the standard since Kubernetes' early days.

Why Cilium is Better

1. eBPF vs iptables — The Core Difference

kube-proxy relies on iptables (or ipvs), which was never designed for dynamic, large-scale container environments:

  • iptables rules are linear — every packet traverses all rules sequentially
  • Adding/removing rules requires full table rewrites (not atomic updates)
  • At 10,000+ services, iptables becomes a serious bottleneck

Cilium uses eBPF (extended Berkeley Packet Filter), which runs sandboxed programs directly in the Linux kernel:

  • Packet processing happens at the kernel level — no userspace overhead
  • Rules update atomically and instantly
  • O(1) lookups instead of O(n) linear scans
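One way to see this difference on a live cluster (a rough diagnostic sketch, assuming root access to a node running kube-proxy in iptables mode, and separately a Cilium cluster with the agent DaemonSet in kube-system):

```shell
# On a kube-proxy (iptables mode) node: count the NAT rules kube-proxy
# maintains. This number grows roughly linearly with Services/endpoints.
sudo iptables-save -t nat | grep -c 'KUBE-'

# On a Cilium cluster: service backends live in eBPF hash maps instead.
# Dump the load-balancing map from inside the agent pod:
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list
```

On a large cluster the first number can reach tens of thousands of rules, each one a candidate for sequential traversal; the eBPF map is keyed per service, so lookup cost stays flat.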

2. Performance

In published benchmarks at 1000+ services, Cilium's eBPF datapath has been shown to cut service-routing latency by roughly 30–60% and kube-proxy-related CPU usage by around 50%; exact numbers vary with workload, kernel version, and cluster size.

3. Deep Observability (Hubble)

Cilium ships with Hubble — a built-in network observability layer:

  • Real-time Layer 3/4/7 visibility (HTTP, gRPC, Kafka, DNS)
  • Flow logs without modifying app code
  • Service dependency mapping
  • kube-proxy has zero native observability
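To make this concrete, a few illustrative hubble CLI queries (a sketch assuming Hubble is enabled and the client can reach the relay; the pod name is a placeholder, and flag names should be verified against your Hubble version):

```shell
# Stream DNS flows cluster-wide in real time
hubble observe --protocol dns --follow

# Show L7 HTTP flows arriving at a specific pod (placeholder name)
hubble observe --protocol http --to-pod default/my-api

# Show only traffic dropped by network policy
hubble observe --verdict DROPPED
```

None of this requires touching application code; the flows are captured by the eBPF datapath itself.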

4. Security — Network Policies on Steroids

  • kube-proxy has no security role — you need a separate CNI for NetworkPolicy
  • Cilium enforces L3/L4/L7 policies natively in eBPF
  • Can enforce policies based on DNS names, HTTP paths, gRPC methods
  • Mutual TLS (mTLS) via Cilium Service Mesh (no Envoy sidecar needed)
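As an illustration of DNS- and HTTP-aware policy, here is a hedged sketch of a CiliumNetworkPolicy; the labels, namespace, port, and FQDN are all placeholders, not values from this article:

```shell
# Allow app=frontend pods to make GET requests to /api/* on app=backend,
# plus egress to one external FQDN. Enforced by Cilium at L3/L4/L7.
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-l7-policy
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
    - toEndpoints:
        - matchLabels:
            app: backend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/api/.*"
    - toFQDNs:
        - matchName: api.example.com
EOF
```

Note that FQDN rules rely on Cilium observing the pod's DNS traffic, so in practice a DNS egress rule usually accompanies toFQDNs.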

5. Replacing kube-proxy Entirely

Cilium can run in kube-proxy replacement mode (kubeProxyReplacement: true), where it handles:

  • ClusterIP, NodePort, LoadBalancer services
  • ExternalIPs, HostPort
  • Session affinity — all in eBPF
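Enabling replacement mode is a Helm install away. A sketch, with the API server address as a placeholder (Cilium must be told how to reach the API server directly, since there is no kube-proxy to route the kubernetes Service); note that older chart versions used kubeProxyReplacement=strict instead of true:

```shell
# Install Cilium with full kube-proxy replacement via the official chart
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=API_SERVER_IP \
  --set k8sServicePort=6443

# Verify the datapath actually took over service handling
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement
```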

6. Service Mesh Without Sidecars

Traditional service meshes (Istio, Linkerd) inject sidecars into every pod, adding latency and resource overhead. Cilium's sidecarless service mesh instead handles mTLS at the kernel level and L7 routing through a shared per-node Envoy proxy, with no per-pod sidecars.
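For reference, a hedged sketch of turning on mutual authentication (Cilium 1.14+ pairs it with a SPIRE deployment; these Helm values come from recent releases and should be checked against your chart version):

```shell
# Enable Cilium mutual authentication, backed by a bundled SPIRE install.
# Identities are verified per-connection in the datapath, no sidecars.
helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set authentication.mutual.spire.enabled=true \
  --set authentication.mutual.spire.install.enabled=true
```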

Limitations / When Cilium May Not Be Ideal

  • Requires Linux kernel 4.9+ (ideally 5.10+) — older kernels get limited eBPF features
  • Steeper learning curve than kube-proxy
  • More complex to troubleshoot
  • In very small clusters (under ~10 nodes), the operational complexity may outweigh the performance benefits

Calico in eBPF mode is the closest real alternative — it offers similar eBPF benefits and is better known in enterprise environments. But Cilium’s eBPF implementation is generally considered more mature and feature-complete.

Bottom Line

Use Cilium if you care about performance at scale, security depth, and observability. It’s the most production-proven eBPF-native CNI and the best kube-proxy replacement available today. For teams already on Calico, its eBPF mode is a reasonable middle ground.
