DEV Community

Matheus

Originally published at releaserun.com

Kubernetes Gateway API vs Ingress vs LoadBalancer: What to Use in 2026

I've watched teams "just add an annotation" and accidentally turn their ingress controller into a production API nobody owns.

If you're choosing between Service type LoadBalancer, Ingress, and Gateway API in 2026, you're really choosing who gets to change traffic rules, how safely they can do it, and how ugly rollback feels at 2am.

The Blunt 2026 Rule: Pick the Layer First

Most debates happen at L7, but plenty of outages start at L4 when someone changes the wrong shared object.

  • Expose raw TCP/UDP: Use Service type LoadBalancer. Boring L4 plumbing, smallest number of moving parts.
  • Route HTTP(S) in one team's cluster: Keep Ingress if your controller and annotations already behave like a stable product in your org.
  • Share one edge across teams and namespaces: Use Gateway API. Built-in "platform owns the Gateway, app teams own Routes" split.
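
For the L4 case, the entire API surface is one Service object. A minimal sketch, with the name, port, and selector labels as placeholders:

```yaml
# Minimal L4 exposure: one Service, one cloud VIP, no HTTP awareness.
# Name, port, and selector are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: payments-tcp
spec:
  type: LoadBalancer
  selector:
    app: payments
  ports:
    - name: tcp
      protocol: TCP
      port: 5432
      targetPort: 5432
```

Anything cloud-specific (internal vs. external, load balancer flavor, source ranges) lands in provider annotations, which is exactly the portability caveat covered later.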

Decision Tree (The One You Can Actually Use)

This bit me when we tried to "standardize ingress." We ended up with six different annotation dialects and a week of guess-and-check every time we touched timeouts.

  • You need multi-tenant delegation: Choose Gateway API when platform owns listeners and app teams need to attach routes without PR wars on one giant Ingress.
  • You need reusable policy without annotation soup: Start with Gateway API, but verify your controller's policy CRDs and conformance.
  • You can tolerate one IP per service: Keep Service type LoadBalancer for "single service, single VIP" setups. It isolates blast radius.
  • You want basic host/path routing and nothing else: Ingress still works. Pretending it's dead will waste your time.
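
That last case is worth seeing in full, because it really is this small. A sketch, assuming an NGINX-class controller and a placeholder hostname:

```yaml
# Basic host/path routing: the case where Ingress is still the simplest answer.
# Hostname, class, and backend service are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```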

Requirements Checklist

Migrate when you hit at least one of these constraints; otherwise you're signing up for work that won't pay you back.

Multi-tenant Routing and Cross-namespace Delegation

If platform owns the edge and app teams own their routes, Ingress turns into a constant negotiation. One giant shared Ingress creates merge conflicts and shared blast radius.

Gateway API bakes in the ownership split: platform owns the Gateway, app teams own HTTPRoute. The listener controls which namespaces can attach routes.
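
A sketch of that split on the platform side; the class name, namespace, and label are assumptions:

```yaml
# Platform-owned Gateway: the listener decides which namespaces may attach routes.
# gatewayClassName, namespace, and the label key/value are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-edge
  namespace: platform-gateways
spec:
  gatewayClassName: example-class
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              edge-access: "granted"
```

App teams in labeled namespaces attach HTTPRoutes; everyone else gets rejected at the API level instead of in a PR review.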

Policy Attachment

Ingress started thin, and real behavior moved into controller annotations and side CRDs. That's how you end up with YAML that looks "Kubernetes-y" but only works on one controller.

Gateway API pushes policy attachment into the model, but you still need governance. Otherwise teams will recreate annotation sprawl using a new set of implementation-specific policy objects.

TLS/SNI and Certificate Ownership

I've seen cert ownership turn into a political problem. Ingress often forces you into "copy secrets into app namespaces" or "centralize everything and block teams."

Gateway listeners let platform own cert references and let teams attach routes by hostname.
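
The listener portion of a Gateway spec makes that concrete. A fragment, with the hostname and Secret name as placeholders; the certificate Secret lives in the Gateway's namespace, not the app's:

```yaml
# Listener-level TLS: the certificate stays with the platform Gateway,
# so app namespaces never hold a copy of the Secret.
listeners:
  - name: https-apps
    protocol: HTTPS
    port: 443
    hostname: "*.apps.example.com"
    tls:
      mode: Terminate
      certificateRefs:
        - kind: Secret
          name: wildcard-apps-example-com
```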

Typed Routing

HTTPRoute, GRPCRoute, TCPRoute, TLSRoute, UDPRoute. If you run gRPC seriously, "Ingress supports gRPC" usually means "your controller has conventions." Gateway API makes routing intent explicit with typed Routes, but controller support varies.
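
A hedged GRPCRoute sketch; the gRPC service, method, and parent Gateway names are placeholders, and GRPCRoute support depends on your controller and Gateway API version:

```yaml
# Typed gRPC routing: method matching is first-class, not a controller convention.
# All names here are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: orders-grpc
  namespace: team-orders
spec:
  parentRefs:
    - name: shared-edge
      namespace: platform-gateways
  hostnames:
    - grpc.example.com
  rules:
    - matches:
        - method:
            service: orders.v1.OrderService
            method: CreateOrder
      backendRefs:
        - name: orders
          port: 9000
```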

Compatibility Reality Check

Ignore the GitHub commit count. What matters is whether your chosen controller, cloud load balancer, and security rules support the features you plan to depend on.

  • Service (LoadBalancer) portability: Medium. The API stays stable, but cloud-specific annotations differ.
  • Ingress portability: Low to medium. Production features live in controller-specific annotations.
  • Gateway API portability: Medium to high when you stick to what your controller proves in conformance tests.

Migration That Survives Production (Ingress to Gateway API)

Do not "big bang" this. I've seen that movie and it ends with a rollback and a postmortem full of screenshots.

Phase 0: Inventory What You Really Use

Export every Ingress. List every annotation in use. Group by controller.

Capture baseline signals: 4xx/5xx rates, p95 latency, TLS handshake errors, upstream resets.

Phase 1: Pick a Controller and Prove Support

Gateway API is a spec. Your controller determines how it behaves, how it fails, and what "supported" means.

Validate conformance level, the exact features you need, and how it maps to your cloud load balancer model.

Phase 2: Run Gateway Alongside Ingress

Most clusters can run both at once. Separate namespaces, separate external addresses, explicit class selection.

Keep the old ingress path alive until dashboards show equivalence under real load.
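
A sketch of the candidate side; the namespace, class, hostname, and Secret name are placeholders. The existing Ingress keeps its ingressClassName and stays untouched:

```yaml
# The candidate edge runs beside the old one: its own namespace, its own
# class, its own external address. Nothing here touches the live Ingress path.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge-candidate
  namespace: gateway-canary
spec:
  gatewayClassName: example-class
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      hostname: "*.canary.example.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-canary-example-com
```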

Phase 3: One Platform Gateway, Then Migrate One Service

I prefer one shared Gateway owned by the platform team, at least at the start. App teams attach HTTPRoutes from their namespaces with explicit hostname ownership.

  • Platform-owned Gateway: Listener on 443, hostname wildcard you actually own, allowedRoutes restricted by namespace selector.
  • App-owned HTTPRoute: Hostname, path matches, backendRefs with weights for controlled cutover testing.
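
The app-owned side can look like this; route name, namespaces, hostname, and weights are all illustrative:

```yaml
# App-owned HTTPRoute: attaches to the platform Gateway and splits traffic
# by weight for the cutover test.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout
  namespace: team-checkout
spec:
  parentRefs:
    - name: shared-edge
      namespace: platform-gateways
  hostnames:
    - checkout.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: checkout-stable
          port: 8080
          weight: 90
        - name: checkout-canary
          port: 8080
          weight: 10
```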

Phase 4: Cut Traffic with DNS or Weights

DNS cutover stays low drama if you manage TTLs and have a rollback plan. Keep the old path ready. Roll back the moment error budgets start burning.

I don't trust "known issues: none" from any project. Build your own checks and pick a rollback trigger before you start.

Phase 5: Translate Annotations into Policies

This is where migrations fail. Teams copy every annotation into a controller-specific policy CRD and then nobody can reason about behavior.

Decide which policies platform owns (timeouts, max body size, TLS floor, WAF), which policies tenants can tune, and how you enforce that boundary with admission and RBAC.
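
As one example, a controller-specific read-timeout annotation can become a spec-level timeout on the route itself. A sketch; the values are illustrative, and timeout support is exactly the kind of thing to check in your controller's conformance report:

```yaml
# One annotation retired: a per-route request timeout expressed in the
# Gateway API spec instead of a controller dialect. Names are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout
  namespace: team-checkout
spec:
  parentRefs:
    - name: shared-edge
      namespace: platform-gateways
  rules:
    - timeouts:
        request: 10s
      backendRefs:
        - name: checkout
          port: 8080
```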

Production Pitfalls (The 2am List)

  • Annotation sprawl returns: Set a small supported policy set and block random escape hatches.
  • Delegation foot-guns: Prevent hostname collisions, lock down certificate secret references.
  • Catch-all behavior surprises: Old Ingress default backends often hide broken hostnames.
  • Traffic splitting under load: Retries and HTTP/2 multiplexing can skew "weights."
  • Upgrades magnify instability: Fix node churn before you move edge traffic onto a new controller.

Bottom Line

Use Service type LoadBalancer when you need L4 exposure with minimal abstraction.

Use Ingress when your HTTP routing stays basic and your annotation set already behaves like a governed standard.

Use Gateway API when you need multi-team delegation, typed routes (including gRPC), and policy attachment that doesn't turn annotations into your real API.

FAQ

Should I migrate from Ingress to Gateway API in 2026?
If you're starting fresh, absolutely use Gateway API. If you have working Ingress configs, migrate only when you hit a concrete limitation.

Can I run Ingress and Gateway API side by side?
Yes, and this is the recommended migration approach. Most controllers support both APIs simultaneously.

What's the difference between Gateway API and a service mesh?
Gateway API handles north-south traffic (external to cluster). Meshes handle east-west (service-to-service). They complement each other.

