DEV Community

NTCTech

Posted on • Originally published at rack2cloud.com

Kubernetes Is Moving Past Ingress. Most Clusters Aren't.

The Kubernetes Gateway API project is not forcing you to migrate away from Ingress NGINX. There is no hard cutoff date, no deprecation warning in your cluster logs, no upgrade blocker. The project has simply moved on — and that quiet, undramatic shift is exactly what makes it operationally dangerous.

Ingress NGINX is no longer where the ecosystem is investing. The upstream Kubernetes project has signaled its preference for Gateway API as the path forward for L4 and L7 routing. Security patches will still land. Critical bugs will still get fixed. But active development, new capabilities, and ecosystem investment have already moved elsewhere. The clusters that miss this signal will not break immediately. They will drift — accumulating annotation complexity, diverging routing patterns, and tooling debt that compounds silently until the migration cost becomes a project.

This is a Day-2 problem before it becomes a Day-1 problem. The Kubernetes Day-2 failure patterns follow a consistent signature: the ecosystem moves, the cluster doesn't, and the gap between them becomes the failure surface. Ingress is following the same pattern.

What Kubernetes Gateway API Actually Changes

Kubernetes Gateway API is not Ingress with a new name. That distinction matters because treating it as a drop-in replacement misses the architectural shift entirely.

Ingress is a single resource type owned by whoever manages the cluster. One team configures it, one team troubleshoots it, and the boundary between platform and application responsibility is buried in annotations — inconsistently, across clusters, by whoever touched it last.

Diagram: the Kubernetes Ingress single-resource model with unclear ownership boundaries versus the Gateway API role-separated model, with distinct Gateway and HTTPRoute objects owned by platform and application teams respectively.

Gateway API introduces role separation as a first-class architectural concept. The Gateway resource is owned by the platform team — it defines the infrastructure-level listener, the TLS termination, and the attachment policy. The HTTPRoute is owned by the application team — it defines the routing logic for their specific service. These are different Kubernetes objects, in different namespaces, with different RBAC boundaries. Platform teams can enforce policies at the Gateway level without touching application routing. Application teams can modify their routing without touching cluster-level infrastructure.
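A minimal sketch of that split, with illustrative names and namespaces: the platform team owns the Gateway and decides which namespaces may attach routes to it, while the application team owns an HTTPRoute in its own namespace.

```yaml
# Platform-owned: listener, TLS termination, and attachment policy.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-gateway-class   # provided by your controller
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-cert
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: "true"
---
# Application-owned: routing logic only, in the app's own namespace.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout-route
  namespace: checkout   # namespace labeled gateway-access: "true"
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /checkout
      backendRefs:
        - name: checkout-svc
          port: 8080
```

The RBAC boundary falls exactly where the object boundary falls: the application team never needs write access to the `infra` namespace, and the platform team never edits an HTTPRoute.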

This is not a configuration change. It is a model change — from a shared, flat resource to a layered, role-aware architecture. Just as the service mesh vs eBPF debate forced teams to rethink where policy lives in the stack, Gateway API forces teams to rethink who owns routing and at what layer.

Gateway API also introduces more expressive routing — header-based routing, traffic splitting, and backend policy attachment — without the annotation explosion that Ingress configurations accumulate over time. What takes 15 lines of provider-specific annotations in Ingress takes 4 lines of typed spec in HTTPRoute.
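As an illustrative comparison (service and host names are hypothetical), a 10% canary split that requires provider-specific annotations in Ingress NGINX is a typed, validated field in HTTPRoute:

```yaml
# Ingress NGINX: a 10% canary expressed through untyped annotations.
# (The canary pattern also requires a paired primary Ingress; omitted here.)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 8080
---
# Gateway API: the same split as a typed weight on backendRefs.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - api.example.com
  rules:
    - backendRefs:
        - name: api-v1
          port: 8080
          weight: 90
        - name: api-v2
          port: 8080
          weight: 10
```

The annotation values are opaque strings the API server cannot validate; the `weight` fields are schema-checked at admission time.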

The Day-2 Impact

The operational drift from staying on Ingress NGINX is not theoretical. It shows up in specific, measurable ways.

Annotation sprawl is the most visible symptom. Ingress NGINX's power comes from its annotation model — and that model has no governance layer. Teams add annotations to solve immediate problems. Those annotations accumulate across services, creating routing configurations that are difficult to audit, impossible to validate without deploying, and brittle under upgrade. Intermittent 502 errors that get blamed on MTU or DNS misconfigurations are frequently rooted in annotation interactions that nobody fully understands anymore.

Tooling fragmentation is the second symptom. As the Ingress NGINX ecosystem stagnates, the adjacent tooling — policy enforcement, traffic observability, routing validation — will increasingly assume Gateway API as the baseline. Teams running Ingress NGINX will find themselves excluded from new capabilities not through breaking changes, but through the quieter mechanism of features simply not being built for their stack.

Inconsistency across clusters is the third. Multi-cluster environments running different Ingress controllers, different annotation conventions, and different upgrade cadences are a platform engineering problem waiting to be scoped. Gateway API's typed, role-aware model is dramatically easier to standardize across clusters than Ingress's flat, annotation-based model.

The Trap

The pattern here is familiar. The VMware exit ramp was visible years before it became urgent — organizations that treated it as a future problem paid more for the migration than organizations that treated it as a current architectural decision. The egress cost modeling problem was the same: teams that modeled egress late discovered their architecture had already locked in the expensive paths.

"We'll migrate when we have to" is a reasonable operational policy for many things. For foundational routing infrastructure that touches every service in the cluster, it is a deferral that compounds. The migration from Ingress annotations to Gateway API HTTPRoutes is not technically complex — but it requires reviewing every routing configuration in the cluster, which takes time proportional to how long the annotations have been accumulating.

The teams that migrate with a plan will do it in a controlled sprint. The teams that migrate under pressure will do it during an incident.

What to Actually Do Now

This does not require an immediate migration. It requires three decisions made now rather than later.

Diagram: a Kubernetes Gateway API migration decision framework showing three postures — Stabilize, Abstract, and Migrate — based on annotation complexity and cluster profile, with Audit as the prerequisite step.

Audit your Ingress usage. How many services are behind Ingress? How many unique annotations are in use? How many of those annotations are provider-specific? The audit output tells you the migration complexity. If you have 200 services with 3 standard annotations each, the migration is straightforward. If you have 40 services with 15 custom annotations each, you have a platform refactoring project.
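One way to start that audit is a quick annotation census. This is a sketch assuming `kubectl` and `jq` are available; it lists every annotation key in use across all Ingress objects, with a usage count, most common first.

```shell
# Count unique Ingress annotation keys across all namespaces.
# Errors (e.g. no cluster access) are suppressed so the pipeline degrades quietly.
kubectl get ingress -A -o json 2>/dev/null \
  | jq -r '.items[].metadata.annotations // {} | keys[]' \
  | sort | uniq -c | sort -rn
```

Anything prefixed `nginx.ingress.kubernetes.io/` in the output is provider-specific and is the part of your migration surface that will not translate mechanically to HTTPRoute fields.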

Identify your complexity ceiling. New services being deployed today — do they go on Ingress or Gateway API? Drawing a line at new service deployments costs nothing and begins the natural transition. Existing services with low routing complexity are migration candidates. Services with deeply customized annotation configurations are stabilization candidates — keep them on Ingress NGINX until the migration can be scoped properly.

Decide your posture. Three options, all legitimate depending on your environment:

  • Stabilize — keep existing services on Ingress NGINX, adopt Gateway API for new deployments
  • Abstract — build a platform engineering layer that targets both, migrate incrementally
  • Migrate — commit to a full migration on a defined timeline

What is not a posture is continuing to add complexity to Ingress configurations without a plan for where that complexity goes.

Kubernetes Gateway API is not a deadline. It is a direction. Get ahead of this one now, while the migration is a choice and not a constraint.


Architect's Verdict

The clusters that treat Gateway API as a future problem are making the same calculation that made VMware exits expensive and egress costs invisible until they weren't. The shift is not urgent in the way a CVE is urgent. It is urgent in the way technical debt is urgent — quietly, compounding, until the migration that could have been a sprint becomes a project.

Ingress NGINX is not broken. It will keep working. The question is not whether your cluster runs today — it is whether the routing architecture you are building today will be where the ecosystem is in two years. Gateway API is already there. Most clusters aren't.

Start with the audit. Everything else follows from what you find.


FAQ

Q: We use a managed Ingress controller from our cloud provider — does this apply to us?

Yes. The role separation model and ecosystem investment shift apply regardless of whether you're running open source Ingress NGINX, the AWS Load Balancer Controller (formerly the ALB Ingress Controller), GKE's managed Ingress, or any other provider-specific implementation. Most cloud providers already have Gateway API support in GA or beta — the AWS Gateway API Controller, GKE Gateway, and Azure Application Gateway for Containers all implement the spec.
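Whatever the provider, the binding point is the same: a GatewayClass names the controller implementation, and Gateways reference it by name. The `controllerName` below is a placeholder; each provider documents its own value.

```yaml
# A GatewayClass decouples Gateways from any one controller implementation.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: cloud-lb
spec:
  controllerName: example.com/gateway-controller  # placeholder; provider-specific
```

Swapping providers later means pointing Gateways at a different GatewayClass, not rewriting every route.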

Q: Can Gateway API and Ingress run in the same cluster simultaneously?

Yes — and that is actually the recommended transition approach. New services can adopt HTTPRoute and Gateway resources while existing services remain on Ingress. Both controllers run independently without conflict. This is how you turn a platform migration into a rolling operational improvement rather than a disruptive cutover.
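A sketch of what coexistence looks like during the transition (names illustrative): a legacy service keeps its Ingress unchanged while a new service adopts an HTTPRoute, each reconciled by its own controller.

```yaml
# Legacy service: stays on Ingress NGINX, untouched.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: legacy-app
spec:
  ingressClassName: nginx
  rules:
    - host: legacy.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: legacy-svc
                port:
                  number: 80
---
# New service: adopts Gateway API without touching the Ingress controller.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: new-app
spec:
  parentRefs:
    - name: shared-gateway
  hostnames:
    - new.example.com
  rules:
    - backendRefs:
        - name: new-svc
          port: 80
```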

Q: Is Kubernetes Gateway API stable enough for production?

Yes. The core Kubernetes Gateway API resources — GatewayClass, Gateway, and HTTPRoute — graduated to GA with the Gateway API v1.0 release in October 2023 and have been stable for over two years. The API ships as CRDs, so it installs independently of your Kubernetes version. Major implementations including Istio, Envoy Gateway, Cilium, Traefik, and all three major cloud providers support the stable API. In 2026, the question is not whether Gateway API is production-ready — it is whether your team has the operational familiarity with it yet.

