The Kubernetes landscape is shifting once again. For years, the community-maintained Ingress NGINX controller has been the default gateway for millions of clusters. It was the easy choice, the default setting, and the starting point for almost everyone. However, with the announcement of its retirement and impending End of Life, teams are now facing a mandatory migration. While there are many alternatives in the CNCF landscape, the most logical successor for those who wish to remain within the NGINX ecosystem is the official F5 NGINX Ingress Controller.
It is vital to clarify exactly which software we are discussing to avoid commercial confusion. This guide focuses strictly on migrating to the F5 NGINX Open Source Ingress Controller. This is the free, open-source version maintained by the NGINX engineering team at F5. It is not the paid NGINX Plus version. This version provides a production-grade traffic management solution without licensing fees, but it operates differently than the community version you are leaving behind.
This comprehensive guide will walk you through the architectural differences, the migration strategy, and the technical implementation details required to move your production workloads from kubernetes/ingress-nginx to nginxinc/kubernetes-ingress before the support window closes.
Understanding the Two Controllers
Before typing a single kubectl command, you must understand why a simple image swap is impossible. While both controllers use NGINX under the hood, their control planes are entirely different software projects.
The community version, which resides in the kubernetes GitHub organization, relies heavily on Lua scripts. It injects Lua code into the NGINX configuration to handle dynamic reconfiguration, traffic splitting, and metric collection. This architecture allowed for massive flexibility and a "kitchen sink" approach to features, but it also introduced complexity and occasional instability during reloads.
The F5 NGINX Open Source version, residing in the nginxinc GitHub organization, takes a different approach. It uses a Go-based control plane that generates native NGINX configuration files and avoids Lua scripting for core logic. This design philosophy aligns closely with upstream NGINX best practices and results in more stable, predictable behavior, but it also means that some of the "magic" annotations you used in the community version either do not exist or behave differently.
The most significant change you will encounter is the configuration philosophy. The community version relied almost exclusively on standard Kubernetes Ingress resources decorated with dozens of annotations. The F5 Open Source version supports standard Ingress resources, but it encourages the use of Custom Resource Definitions, specifically the VirtualServer and VirtualServerRoute resources. These CRDs provide a structured, type-safe way to configure complex routing without relying on brittle annotations.
Phase 1: The Audit and Inventory
Do not start by installing the new controller. Start by auditing your current estate. Because the annotation syntax is different, every single Ingress resource in your cluster will need to be modified.
Run a comprehensive audit of all Ingress resources. You are looking specifically for the metadata.annotations section. Identify which annotations are in use. Common ones include rewrite targets, SSL redirects, proxy buffer sizing, and client body size limits. You need to map these to their counterparts in the F5 ecosystem.
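One way to pull that inventory together is a rough one-liner like the following, assuming kubectl and jq are available on your workstation; adjust the output shape to taste:

```bash
# List every Ingress across all namespaces with its annotations, one JSON object per line
kubectl get ingress --all-namespaces -o json \
  | jq -c '.items[] | {namespace: .metadata.namespace, name: .metadata.name, annotations: .metadata.annotations}'
```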
Pay special attention to the nginx.ingress.kubernetes.io/configuration-snippet annotation. This was a popular "escape hatch" in the community version that allowed users to inject raw NGINX config lines directly into the location block. The F5 Open Source controller is much stricter about security. While it does support snippets via nginx.org/server-snippets or nginx.org/location-snippets, relying on them is discouraged. This migration is the perfect opportunity to refactor those snippets into proper configuration fields if possible.
You must also identify any reliance on third-party modules. The community image included several modules like ModSecurity or OpenTracing by default. The F5 Open Source image is leaner. If you rely on specific modules, you may need to build a custom image based on the official F5 source, although the standard image covers the vast majority of use cases.
Phase 2: Side-by-Side Deployment
The only safe way to migrate is a side-by-side deployment. You will install the F5 NGINX Open Source controller in the same cluster as your existing community controller but in a different namespace. They will run simultaneously, allowing you to migrate applications one by one.
You will use Helm to install the F5 version. It is critical to configure the ingressClass correctly so it does not conflict with your existing controller. The community version typically claims the class named nginx. We will configure the F5 version to claim a class named nginx-f5.
Add the NGINX Stable Helm repository. Ensure you are using the official repo from F5 NGINX. When you configure your values.yaml for the Helm chart, verify that you are pulling the open-source image, which is published on Docker Hub as nginx/nginx-ingress.
In your Helm configuration, ensure you set controller.ingressClass.name to nginx-f5. You should also set controller.ingressClass.create to true and controller.setAsDefaultIngress to false. It is imperative that you do not set the new controller as the default yet, or you might accidentally hijack traffic from new deployments meant for the old controller.
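Putting those pieces together, an install along the following lines should work. The repository URL and value names reflect recent versions of the official nginx-ingress chart (older chart releases used a flat controller.ingressClass string instead of the nested ingressClass block), so double-check them against the chart version you actually pull:

```bash
# Add the official F5 NGINX Helm repository and refresh the index
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update

# Install the open-source controller in its own namespace under a non-conflicting class
helm install nginx-f5 nginx-stable/nginx-ingress \
  --namespace nginx-f5 --create-namespace \
  --set controller.nginxplus=false \
  --set controller.image.repository=nginx/nginx-ingress \
  --set controller.ingressClass.name=nginx-f5 \
  --set controller.ingressClass.create=true \
  --set controller.setAsDefaultIngress=false
```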
Once deployed, you will have two LoadBalancer services. One is your existing production entry point. The other is the new F5 entry point. You will likely see a new External IP assigned to the F5 controller service. This IP is your testing ground.
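A quick look at the services in the new namespace shows the address once your cloud provider has provisioned it:

```bash
# The EXTERNAL-IP column on the new controller's LoadBalancer service is your testing target
kubectl get svc -n nginx-f5
```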
Phase 3: Translating Configuration
This is the most labor-intensive part of the process. You must translate your Ingress resources. You have two paths: sticking with standard Ingress resources or adopting the VirtualServer CRD.
If you choose to stick with standard Ingress resources, you must swap the annotation prefixes. The community version uses nginx.ingress.kubernetes.io/. The F5 version uses nginx.org/. For example, nginx.ingress.kubernetes.io/proxy-connect-timeout becomes nginx.org/proxy-connect-timeout.
However, not all mappings are 1-to-1. The most notorious difference is URL rewriting. In the community version, you likely used a regular expression in the path and a rewrite-target annotation like /$2 to strip prefixes. The F5 Open Source controller handles this differently. It does not support regex capture groups in the same way within standard Ingress annotations. Instead, you define the path prefix and use the nginx.org/rewrites annotation to explicitly map a service to a rewrite path.
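As an illustration of that difference, here is a rough before-and-after for a prefix-stripping Ingress; the hostname and service names are placeholders:

```yaml
# Before: community controller, regex path plus a capture-group rewrite
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-old
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: "/api(/|$)(.*)"
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-svc
            port:
              number: 80
---
# After: F5 controller, plain prefix path plus an explicit per-service rewrite mapping
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-new
  annotations:
    nginx.org/rewrites: "serviceName=api-svc rewrite=/"
spec:
  ingressClassName: nginx-f5
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
```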
Because of these nuances, it is highly recommended to adopt the VirtualServer CRD path. The VirtualServer resource is a native Kubernetes custom resource designed by the NGINX team. It removes the need for "annotation soup."
In a VirtualServer resource, you define upstreams and routes. Rewrites are handled as a first-class field called rewritePath. SSL settings are handled in a tls block. This structure is readable and validated by the Kubernetes API server before it ever reaches the NGINX configuration. This prevents invalid configurations from crashing the controller, a scenario that was unfortunately common with the community version's Lua implementation.
Here is a conceptual example of the shift. In the old world, you might have an Ingress with a complex regex path and three annotations to handle CORS and rewrites. In the new world with VirtualServer, you have a YAML file that looks almost like a standard NGINX config file but structured as Kubernetes YAML. You define a route for /api, set the action to pass to your upstream service, and add a rewritePath field. It is cleaner and easier to debug.
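A minimal VirtualServer along those lines might look like the sketch below; the host, secret, and service names are placeholders, and the tls block is optional:

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
  namespace: default
spec:
  ingressClassName: nginx-f5
  host: app.example.com
  tls:
    secret: app-tls            # TLS secret living in the same namespace
  upstreams:
  - name: api
    service: api-svc
    port: 80
  routes:
  - path: /api
    action:
      proxy:
        upstream: api
        rewritePath: /         # strips the /api prefix before proxying
```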
Phase 4: Handling TLS and Secrets
The way the two controllers handle TLS secrets is similar, but there is a catch regarding default certificates. The community controller often generated a fake self-signed certificate if a secret was missing. The F5 Open Source controller expects valid secrets.
Ensure your TLS secrets are present in the same namespace as the VirtualServer or Ingress resource. The F5 controller supports TLS termination just as you would expect. If you are using cert-manager, the integration remains largely the same. You will continue to use the cert-manager annotations on the Ingress resource to request certificates.
If you switch to VirtualServer CRDs, cert-manager can still be used, but you often need to create a separate minimal Ingress resource specifically for the HTTP-01 challenge, or use DNS-01 validation. Alternatively, cert-manager has experimental support for Gateway API or specific support for NGINX CRDs depending on the version, but keeping a "shim" Ingress for certificate issuance is a common pattern.
Phase 5: The Migration Workflow
With your new controller running and your configurations translated, you are ready to move traffic. Do not attempt to move everything at once. Pick a low-criticality service to start.
Create the new VirtualServer (or updated Ingress) for that service. In the spec, ensure you are targeting the nginx-f5 ingress class. Apply the resource.
At this stage, the application is exposed via both controllers. The old controller is serving traffic via the old LoadBalancer IP. The new F5 controller is ready to serve traffic via the new LoadBalancer IP.
You cannot rely on DNS yet. You must verify the behavior manually. Use curl to send requests to the new LoadBalancer IP, but manually set the Host header to match your application's domain name.
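In practice that looks something like the following, where 203.0.113.10 stands in for the new LoadBalancer IP, app.example.com for your domain, and the path is just a representative endpoint:

```bash
# Plain HTTP: point at the new IP and override the Host header
curl -H "Host: app.example.com" http://203.0.113.10/api/health

# HTTPS: --resolve pins the hostname to the new IP so SNI and certificate
# validation behave exactly as they will after the DNS cutover
curl --resolve app.example.com:443:203.0.113.10 https://app.example.com/api/health
```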
Verify that the routing works. Verify that rewrites are stripping prefixes correctly. Verify that SSL termination is presenting the correct certificate. Check the logs of the F5 controller pod. The F5 logs are generally less verbose than the community version, but they will clearly indicate if a configuration was rejected.
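Two commands cover most of that verification: the controller logs, and the status the controller writes back onto each VirtualServer resource:

```bash
# Find the controller pod and follow its logs; rejected or warning configurations are reported here
kubectl get pods -n nginx-f5
kubectl logs -n nginx-f5 <controller-pod-name> -f --tail=100

# The STATE column shows whether each VirtualServer was accepted (Valid, Warning, or Invalid)
kubectl get virtualservers --all-namespaces
```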
Once you are satisfied with the curl tests, you have a decision to make regarding the cutover.
Phase 6: DNS Cutover and Traffic Shifting
For the actual cutover, you will update your DNS records. Change the A record (or CNAME) for your application to point to the LoadBalancer IP of the new F5 controller.
DNS propagation takes time. During this TTL window, some users will hit the old controller, and some will hit the new one. Since both controllers point to the same backend Kubernetes Services, the application itself should handle this seamlessly. User sessions should ideally be stored in an external store like Redis to prevent session loss during the switch, but this is a general best practice regardless of ingress migration.
If you require a more granular rollout, you can use weighted DNS if your DNS provider supports it. Route 5% of traffic to the new IP and monitor error rates. If the F5 controller metrics show a spike in 4xx or 5xx errors, revert the DNS change immediately.
Phase 7: Advanced Considerations and Gotchas
There are several specific behaviors where the F5 Open Source controller diverges from the community version.
One major area is Header manipulation. The community version made it very easy to add, remove, or change headers via annotations. The F5 Open Source version supports this as well, but the syntax in VirtualServer is more structured. You will use requestHeaders and responseHeaders actions within your route definitions.
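As a sketch, a route with header manipulation looks like this; the snippet belongs under spec.routes of a VirtualServer such as the one shown earlier, and the header names are only examples:

```yaml
routes:
- path: /api
  action:
    proxy:
      upstream: api
      requestHeaders:
        set:
        - name: X-Forwarded-Proto
          value: https
      responseHeaders:
        add:
        - name: X-Frame-Options
          value: DENY
        hide:
        - Server
```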
Another gotcha is WebSocket support. The community version required specific annotations to increase timeouts for WebSockets. The F5 version generally handles WebSockets natively in VirtualServer resources (for standard Ingress resources, list the affected services in the nginx.org/websocket-services annotation), but you still need to ensure the timeouts in your VirtualServer definition are long enough to prevent idle connections from being closed prematurely.
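The per-upstream timeout fields are the place to raise those limits; a sketch, which goes under spec.upstreams with illustrative values:

```yaml
upstreams:
- name: ws-backend
  service: ws-svc
  port: 8080
  read-timeout: 3600s   # keep idle WebSocket connections open for up to an hour
  send-timeout: 3600s
```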
Rate limiting is another area of difference. The community version exposed annotations such as limit-rpm and limit-rps, enforced with an in-memory zone local to each controller pod. The F5 Open Source controller supports basic rate limiting, but advanced, cluster-wide rate limiting is generally a feature of the commercial NGINX Plus version. Verify that the basic rate limiting capabilities of the Open Source version meet your needs; if you relied heavily on complex rate limiting logic in the community controller, you might need to implement that logic at the application layer or via a service mesh sidecar. A Policy-based sketch is shown below.
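For reference, basic per-pod rate limiting in the open-source controller is expressed through the Policy CRD and attached to a VirtualServer via its policies list. The API version may be v1alpha1 on older controller releases, and the names below are placeholders:

```yaml
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: basic-rate-limit
spec:
  rateLimit:
    rate: 10r/s                     # enforced per controller pod, not cluster-wide
    key: ${binary_remote_addr}      # limit per client IP
    zoneSize: 10M
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: app
spec:
  host: app.example.com
  policies:
  - name: basic-rate-limit
  upstreams:
  - name: api
    service: api-svc
    port: 80
  routes:
  - path: /
    action:
      pass: api
```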
Finally, consider the scope of the Ingress Class. By default, the F5 controller watches all namespaces. If you are running a multi-tenant cluster, you might want to restrict the controller to watch only specific namespaces. This is configured via the command-line arguments in the Helm chart deployment.
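In the Helm chart this is exposed as a value that maps to the controller's -watch-namespace argument; the value name may differ between chart versions, so check the chart's values file, but it is typically along these lines (the namespace name is a placeholder):

```bash
# Restrict the controller to a single namespace (comma-separate for more, depending on version)
helm upgrade nginx-f5 nginx-stable/nginx-ingress -n nginx-f5 \
  --reuse-values \
  --set controller.watchNamespace=team-a
```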
Phase 8: Decommissioning
Once you have migrated all applications and the DNS TTLs have fully expired, the old community controller should be sitting idle. Check its logs to ensure no traffic is being processed.
You can now delete the old Ingress resources. After that, you can uninstall the community controller helm chart. Finally, you can delete the nginx ingress class (or whatever the old class was named) and promote the nginx-f5 class to be the default class for the cluster.
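The teardown itself is a handful of commands; the release and namespace names below are the common community-chart defaults, so substitute whatever your original install used:

```bash
# Remove the now-unused community Ingress resources (repeat per namespace/resource)
kubectl delete ingress old-app-ingress -n my-app

# Uninstall the community controller and remove its IngressClass
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete ingressclass nginx
```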
To promote the class, update your F5 Helm release to set controller.ingressClass.name to nginx (if you want to revert to the standard name), or simply set controller.setAsDefaultIngress to true on the existing nginx-f5 class. Changing the class name usually requires a redeployment, so sticking with the new name and just making it the default is often safer.
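Promoting the existing class to default is then a single Helm upgrade, roughly:

```bash
helm upgrade nginx-f5 nginx-stable/nginx-ingress -n nginx-f5 \
  --reuse-values \
  --set controller.setAsDefaultIngress=true
```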
Conclusion
The retirement of the community Ingress NGINX controller is a significant event in the Kubernetes lifecycle. It forces a move that many teams have been putting off. While the migration to the F5 NGINX Open Source controller requires effort, specifically in rewriting configurations and testing, the destination is a more stable and professional platform.
By moving to the F5 Open Source controller, you align your infrastructure with the upstream NGINX roadmap. You gain access to the VirtualServer CRDs, which offer a superior way to manage complex routing compared to the old annotation-based approach.
Do not wait until the final weeks before the End of Life date. This migration involves touching every public-facing endpoint in your cluster. Start the audit now, spin up the side-by-side environment, and begin the process of moving your workloads to the modern, supported standard for NGINX on Kubernetes. The clarity and stability gained from the VirtualServer resource alone make the effort worthwhile, ensuring your cluster remains secure and performant for years to come.