Kubernetes Networking Is Changing
Kubernetes v1.35 marks an important turning point for cluster networking. The IPVS backend for kube-proxy has been officially deprecated, and future Kubernetes releases will remove it entirely. If your clusters still rely on IPVS, the clock is now very much ticking.
Staying on IPVS is not just a matter of running older technology. As upstream support winds down, IPVS receives less testing, fewer fixes, and less attention overall. Over time, this increases the risk of subtle breakage, makes troubleshooting harder, and limits compatibility with newer Kubernetes networking features. Eventually, upgrading Kubernetes will force a migration anyway, often at the worst possible time.
Migrating sooner gives you control over the timing, space to test properly, and a chance to avoid turning a routine upgrade into an emergency networking change.
Calico and the Path Forward
Project Calico’s pluggable data plane architecture is what makes this transition possible without redesigning cluster networking from scratch. Calico supports a wide range of technologies, including eBPF, iptables, IPVS, Windows HNS, VPP, and nftables, allowing clusters to choose the most appropriate backend for their environment. This flexibility lets clusters evolve alongside Kubernetes rather than being locked into a single implementation.
When Tigera added IPVS support in 2019, it addressed real scalability limitations in iptables and became the right choice for many large clusters. Today, that same flexibility provides a clean and fully supported path forward as Kubernetes standardizes on nftables.
Why a Migration?
To understand why this shift is happening, we need to look at the evolution of the Linux networking stack:
Iptables: While highly reliable, iptables suffered from a global lock bottleneck and O(N) complexity. In large clusters, even a simple rule update required reloading the entire ruleset, causing massive latency and high CPU usage.
The Fix (IPVS): IPVS was adopted to solve these scaling issues. It offered O(1) matching performance using hashmaps, making it much faster for services. However, it required maintaining a completely separate kernel subsystem, creating significant technical debt and a lot of work to achieve feature parity.
The Future (nftables): nftables is the modern successor to both of its predecessors. It combines the performance benefits of IPVS (fast, scalable packet classification) with the flexibility of iptables, all within a unified, modern kernel API.
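As a rough illustration of why nftables scales, here is a sketch of an nftables verdict map, the constant-time lookup construct that kube-proxy's nftables mode builds its service dispatch on. All table, chain, and element names below are illustrative, not what kube-proxy actually generates:

```
table ip example {
  # One hashed lookup replaces a linear walk through per-service rules.
  map service-ips {
    type ipv4_addr . inet_proto . inet_service : verdict
    elements = { 10.96.0.10 . tcp . 53 : goto service-dns }
  }
  chain service-dns {
    # DNAT to backend endpoints would go here.
  }
  chain services {
    # Dispatch on (dest IP, protocol, dest port) via the map above.
    ip daddr . meta l4proto . th dport vmap @service-ips
  }
}
```

Adding a service means adding one map element, not reloading and re-evaluating an entire ruleset as with iptables.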
Learn more about this shift by watching How the Tables Have Turned: Kubernetes Says Goodbye to Iptables.
Migration Guide – Prerequisites
Before starting the migration, ensure your environment meets the following requirements. This helps guarantee a smooth transition and reduces the risk of service disruption:
- Linux kernel: compiled with nftables support (the nf_tables subsystem).
- Kubernetes: v1.31 or higher.
Recommended: Perform the networking backend change during a maintenance window.
Note: Nftables is widely supported across modern Linux distributions, so most clusters should already meet these prerequisites.
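The kernel prerequisite can be checked on each node with a short script. A minimal sketch, assuming a standard Linux layout (the kernel config path and module listing vary by distro):

```shell
# Check whether this node's kernel supports nftables. Looks at the kernel
# build config first, then at loaded modules; -qs suppresses errors for
# paths that don't exist on this distro.
NFT_STATUS="nftables support: not detected (check your distro kernel)"
if grep -qsE 'CONFIG_NF_TABLES=(y|m)' "/boot/config-$(uname -r)"; then
  NFT_STATUS="nftables support: enabled in kernel config"
elif grep -qs '^nf_tables ' /proc/modules; then
  NFT_STATUS="nftables support: nf_tables module loaded"
fi
echo "$NFT_STATUS"
```

On most modern distributions the first or second branch will match; the fallback message flags nodes that need a closer look.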
Verify the Current Mode
Before making any changes to kube-proxy or Calico, it’s important to confirm whether your cluster is currently running in IPVS mode. Knowing the current mode helps you plan the migration accurately and ensures that Calico is behaving as expected.
To confirm if your cluster is currently in IPVS mode, check the kube-proxy logs:
kubectl logs -n kube-system daemonset/kube-proxy | grep -i ipvs
You should see output like:
I0103 01:18:49.979100 1 server_linux.go:253] "Using ipvs Proxier"
In Kubernetes v1.35+, you may also see this deprecation message:
"The ipvs proxier is now deprecated and may be removed in a future release. Please use 'nftables' instead."
If kube-proxy is running in IPVS mode, Calico detects this automatically and enables its own IPVS support so that it works correctly alongside IPVS-based services.
You can verify this by using the following command:
kubectl logs -n calico-system daemonset/calico-node | grep -i ipvs
Example output:
2026-01-03 03:09:52.996 [INFO][71] felix/driver.go 85: Kube-proxy in ipvs mode, enabling felix kube-proxy ipvs support.
Migrate Kube-Proxy to NFTables
As the kube-proxy deprecation message above shows, the upstream Kubernetes recommendation is to switch from IPVS to nftables.
Update the ConfigMap
You need to update the mode parameter in the kube-proxy ConfigMap.
kubectl edit configmap -n kube-system kube-proxy
Locate the mode configuration (usually found within the config.conf data block) and change it from ipvs to nftables:
mode: nftables
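The same edit can be applied non-interactively. The sketch below demonstrates the substitution on a stand-in config file; on a live cluster you would round-trip the ConfigMap through kubectl rather than edit a temp file, and the sample content is a minimal illustration, not the full kube-proxy configuration:

```shell
# Write a minimal stand-in for the kube-proxy config.conf data block.
cat > /tmp/kube-proxy-config.conf <<'EOF'
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
# Switch the proxy mode from ipvs to nftables in place.
sed -i 's/^mode: ipvs$/mode: nftables/' /tmp/kube-proxy-config.conf
grep '^mode:' /tmp/kube-proxy-config.conf
# prints: mode: nftables
```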
Restart Kube-Proxy
Changes to the ConfigMap do not apply automatically. You must restart the DaemonSet to pick up the changes.
kubectl rollout restart -n kube-system daemonset/kube-proxy
Verify Kube-Proxy Migration
Once the pods restart, check the logs to confirm the new mode is active:
kubectl logs -n kube-system daemonset/kube-proxy | grep -i nftables
Switch Calico to NFTables
After updating kube-proxy, you must instruct the Calico data plane to switch to NFTables mode. This is done by patching the Tigera Operator’s installation resource.
Step 1: Patch the Installation
Run the following command to update the Linux data plane mode:
kubectl patch installation default --type=merge -p '{"spec":{"calicoNetwork":{"linuxDataplane":"Nftables"}}}'
Step 2: Verify Calico Migration
The Tigera operator will initiate a rolling restart of all calico-node pods. Once complete, verify the change in the logs:
kubectl logs -f -n calico-system daemonset/calico-node | grep -i nftables
Output:
2026-01-03 01:25:07.803 [INFO][837] felix/config_params.go 805: Parsed value for NFTablesMode: Enabled (from datastore (global))
Switch to Calico eBPF (High Performance)
For even higher performance, consider migrating to the Calico eBPF data plane, which bypasses kube-proxy entirely and provides:
- Lower latency than both iptables and IPVS.
- Source IP preservation.
- Direct Server Return (DSR) capabilities.
Note: Make sure to change your kube-proxy mode to iptables before switching to eBPF.
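For reference, the eBPF switch uses the same Installation patch mechanism shown earlier for nftables, with linuxDataplane set to "BPF". The sketch below only assembles and sanity-checks the patch payload; applying it requires a live cluster running the Tigera operator, and the "BPF" value should be verified against your operator version's Installation API:

```shell
# Build the merge patch that switches Calico's Linux data plane to eBPF.
PATCH='{"spec":{"calicoNetwork":{"linuxDataplane":"BPF"}}}'
# Sanity-check the payload before applying it to a cluster.
echo "$PATCH" | grep -q '"linuxDataplane":"BPF"' && echo "patch payload ready"
# To apply on a real cluster:
#   kubectl patch installation default --type=merge -p "$PATCH"
```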
Learn more about the Calico eBPF data plane in the video Replace Kubernetes kube-proxy now!
Next Steps for a Future-Proof Cluster
Migrating away from IPVS removes the technical debt of a deprecated backend and positions your cluster for stable, secure, and predictable networking. Whether you follow the standard nftables path or take the performance leap with the Calico eBPF data plane, your cluster will be ready for the next generation of Kubernetes networking.
Taking these steps ensures your cluster isn’t just compliant with Kubernetes v1.35; it’s also optimized for performance, reliability, and growth.
Join the Conversation!
Join the Project Calico Slack to ask questions, share experiences, and learn from other operators navigating these networking changes.
Further Reading & Resources
For additional context on Calico’s nftables support and high-performance networking options, check out these resources:
- Install Calico (v3.30+) — How to install Calico on an existing working cluster.
- What’s New in Calico v3.31: eBPF, NFTables, and More — Overview of nftables adoption and Calico enhancements.
- High-Performance Kubernetes Networking with Calico eBPF — Learn how to migrate to the eBPF data plane for maximum performance.
- Calico: The Hard Way — Deep dive into Calico networking internals and Kubernetes integration.
The post From IPVS to NFTables: A Migration Guide for Kubernetes v1.35 appeared first on Tigera - Creator of Calico.