Organisations are re-evaluating their VM infrastructure. The economics have shifted, the tooling has matured, and the case for running two separate platforms, one for containers, one for VMs, is getting harder to justify. Platform teams that spent years managing hypervisor infrastructure are being asked to consolidate, and most are landing on the same answer: Kubernetes.
KubeVirt makes running VMs on Kubernetes possible. But KubeVirt networking – what happens to a VM’s IP address, VLAN, and security posture when it lands in a cluster – is where most migration plans hit a wall. The reasons for the move go beyond cost:
- Most enterprises already run Kubernetes. Containers are already there. Adding VMs to the same platform consolidates tooling, lifecycle management, networking models, and security policy into a single operational model.
- Two platforms means double the overhead. Separate infrastructure means separate upgrade cycles, separate monitoring, separate network configuration, and separate on-call runbooks. Platform consolidation has direct operational value.
- The technology is mature enough. KubeVirt has reached the point where it’s a viable production choice for enterprise VM workloads.
The decision to migrate is being made. The question is how to do it without causing chaos.
Introducing KubeVirt
KubeVirt extends the Kubernetes API with new custom resource types: VirtualMachine and VirtualMachineInstance. These make VMs first-class Kubernetes objects — scheduled, managed, and observable through the same tools and APIs as containers.
A VM running in KubeVirt runs inside a virt-launcher pod. Kubernetes schedules that pod to a node with available resources, the same way it schedules any other workload. The VM gets CPU and memory from the node. It doesn’t know it moved.
That’s the point: from the VM’s perspective, KubeVirt is invisible. The operating system keeps running. The application keeps running.
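To make that concrete, here’s a minimal sketch of a VirtualMachine manifest. The name, sizing, and disk are illustrative only; a migrated VM would typically boot from a PersistentVolume rather than a containerDisk.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm          # hypothetical name
spec:
  running: true                # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio    # paravirtualised disk bus
        resources:
          requests:
            memory: 2Gi
            cpu: "2"
      volumes:
        - name: rootdisk
          containerDisk:       # demo disk image; real migrations map to PersistentVolumes
            image: quay.io/containerdisks/fedora:latest

Applying this with kubectl creates the VM object; KubeVirt spins up the virt-launcher pod and boots the guest inside it.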

KubeVirt virt-launcher pods in Kubernetes
The network is a different story
When you migrate a VM, three things have to follow: compute, storage, and network. Compute and storage are properties of the VM itself — self-contained. KubeVirt handles them by giving the VM a new host and a new storage backend. The VM doesn’t notice.
| Dependency | What KubeVirt Does | Status |
| --- | --- | --- |
| **Compute** | VM runs in a virt-launcher pod. Kubernetes schedules it. | Solved |
| **Storage** | Disk images mapped to Persistent Volumes via migration tools. | Solved |
| **Network** | VM gets a new IP from the Kubernetes pod CIDR. | Not solved |
Dependencies of VM migrations
The network is different. The network isn’t a property of the VM. It’s a property of the VM’s relationship to everything else in the infrastructure.
A VM’s compute dependency is between the VM and its host. A VM’s storage dependency is between the VM and a storage backend. But a VM’s network dependency is between the VM and every other system that knows how to reach it.
That distinction is why networking is where VM migrations stall. This isn’t theoretical. KubeVirt’s own issue tracker documents the problem directly: a user reported their VM’s IP changing after live migration, and a project maintainer confirmed: “Sticky IPs is not implemented.” The network identity doesn’t follow the VM by default.

Lift-and-Shift VMs to Kubernetes with Calico L2 Bridge Networks
Why default KubeVirt networking breaks VM migrations
When a VM lands in Kubernetes using default pod networking:
- It receives a new IP address from the cluster’s pod CIDR, a range that exists only inside the cluster.
- The original VLAN doesn’t exist inside the cluster. Kubernetes has no native VLAN concept in default networking.
- Pod IPs are only meaningful inside the cluster. The upstream network has no direct visibility into them.
From the perspective of every system that previously knew the VM by its address, the VM has disappeared. Something with an unfamiliar IP has appeared inside a cluster that the upstream infrastructure can’t see into.
A VM’s IP address accumulates dependencies over time. By the time you’re migrating it, that IP is embedded in:
- Firewall rules — security teams wrote rules allowing or denying traffic to that specific address.
- DNS records — the hostname resolves to that IP.
- DHCP configuration — the IP is reserved for that VM’s MAC address.
- Monitoring and alerting — observability tools are configured to watch that address.
- Load balancer backends — upstream load balancers route traffic to that IP.
- Application configuration files — other services have that IP hardcoded.
- Compliance and audit documentation — security posture records reference that IP in that VLAN.
VLANs add another dimension. In enterprise environments, VLANs aren’t just a way to segment traffic; they’re security boundaries, designed and owned by the security team. Firewall rules are built around VLAN membership. Compliance frameworks reference VLAN placement. When the VM moves to Kubernetes with default networking, that VLAN disappears. The security boundary is gone.
None of this travels with the VM automatically. And every broken dependency requires a different team to fix it.
You can see this directly. Running kubectl exec into the virt-launcher pod of a migrated VM shows the interfaces KubeVirt creates with default pod networking:
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
    inet 10.60.141.196/32 scope global eth0
3: k6t-eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
    inet 10.0.2.1/24 scope global k6t-eth0
4: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master k6t-eth0
eth0 carries a Calico-assigned address from the pod CIDR, meaningful only inside the cluster. k6t-eth0 is KubeVirt’s internal masquerade bridge. tap0 connects to the VM’s virtual NIC. The VM’s original IP is gone. The upstream network sees 10.60.141.196, not the address any firewall rule, DNS record, or application config was written for.
A lift-and-shift becomes a multi-team project
Here’s what was planned: the platform team moves the VM. One team. The migration is invisible to the rest of the business.
Here’s what actually happens with default pod networking:
- The IP changes. The network team needs to rewrite firewall rules and update DNS.
- The VLAN disappears. The security team needs to review the new network placement and approve it.
- Application config breaks. The application team needs to update config files and hardcoded references.
Every one of these requires sign-offs, tickets, and coordination.
A migration budgeted as a lift-and-shift gets delivered as a network redesign. Per VM. At scale, the coordination cost makes migration impractical.
This is where VM migration to Kubernetes stalls, not because the technology doesn’t work, but because the organisational cost exceeds what anyone planned or funded.
How to preserve VM IP addresses and VLANs in Kubernetes
Think about what the problem really is. The VM had a home on the network. A specific IP, a specific VLAN, a specific place in the security model. When it moved to Kubernetes, that home disappeared. Default pod networking gave it a new address in a new network that nothing outside the cluster knows about.
Calico L2 Bridge Networks solve this by doing the opposite: instead of putting the VM on Kubernetes’s network, they bring the VM’s original network into Kubernetes. The physical VLAN the VM lived on is extended directly into the cluster via a bridge on the node, and the VM connects to that bridge through a secondary interface. Its original Layer 2 segment – IP address, VLAN, and MAC address – follows it, so the VM’s network identity survives the migration unchanged.
Nothing on the outside knows anything has changed. The firewall still talks to the same IP. DNS still resolves to the right place. The monitoring dashboard still shows the right host. The application that had the IP hardcoded still connects. The security team’s VLAN boundary still exists — it just now exists inside Kubernetes too.

L2 Bridge Mode with Calico by Tigera
You can see the difference at the interface level. With Calico L2 Bridge, that same virt-launcher pod now looks like this:
2: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
    inet 10.60.141.196/32 scope global eth0
3: k6t-eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
    inet 10.0.2.1/24 scope global k6t-eth0
4: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 master k6t-eth0
5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether 52:54:00:3a:7f:21 brd ff:ff:ff:ff:ff:ff
    inet 10.10.5.42/24 brd 10.10.5.255 scope global net1
net1 is the secondary interface connected to the L2 bridge that Calico manages on the node. That’s the VM’s original IP, 10.10.5.42, on its original subnet, with its original MAC address. The pod-side interfaces are still there – KubeVirt still needs them – but the VM’s actual network identity is preserved on net1. That’s the interface the rest of your infrastructure talks to.
Why a secondary interface and not the primary?
KubeVirt manages the VM’s primary network interface through the virt-launcher pod. That primary interface has two modes: masquerade and bridge. Masquerade NATs all VM traffic through the pod’s IP. The VM is hidden behind the pod address. Bridge mode connects the VM to the pod network bridge. Closer, but still the pod network, not your VLAN.
Neither mode has a way to extend an external VLAN directly to the VM. They’re designed for pod networking, not for preserving legacy network identity.
The secondary interface is what makes this work. Calico attaches an additional interface to the VM, and that interface connects to the bridge Calico creates on the node, which in turn connects to the trunk carrying your VLAN from the physical switch. The VM’s traffic on that interface goes directly to the right network segment without any translation or tunnelling.
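In the VirtualMachine spec, that attachment is declared alongside the primary interface. Here is a sketch, with hypothetical names: vlan-100 for the network, vlan-100-l2 for the attachment definition described in the next section.

spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}           # primary interface: pod networking, managed by KubeVirt
            - name: vlan-100
              bridge: {}               # secondary interface: appears in the guest as net1
      networks:
        - name: default
          pod: {}                      # the cluster pod network
        - name: vlan-100
          multus:
            networkName: vlan-100-l2   # NetworkAttachmentDefinition to attach at boot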
How Calico sets it up
The setup is declarative. You define what you want, Calico handles the plumbing.
You create a network resource in Kubernetes that tells Calico which VLAN to bridge and how to map it. Calico reads that and creates the bridge on the node automatically, attaches the trunk interface, and starts tracking the VM’s IP. A NetworkAttachmentDefinition tells KubeVirt to attach the secondary interface at boot. The VirtualMachine spec references the secondary network, and when the VM starts, net1 appears with the right IP.
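The attachment definition itself is a standard Multus NetworkAttachmentDefinition. The CNI configuration it carries is specific to Calico’s L2 bridge feature, so it is deliberately elided here rather than guessed; consult the Calico documentation for the exact fields:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-100-l2    # matches the networkName referenced in the VirtualMachine spec
  namespace: vms       # hypothetical namespace
spec:
  config: |
    { ...CNI configuration for the Calico-managed L2 bridge network... }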
Migration tools like Forklift (for OpenShift Virtualisation) handle the mapping of existing VM interfaces to the cluster definitions and register the VM’s IP with Calico before migration. From that point, Calico owns the IP, tracking it, keeping routing state correct, and following the VM if it moves between nodes.
Multiple VLANs can run through the same trunk-backed bridge. You don’t need separate infrastructure per VLAN; the same bridge handles them all.
What you gain after the migration
Getting the VM into Kubernetes without breaking anything is the primary goal. But once it’s there, a few things become available that weren’t possible in the hypervisor environment.
Network visibility
In a traditional hypervisor setup, getting visibility into what a VM is actually doing on the network usually means deploying a separate agent, a network tap, or a dedicated monitoring tool per host. With Calico, that visibility comes with the platform itself: traffic flow data, communication patterns, and network behaviour for VM interfaces, with nothing extra to install or manage.
Security policy you can actually version control
The firewall rules that protected this VM before migration were probably sitting in a security team’s ticketing system, applied manually to a physical or virtual firewall. They worked, but they weren’t portable, they weren’t reviewable in a pull request, and they weren’t easy to audit.
With Calico, you can express the same security posture as Kubernetes-native network policy. Labels, selectors, declarative YAML. You don’t have to do this immediately as part of the migration. The VLAN boundary still exists, the existing firewall rules still apply. But when the security team is ready to modernise the policy model, the tooling is already there.
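As an illustration, a firewall rule like “allow HTTPS to this VM from the application subnet” might translate into a Calico policy along these lines. The labels, namespace, and CIDR are hypothetical:

apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: legacy-app-allow-https
  namespace: vms
spec:
  selector: app == 'legacy-app'    # matches the migrated VM's virt-launcher pod by label
  types:
    - Ingress
  ingress:
    - action: Allow
      protocol: TCP
      source:
        nets:
          - 10.10.0.0/16           # the subnet the old firewall rule allowed
      destination:
        ports:
          - 443

The rule now lives in version control, goes through review like any other change, and can be audited without digging through a firewall appliance.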
Live migration that doesn’t touch the network
Once a VM is running in Kubernetes, it can move between nodes for patching, rebalancing, or hardware failures, and the network configuration moves with it. Calico tracks the IP and updates routing state automatically. From the outside, nothing changes. The VM is just on a different node now.
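Triggering such a move is itself a declarative Kubernetes operation. A minimal sketch, assuming a running VM named legacy-app-vm:

apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: legacy-app-vm-migration
spec:
  vmiName: legacy-app-vm   # the running VirtualMachineInstance to live-migrate

KubeVirt picks a target node, copies the running guest across, and switches over; with Calico tracking the IP, the VM’s address stays the same throughout.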
Making VM migration to Kubernetes practical
Migration projects fail when the platform team scopes a job as “move the VM” and it turns into “rebuild the network.” That scope creep isn’t a technical failure; it’s what happens when you use a networking model designed for stateless containers to move workloads that were designed around stable, long-lived network identities.
Calico L2 Bridge Networks solve the right problem: keep the network identity intact during the move, let the migration stay within the platform team’s remit, and leave modernisation for when it’s actually planned and funded.
Move now. Modernise later. On your own timeline.
Watch our walkthrough to learn more: Calico L2 Bridge Networking for Virtual Machines