DEV Community

Python-T Point

Posted on • Originally published at pythontpoint.in

💻 How to Migrate a VM from VMware to KVM: Key Tips and Pitfalls

Two virtual machines, identical in configuration and OS, migrated from VMware to KVM using different tools: one completes in 22 minutes with full network functionality; the other fails after 45 minutes with a kernel panic. Same hypervisor destination. Same source vCenter. Same guest OS. The difference? Whether virt-v2v was used or avoided. If you need to migrate a VM from VMware to KVM, this tool isn't optional. It's the only method that consistently produces bootable, production-ready KVM guests.

📑 Table of Contents

  • 🚀 Prerequisites: What You Need Before Running virt-v2v
  • 🔐 Access to VMware
  • 💾 Destination Options
  • 🧩 Supported Guest OSes
  • 🔌 Connection: How virt-v2v Talks to VMware
  • 💽 Conversion: What Happens During the Transformation
  • 🔧 Windows-Specific Changes
  • 📦 Output Formats
  • 🚫 Common Conversion Failures
  • 📤 Deployment: Getting the VM to KVM Efficiently
  • 🔗 Network Configuration
  • 🔍 Post-Migration Checks
  • 🟩 Final Thoughts
  • ❓ Frequently Asked Questions
  • Can I convert VMs without powering them off?
  • Does virt-v2v support encrypted VMware VMs?
  • Can I automate migration of multiple VMs?
  • 📚 References & Further Reading

🚀 Prerequisites: What You Need Before Running virt-v2v

virt-v2v is not a standalone binary. It's a pipeline built on libvirt, QEMU, and libguestfs. You must run it from a Linux conversion host capable of connecting to both VMware (via vCenter or ESXi) and the destination KVM environment.

The host requires:

  • libvirt with QEMU/KVM driver
  • virt-v2v itself (packaged simply as virt-v2v on most distributions)
  • qemu-img for intermediate disk handling
  • Network access to vCenter/ESXi and destination KVM host
  • Sufficient scratch space: at least 1.5× the size of the largest VM being converted
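The scratch-space item is the one teams most often miss. A minimal pre-flight sketch, assuming a hypothetical largest-VM size and the default libguestfs scratch location:

```shell
# Pre-flight sketch: check the scratch directory holds 1.5x the largest
# source VM. LARGEST_VM_GB and SCRATCH_DIR are assumptions to adjust.
LARGEST_VM_GB=100
SCRATCH_DIR=${TMPDIR:-/var/tmp}            # libguestfs scratch defaults to /var/tmp
REQUIRED_GB=$(( LARGEST_VM_GB * 3 / 2 ))   # 1.5x headroom
AVAIL_GB=$(df -BG --output=avail "$SCRATCH_DIR" | tail -n1 | tr -dc '0-9')
echo "need ${REQUIRED_GB}G, have ${AVAIL_GB}G in $SCRATCH_DIR"
[ "$AVAIL_GB" -ge "$REQUIRED_GB" ] || echo "WARNING: insufficient scratch space"
```

Running out of scratch space mid-conversion is one of the uglier failure modes, since it can surface later as a truncated initramfs rebuild rather than a clean error.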

On Red Hat–based systems (RHEL, Rocky Linux, AlmaLinux):

$ sudo dnf install virt-v2v libguestfs-tools-c qemu-img

Expected output:

Installed:
  virt-v2v-1.4.6-1.el9.x86_64
  libguestfs-1:1.48.20-1.el9.x86_64
  qemu-img-6.2.0-30.el9_3.1.x86_64
Complete!

Under the hood, virt-v2v uses libguestfs to launch a minimal appliance via guestfsd. This mounts the source VM's filesystem to perform targeted modifications: removing VMware-specific drivers like vmxnet3, injecting KVM equivalents (virtio_net, virtio_blk), and rewriting bootloader configuration. This is not a blind disk copy; it's a guest-aware transformation.
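Because every conversion depends on that appliance booting, it is worth testing it before the first real migration. A sketch using libguestfs-test-tool, which ships with libguestfs:

```shell
# Sanity-check the libguestfs appliance before converting anything.
# libguestfs-test-tool boots the minimal appliance and reports errors.
if command -v libguestfs-test-tool >/dev/null 2>&1; then
    if libguestfs-test-tool >/tmp/lgtt.log 2>&1; then
        STATUS="appliance OK"
    else
        STATUS="appliance FAILED (see /tmp/lgtt.log)"
    fi
else
    STATUS="libguestfs-test-tool not installed (dnf install libguestfs-tools-c)"
fi
echo "$STATUS"
```

If this fails, fix KVM/libvirt on the conversion host first; no virt-v2v run will succeed until the appliance boots.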

๐Ÿ” Access to VMware

virt-v2v uses URIs to connect to VMware:

  • vpx:// for vCenter-managed clusters
  • esx:// for standalone ESXi hosts

You'll need:

  • vCenter or ESXi hostname/IP
  • Username with read-only VM privileges
  • Password (or keyring integration)
  • Source VM name or inventory path
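A recurring pitfall with these URIs: the username and every inventory-path component must be percent-encoded, so administrator@vsphere.local becomes administrator%40vsphere.local and spaces become %20. A minimal sketch with hypothetical names:

```shell
# Build a vpx:// URI; '@' in the username becomes %40 and spaces in
# inventory path components become %20. All names here are assumptions.
VCENTER=vcenter.example.com
DC="Prod Datacenter"
CLUSTER="Cluster 01"
enc() { printf '%s' "$1" | sed 's/ /%20/g'; }
URI="vpx://administrator%40vsphere.local@${VCENTER}/$(enc "$DC")/host/$(enc "$CLUSTER")?no_verify=1"
echo "$URI"
```

The ?no_verify=1 suffix disables TLS certificate checking, which is handy in labs with self-signed vCenter certificates; drop it in production.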

💾 Destination Options

Output formats include:

  • Local libvirt storage pool (-o libvirt -os poolname)
  • Remote KVM host over SSH (-o libvirt -oc qemu+ssh://…)
  • Plain disk files on local storage (-o local -os /path/to/output)

The most common production setup uses qemu+ssh to stream the VM directly to a remote KVM host.

🧩 Supported Guest OSes

virt-v2v officially supports:

  • RHEL/CentOS 6–9
  • Debian 10–12
  • Ubuntu 18.04–22.04
  • Windows Server 2008–2022 (requires virtio-win drivers)

Unsupported or legacy distributions may boot, but often fail at initramfs or driver loading without manual fixes.


🔌 Connection: How virt-v2v Talks to VMware

virt-v2v connects directly to the VMware vSphere API over HTTPS. No manual OVA export is required.

Example command:

$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
  -it vddk -ip esx_password \
  'Windows-VM'

Breakdown:

  • -ic: input connection URI
  • -it vddk: enables VMware's Virtual Disk Development Kit (VDDK) transport
  • -ip esx_password: reads the connection password from the file esx_password, keeping it off the command line and out of the process list
  • 'Windows-VM': VM name as registered in vCenter

VDDK enables fast disk reading via VMware's VixDiskLib, giving virt-v2v direct access to .vmdk files on ESXi datastores. Without VDDK, virt-v2v falls back to HTTPS-based transport, which is typically 3–5× slower. Either way, plan for the source VM to be powered off: virt-v2v expects a quiesced source, and reading a running guest's disks risks an inconsistent copy.

Expected output snippet:

[   0.0] Opening the source -i libvirt -ic vpx://...
[   2.1] Creating an overlay to protect the source from being modified
[   3.5] Opening the overlay
[  10.2] Inspecting the overlay
[  15.0] Checking for sufficient free disk space in the overlay
[  15.1] Converting Windows-VM to run on KVM
[  16.0] Creating output metadata

VDDK requires the VDDK library installed on the conversion host. Download from VMware and extract:

$ tar -xzf VMware-vix-disklib-*.tar.gz -C /opt
$ virt-v2v ... -it vddk -io vddk-libdir=/opt/vmware-vddk/lib64

This library path must point to the lib64 directory containing libvixDiskLib.so (note that vddk-libdir is an input option, hence -io rather than -oo). For production migrations, VDDK is non-negotiable; skipping it multiplies transfer time.
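A quick existence check before the run saves a failed conversion later; the path below follows the extract step above and remains an assumption about where you unpacked the tarball:

```shell
# Verify the VDDK library is where -io vddk-libdir will look.
# /opt/vmware-vddk is an assumed unpack location; adjust to yours.
VDDK_LIBDIR=/opt/vmware-vddk/lib64
if [ -e "$VDDK_LIBDIR/libvixDiskLib.so" ]; then
    VDDK_STATUS="found"
else
    VDDK_STATUS="missing"
fi
echo "libvixDiskLib.so: $VDDK_STATUS in $VDDK_LIBDIR"
```

If the file is missing, virt-v2v fails at connection time rather than mid-transfer, but checking first keeps the error message unambiguous.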


💽 Conversion: What Happens During the Transformation

virt-v2v performs a deep guest reconfiguration, not a simple format swap. The process includes:

1. Disk download via VDDK → temporary qcow2 overlay

2. Guest inspection: reads /etc/os-release, bootloader, partitioning

3. Driver substitution: replaces vmxnet3 with virtio_net, pvscsi with virtio_scsi

4. Bootloader update: GRUB config rewritten for virtio block devices

5. Initramfs rebuild: dracut or update-initramfs regenerates with virtio modules

6. Disk export: final image pushed to target storage

For a Linux VM named webserver-01:

$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
  -it vddk -io vddk-libdir=/opt/vmware-vddk/lib64 \
  -o libvirt -os default \
  'webserver-01'

Output:

[  50.2] Creating local storage path for the converted disk
[  51.0] Creating qcow2 disk (for libvirt) with size 21.5G
[  60.3] Setting a random seed for the new guest
[  65.0] Installing virtio drivers (Linux)
[  68.2] Rewriting GRUB configuration
[  70.1] Updating initramfs
[  75.4] Building the libvirt XML
[  76.0] Creating libvirt domain...
Domain created successfully.

The initramfs rebuild is critical. If virtio_blk is absent during early boot, the kernel cannot detect the root device and will panic with:

"ALERT! /dev/sda1 does not exist. Dropping to a shell."

virt-v2v avoids this by chroot-ing into the guest disk and running:

  • dracut --add-drivers "virtio_pci virtio_blk virtio_net" (RHEL/CentOS)
  • update-initramfs -u (Debian/Ubuntu)

This ensures the initramfs contains the drivers needed before the real root mounts.
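You can verify the rebuilt initramfs yourself. The listing below is a canned sample standing in for real lsinitrd output, so the check is reproducible here:

```shell
# Check that the required virtio modules appear in an initramfs listing.
# 'modules' is a canned sample; in a real guest use: lsinitrd | grep virtio
modules='virtio_blk.ko.xz
virtio_net.ko.xz
virtio_pci.ko.xz'
MISSING=""
for m in virtio_blk virtio_net virtio_pci; do
    printf '%s\n' "$modules" | grep -q "^$m" || MISSING="$MISSING $m"
done
if [ -z "$MISSING" ]; then
    echo "all virtio drivers present"
else
    echo "missing:$MISSING"
fi
```

On Debian/Ubuntu guests the equivalent real command is lsinitramfs /boot/initrd.img-$(uname -r).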

virt-v2v doesn't just move a VM; it replatforms it, ensuring kernel, bootloader, and drivers align with KVM's virtual hardware.

🔧 Windows-Specific Changes

For Windows VMs, virt-v2v injects virtio-win drivers into the guest's offline registry. This adds:

  • viostor (virtio block)
  • vioscsi (virtio SCSI)
  • viorng (entropy)
  • qemu-ga (optional)

And sets each service's Start value to 0 (load at boot time) in:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\

The virtio-win drivers must be available on the conversion host. On RHEL, installing the virtio-win package is enough; otherwise, point the VIRTIO_WIN environment variable at the ISO:

$ VIRTIO_WIN=/home/user/virtio-win.iso virt-v2v ... 'WinServer-2019'

Without this, Windows fails to detect the boot disk and blue-screens.
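If a converted Windows guest still blue-screens, the offline registry can be inspected with virt-win-reg (part of libguestfs-tools) before booting it again. The exported value below is a canned sample of a correctly injected viostor service:

```shell
# Real inspection (run against the converted, powered-off guest):
#   virt-win-reg 'WinServer-2019' \
#     'HKLM\SYSTEM\CurrentControlSet\Services\viostor'
# Canned sample of the expected export, checked for boot-time start (0):
REG_EXPORT='"Start"=dword:00000000'
case $REG_EXPORT in
    *dword:00000000*) VIOSTOR="boot-start (correct)" ;;
    *)                VIOSTOR="not boot-start" ;;
esac
echo "viostor Start: $VIOSTOR"
```

A Start value other than 0 means the storage driver loads too late for Windows to find its boot disk, which is exactly the 0x7B INACCESSIBLE_BOOT_DEVICE scenario.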

📦 Output Formats

Default output is qcow2 with sparse allocation. To use raw:

$ virt-v2v ... -of raw

Raw is preferred for LVM, iSCSI, or direct device mapping. qcow2 supports snapshots and compression, but adds minor I/O overhead.

🚫 Common Conversion Failures

  • " No OS found": guest OS not in supported list, or /etc/os-release missing/corrupted
  • dracut-initqueue timeout : virtio_blk missing from initramfs (often due to chroot failure in scratch space)
  • No network post-boot : vmxnet3 driver not replaced, or 70-persistent-net.rules locks old MAC

Validate OS compatibility against the official list before starting.


📤 Deployment: Getting the VM to KVM Efficiently

After conversion, deploy the VM to KVM. The default -o libvirt registers it locally. For remote deployment:

$ virt-v2v -ic vpx://vcenter.example.com/... \
  -it vddk -io vddk-libdir=/opt/vmware-vddk/lib64 \
  -o libvirt -oc qemu+ssh://kvmhost.example.com/system \
  -os default \
  'webserver-01'

This configuration:

  • -o libvirt: produces a libvirt-managed guest
  • -oc qemu+ssh://…: targets the remote libvirtd over SSH, so the converted disk streams directly to the target host
  • -os default: selects the storage pool on the target that receives the disk
  • The domain XML is defined on the remote host as part of the same run

This avoids double-transfer of large disks, a key efficiency when migrating dozens of VMs.

On the target KVM host:

$ virsh list --all


 Id   Name             State
----------------------------------
 3    webserver-01     running

Check disk format:

$ qemu-img info /var/lib/libvirt/images/webserver-01-sda


image: webserver-01-sda
file format: qcow2
virtual size: 50 GiB
disk size: 14.2 GiB
backing file: (none)
cluster_size: 65536
Format specific details:
    compat: 1.1
    lazy refcounts: false

The disk size is much smaller than the virtual size due to sparse allocation; the file only consumes space for blocks that have actually been written.
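For scripted reporting, both sizes can be parsed straight out of qemu-img info output; the sample text from the run above is embedded here so the parsing is reproducible:

```shell
# Extract virtual vs. allocated size from qemu-img info style output.
# INFO is a pasted sample; in practice: INFO=$(qemu-img info "$disk")
INFO='virtual size: 50 GiB
disk size: 14.2 GiB'
VIRTUAL=$(printf '%s\n' "$INFO" | awk -F': ' '/virtual size/ {print $2}')
ACTUAL=$(printf '%s\n' "$INFO" | awk -F': ' '/^disk size/ {print $2}')
echo "allocated $ACTUAL of $VIRTUAL"
```

For machine-readable output, qemu-img info also supports --output=json, which avoids field-parsing entirely.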

🔗 Network Configuration

virt-v2v preserves NIC count and MAC addresses, but changes the interface model from vmxnet3 to virtio. Ensure the KVM bridge (e.g., br0) is active and attached to a physical NIC.

If no IP is assigned:

  • Verify the bridge: ip link show br0
  • Check libvirt network: virsh net-list
  • Confirm firewall allows traffic on bridge interface
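A sketch of the first check, assuming the bridge is named br0:

```shell
# Check that the bridge the domain's interface points at actually exists.
# Reads /sys/class/net directly, so it works without iproute2 installed.
BR=br0   # assumption: adjust to your bridge name
if [ -d "/sys/class/net/$BR" ]; then
    BR_RESULT="present"
else
    BR_RESULT="missing"
fi
echo "$BR: $BR_RESULT"
[ "$BR_RESULT" = present ] || echo "create $BR or point the domain at an existing network (see: virsh net-list)"
```

A missing bridge is the most common reason a freshly converted guest boots cleanly but gets no link.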

๐Ÿ” Post-Migration Checks

After boot:

  • ip a: confirm the interface (e.g., ens3) has link and the correct IP
  • dmesg | grep -i virtio: verify virtio_net and virtio_blk loaded
  • lsmod | grep -E "(vmxnet3|vmmouse)": ensure VMware drivers are absent
  • Test SSH, service uptime, and baseline performance
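The module checks in that list collapse into one script run inside the migrated guest (on any non-KVM machine it simply reports the virtio modules as absent):

```shell
# Post-boot driver audit: virtio modules should be loaded, VMware ones gone.
# Reads /proc/modules directly so it works even without lsmod installed.
GOOD=0; BAD=0
for m in virtio_net virtio_blk; do
    grep -q "^$m " /proc/modules && GOOD=$((GOOD+1)) || echo "$m not loaded"
done
for m in vmxnet3 vmw_pvscsi; do
    grep -q "^$m " /proc/modules && { echo "$m still loaded"; BAD=$((BAD+1)); }
done
echo "virtio modules loaded: $GOOD/2, stale VMware modules: $BAD"
```

Any stale VMware module after conversion suggests the initramfs or module blacklist was not fully rewritten and is worth investigating before declaring the migration done.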

At this point, the VMware-to-KVM migration is complete, with a fully operational guest.


🟩 Final Thoughts

Migrating VMs from VMware to KVM is more than a cost play; it's about adopting open, auditable infrastructure. virt-v2v enables this transition not through brute-force copying, but by integrating deeply with libvirt, QEMU, and libguestfs to transform guest configuration at the kernel level.

The tool doesn't abstract complexity; it applies it correctly. You're not relocating a VM; you're converting its hardware identity from VMware to KVM. That involves device drivers, initramfs, bootloader logic, and registry entries on Windows. Skipping this (e.g., using a bare qemu-img convert) results in boot failures, undetected disks, or degraded I/O.

When you migrate from VMware to KVM using virt-v2v, the result isn't a ported VM but a native one, indistinguishable from a guest installed directly on KVM.

โ“ Frequently Asked Questions

Can I convert VMs without powering them off?

Technically VDDK can read the disks of a running VM, but virt-v2v expects the source guest to be shut down: a live read captures at best a crash-consistent image, and any writes made after the transfer starts are lost. Power the VM off (or use a snapshot-based warm-migration workflow) for a conversion you can trust.

Does virt-v2v support encrypted VMware VMs?

No. VMware's VM encryption is not supported by the VDDK transport used here. The VM must be decrypted in vCenter before conversion.

Can I automate migration of multiple VMs?

Yes. Use the vSphere API or vim-cmd to enumerate VMs, then script virt-v2v calls in a loop. Pair with SSH key authentication and shared storage (e.g., NFS) for efficient, scalable migrations.
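A minimal batch sketch; the VM names, URIs, and paths are assumptions, and the virt-v2v call is commented out so the loop structure stands on its own:

```shell
# Convert a list of VMs sequentially, writing one log file per VM.
VMS='webserver-01
webserver-02
db-01'
CONVERTED=0
for vm in $VMS; do
    echo "== converting $vm =="
    # virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
    #   -it vddk -io vddk-libdir=/opt/vmware-vddk/lib64 \
    #   -o libvirt -oc qemu+ssh://kvmhost.example.com/system -os default \
    #   "$vm" >"/var/log/v2v-$vm.log" 2>&1
    CONVERTED=$((CONVERTED+1))
done
echo "$CONVERTED VMs processed"
```

Sequential on purpose: parallel conversions multiply scratch-space, vCenter API, and network bandwidth demands, and a serialized loop makes per-VM logs much easier to triage.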

๐Ÿ“š References & Further Reading

  • VMware VDDK documentation: API and deployment guide for high-speed disk access (docs.vmware.com)

Top comments (1)

HARD IN SOFT OUT

This guide is full of expensive lessons that most teams learn only after a failed migration weekend. The pre-flight checklist alone belongs in every infra team's wiki.

My biggest headache wasn't the migration itself, it was the rollback. When something broke halfway, we needed a clean snapshot without data orphans. Do you have a specific strategy for making migration steps idempotent so they can be safely retried?

Wrap everything in a declarative playbook (Ansible or Terraform) that migrates a single "canary" VM first, runs automated tests, then rolls the rest. Your excellent manual guide could become a repeatable factory.