Two virtual machines, identical in configuration and OS, are migrated from VMware to KVM using different tools: one completes in 22 minutes with full network functionality; the other fails after 45 minutes with a kernel panic. Same hypervisor destination. Same source vCenter. Same guest OS. The difference? Whether virt-v2v was used or avoided. If you need to migrate a VM from VMware to KVM, this tool isn't optional: it's the only method that consistently produces bootable, production-ready KVM guests.
Table of Contents
- Prerequisites: What You Need Before Running virt-v2v
- Access to VMware
- Destination Options
- Supported Guest OSes
- Connection: How virt-v2v Talks to VMware
- Conversion: What Happens During the Transformation
- Windows-Specific Changes
- Output Formats
- Common Conversion Failures
- Deployment: Getting the VM to KVM Efficiently
- Network Configuration
- Post-Migration Checks
- Final Thoughts
- Frequently Asked Questions
  - Can I convert VMs without powering them off?
  - Does virt-v2v support encrypted VMware VMs?
  - Can I automate migration of multiple VMs?
- References & Further Reading
Prerequisites: What You Need Before Running virt-v2v
virt-v2v is not a standalone binary. It's a pipeline built on libvirt, QEMU, and libguestfs. You must run it from a Linux conversion host capable of connecting to both VMware (via vCenter or ESXi) and the destination KVM environment.
The host requires:
- libvirt with QEMU/KVM driver
- virt-v2v (part of the virt-v2v package on most distributions)
- qemu-img for intermediate disk handling
- Network access to vCenter/ESXi and destination KVM host
- Sufficient scratch space: at least 1.5x the size of the largest VM being converted
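The scratch-space rule of thumb can be sanity-checked before starting. A minimal sketch, where the 50 GiB figure is an assumed example disk size, not a value from this article:

```shell
# Rough scratch-space estimate: the conversion host needs about 1.5x the
# largest source disk. LARGEST_GB is a placeholder; substitute the size
# of your biggest VM disk in GiB.
LARGEST_GB=50
NEEDED_GB=$(( LARGEST_GB * 3 / 2 ))   # 1.5x using integer arithmetic
echo "Reserve at least ${NEEDED_GB} GiB of scratch space"
# Compare against free space where libguestfs builds its overlays
# (LIBGUESTFS_CACHEDIR, or /var/tmp by default):
df -h /var/tmp
```

Running out of scratch space mid-conversion is one of the more common and avoidable failure modes, so check this before queueing large VMs.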
On Red Hat-based systems (RHEL, Rocky Linux, AlmaLinux):
$ sudo dnf install virt-v2v libguestfs-tools-c qemu-img
Expected output:
Installed:
virt-v2v-1.4.6-1.el9.x86_64
libguestfs-1:1.48.20-1.el9.x86_64
qemu-img-6.2.0-30.el9_3.1.x86_64
Complete!
Under the hood, virt-v2v uses libguestfs to launch a minimal appliance via guestfsd. This mounts the source VM's filesystem to perform targeted modifications: removing VMware-specific drivers like vmxnet3, injecting KVM equivalents (virtio_net, virtio_blk), and rewriting bootloader configuration. This is not a blind disk copy; it's a guest-aware transformation.
Access to VMware
virt-v2v uses URIs to connect to VMware:
- vpx:// for vCenter-managed clusters
- esx:// for standalone ESXi hosts
You'll need:
- vCenter or ESXi hostname/IP
- Username with read-only VM privileges
- Password (or keyring integration)
- Source VM name or inventory path
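Before running a conversion, it is worth confirming that these credentials and the inventory path actually work by pointing virsh at the same URI virt-v2v will use. A minimal sketch; the user, hostname, datacenter, and cluster names below are placeholders for your environment:

```shell
# Assemble the vpx:// URI from its parts (all values are placeholders).
# ?no_verify=1 disables TLS certificate checking: lab use only.
V2V_USER=admin
VCENTER=vcenter.example.com
DATACENTER=Datacenter
CLUSTER=Cluster
URI="vpx://${V2V_USER}@${VCENTER}/${DATACENTER}/host/${CLUSTER}?no_verify=1"
echo "$URI"
# Then verify access and enumerate candidate VMs:
#   virsh -c "$URI" list --all
```

If the virsh call lists your source VMs, virt-v2v will be able to reach them with the same URI.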
Destination Options
Output formats include:
- Local libvirt storage pool (-o libvirt)
- Remote KVM host via SSH (-o libvirt -oc qemu+ssh://…)
- Local file output (-o local -os /path/to/output)
The most common production setup uses qemu+ssh to stream the VM directly to a remote KVM host.
Supported Guest OSes
virt-v2v officially supports:
- RHEL/CentOS 6-9
- Debian 10-12
- Ubuntu 18.04-22.04
- Windows Server 2008-2022 (requires virtio-win drivers)
Unsupported or legacy distributions may boot, but often fail at initramfs or driver loading without manual fixes.
Connection: How virt-v2v Talks to VMware
virt-v2v connects directly to the VMware vSphere API over HTTPS. No manual OVA export is required.
Example command:
$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
-it vddk -ip esx_password \
'Windows-VM'
Breakdown:
- -ic: input connection URI
- -it vddk: enables VMware's Virtual Disk Development Kit (VDDK) transport
- -ip esx_password: reads the password from the file esx_password, keeping it off the command line and out of shell history
- 'Windows-VM': VM name as registered in vCenter
VDDK enables hot disk reading via VMware's VixDiskLib, allowing direct access to .vmdk files on ESXi datastores, even while the VM is running. Without VDDK, virt-v2v falls back to NBD or HTTPS transport, which is 3-5x slower and requires the VM to be powered off.
Expected output snippet:
[ 0.0] Opening the source -i libvirt -ic vpx://...
[ 2.1] Creating an overlay to protect the source from being modified
[ 3.5] Opening the overlay
[ 10.2] Inspecting the overlay
[ 15.0] Checking for sufficient free disk space in the overlay
[ 15.1] Converting Windows-VM to run on KVM
[ 16.0] Creating output metadata
VDDK requires the VDDK library installed on the conversion host. Download from VMware and extract:
$ tar -xzf VMware-vix-disklib-*.tar.gz -C /opt
$ virt-v2v ... -io vddk-libdir=/opt/vmware-vddk/lib64
This library path must point to the lib64 directory containing libvixDiskLib.so. For production migrations, VDDK is non-negotiable: skipping it increases transfer time and requires downtime.
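Before kicking off a long conversion, it is cheap to verify the library really is where the option says. A minimal check, assuming the /opt install path from the tar command above:

```shell
# Fail fast if the VDDK library is missing from the expected directory
# (path matches the extraction example above; adjust if yours differs).
VDDK_DIR=/opt/vmware-vddk/lib64
if [ -e "${VDDK_DIR}/libvixDiskLib.so" ]; then
    echo "VDDK ready in ${VDDK_DIR}"
else
    echo "libvixDiskLib.so not found in ${VDDK_DIR}; extract the tarball first"
fi
```

Note that the vddk transport also needs the ESXi host's SSL certificate thumbprint passed as an input option; check the virt-v2v input documentation for the exact flag on your version.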
Conversion: What Happens During the Transformation
virt-v2v performs a deep guest reconfiguration, not a simple format swap. The process includes:
1. Disk download: via VDDK into a temporary qcow2 overlay
2. Guest inspection: reads /etc/os-release, bootloader, partitioning
3. Driver substitution: replaces vmxnet3 with virtio_net, pvscsi with virtio_scsi
4. Bootloader update: GRUB config rewritten for virtio block devices
5. Initramfs rebuild: dracut or update-initramfs regenerates with virtio modules
6. Disk export: final image pushed to target storage
For a Linux VM named webserver-01:
$ virt-v2v -ic vpx://vcenter.example.com/Datacenter/host/Cluster \
-it vddk -io vddk-libdir=/opt/vmware-vddk/lib64 \
-o libvirt -os default \
'webserver-01'
Output:
[ 50.2] Creating local storage path for the converted disk
[ 51.0] Creating qcow2 disk (for libvirt) with size 21.5G
[ 60.3] Setting a random seed for the new guest
[ 61.5] Changing the root password
[ 65.0] Installing virtio drivers (Linux)
[ 68.2] Rewriting GRUB configuration
[ 70.1] Updating initramfs
[ 75.4] Building the libvirt XML
[ 76.0] Creating libvirt domain...
Domain created successfully.
The initramfs rebuild is critical. If virtio_blk is absent during early boot, the kernel cannot detect the root device and will panic with:
"ALERT! /dev/sda1 does not exist. Dropping to a shell."
virt-v2v avoids this by chroot-ing into the guest disk and running:
- dracut --add-drivers "virtio_pci virtio_blk virtio_net" (RHEL/CentOS)
- update-initramfs -u (Debian/Ubuntu)
This ensures the initramfs contains the drivers needed before the real root mounts.
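You can verify the rebuild succeeded from inside the converted guest. A sketch for a dracut-based guest; Debian/Ubuntu systems ship lsinitramfs instead of lsinitrd:

```shell
# List virtio modules baked into the running kernel's initramfs. If the
# grep matches nothing, the initramfs must be rebuilt before the guest
# will boot on virtio hardware.
lsinitrd "/boot/initramfs-$(uname -r).img" 2>/dev/null \
  | grep -E 'virtio(_blk|_net|_pci|_scsi)' \
  || echo "no virtio drivers found; rerun dracut with --add-drivers"
```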
virt-v2v doesn't just move a VM; it replatforms it, ensuring kernel, bootloader, and drivers align with KVM's virtual hardware.
Windows-Specific Changes
For Windows VMs, virt-v2v injects virtio-win drivers into the guest's offline registry. This adds:
- viostor (virtio block)
- vioscsi (virtio SCSI)
- viorng (entropy)
- qemu-ga (guest agent, optional)
And sets each service Start value to 0 (boot time load) in:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\
The virtio-win drivers must be accessible on the conversion host. On Red Hat-based systems, installing the virtio-win package is usually enough; otherwise, point virt-v2v at the ISO via the VIRTIO_WIN environment variable:
$ export VIRTIO_WIN=/home/user/virtio-win.iso
$ virt-v2v ... 'WinServer-2019'
Without this, Windows fails to detect the boot disk and blue-screens.
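One way to double-check the injection is to read the service's Start value back out of the offline registry with virt-win-reg, which ships with libguestfs-tools. The sketch below is a dry run that prints the command; the domain name is the example used above:

```shell
# Print the command that inspects the viostor start type in the offline
# registry. Start=0 (SERVICE_BOOT_START) is required for a boot disk driver.
DOMAIN=WinServer-2019
echo "virt-win-reg ${DOMAIN} 'HKLM\SYSTEM\ControlSet001\Services\viostor' Start"
```

Run the printed command on the conversion host while the guest is shut off; modifying or reading a registry hive of a running guest is unsafe.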
Output Formats
Default output is qcow2 with sparse allocation. To use raw:
$ virt-v2v ... -of raw
Raw is preferred for LVM, iSCSI, or direct device mapping. qcow2 supports snapshots and compression, but adds minor I/O overhead.
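Changing format after the fact is a one-liner with qemu-img. A dry-run sketch, with placeholder paths; remove the echo to execute it on the KVM host:

```shell
# Build the qcow2-to-raw conversion command. -p prints progress, -O
# selects the output format. Raw output loses qcow2 sparseness unless
# the target filesystem supports sparse files.
SRC=/var/lib/libvirt/images/webserver-01-sda
DST=/var/lib/libvirt/images/webserver-01-sda.raw
echo "qemu-img convert -p -O raw ${SRC} ${DST}"
```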
Common Conversion Failures
- "No OS found": guest OS not in the supported list, or /etc/os-release missing/corrupted
- dracut-initqueue timeout: virtio_blk missing from the initramfs (often due to a chroot failure in scratch space)
- No network post-boot: vmxnet3 driver not replaced, or 70-persistent-net.rules pins the old MAC address
Validate OS compatibility against the official list before starting.
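The stale-MAC failure in particular has a quick offline fix: delete the udev rule from the converted disk before first boot. A dry-run sketch using virt-customize from libguestfs-tools; the disk path is a placeholder:

```shell
# Print the virt-customize command that removes the persistent-net rule;
# remove the echo to apply it to the offline disk image.
DISK=/var/lib/libvirt/images/webserver-01-sda
echo "virt-customize -a ${DISK} --delete /etc/udev/rules.d/70-persistent-net.rules"
```

On next boot, udev regenerates the rule with the new virtio NIC's MAC address.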
Deployment: Getting the VM to KVM Efficiently
After conversion, deploy the VM to KVM. The default -o libvirt registers it locally. For remote deployment:
$ virt-v2v -ic vpx://vcenter.example.com/... \
-o libvirt -oc qemu+ssh://kvmhost.example.com/system \
-os default \
'webserver-01'
This configuration:
- -o libvirt: registers the guest with libvirt at the destination
- -oc qemu+ssh://…: connects to the remote libvirtd over SSH
- -os default: writes the converted disk into the remote host's default storage pool
Streaming the disk straight to the target avoids a double transfer of large images, a key efficiency when migrating dozens of VMs.
On the target KVM host:
$ virsh list --all
Id Name State
----------------------------------
3 webserver-01 running
Check disk format:
$ qemu-img info /var/lib/libvirt/images/webserver-01-sda
image: webserver-01-sda
file format: qcow2
virtual size: 50 GiB
disk size: 14.2 GiB
backing file: (none)
cluster_size: 65536
Format specific details:
compat: 1.1
lazy refcounts: false
The disk size is much smaller than the virtual size due to sparse allocation: the file only consumes space for written blocks.
Network Configuration
virt-v2v preserves NIC count and MAC addresses but changes the interface type from vmxnet3 to virtio. Ensure the KVM bridge (e.g., br0) is active and attached to a physical NIC.
If no IP is assigned:
- Verify the bridge: ip link show br0
- Check the libvirt network: virsh net-list
- Confirm the firewall allows traffic on the bridge interface
Post-Migration Checks
After boot:
- ip a: confirm the interface (e.g., ens3) has link and the correct IP
- dmesg | grep -i virtio: verify virtio_net and virtio_blk loaded
- lsmod | grep -E "(vmxnet3|vmmouse)": ensure VMware drivers are absent
- Test SSH, service uptime, and baseline performance
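The driver checks above can be scripted for repeatable sign-off inside each converted guest. A minimal sketch:

```shell
# Post-migration driver audit: VMware modules should be gone and virtio
# modules present. Run inside the converted guest.
if lsmod 2>/dev/null | grep -qE 'vmxnet3|vmw_pvscsi|vmmouse'; then
    echo "FAIL: VMware drivers still loaded"
else
    echo "PASS: no VMware drivers loaded"
fi
if lsmod 2>/dev/null | grep -q virtio; then
    echo "PASS: virtio drivers loaded"
else
    echo "WARN: no virtio modules listed (may be built into the kernel)"
fi
```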
At this point, the VMware-to-KVM migration is complete, with a fully operational guest.
Final Thoughts
Migrating VMs from VMware to KVM is more than a cost play; it's about adopting open, auditable infrastructure. virt-v2v enables this transition not through brute-force copying, but by integrating deeply with libvirt, QEMU, and libguestfs to transform guest configuration at the kernel level.
The tool doesn't abstract complexity; it applies it correctly. You're not relocating a VM; you're converting its hardware identity from VMware to KVM. That involves device drivers, initramfs, bootloader logic, and registry entries on Windows. Skipping this (e.g., using qemu-img convert) results in boot failures, undetected disks, or degraded I/O.
When you migrate a VM from VMware to KVM using virt-v2v, the result isn't a ported VM but a native one, indistinguishable from a guest installed directly on KVM.
Frequently Asked Questions
Can I convert VMs without powering them off?
Yes, with VDDK. The VMware Virtual Disk Development Kit allows hot reading of .vmdk files, so the source VM can stay powered on during migration. However, only data present at the start of the transfer is captured unless application-consistent snapshots are used.
Does virt-v2v support encrypted VMware VMs?
No. VMware VM encryption is not supported for this transport. The VM must be decrypted in vCenter before conversion.
Can I automate migration of multiple VMs?
Yes. Use the vSphere API or vim-cmd to enumerate VMs, then script virt-v2v calls in a loop. Pair with SSH key authentication and shared storage (e.g., NFS) for efficient, scalable migrations.
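A minimal batch loop looks like the sketch below. The VM names and connection URI are placeholders, and the echo makes it a dry run so the generated commands can be reviewed before executing:

```shell
# Emit one virt-v2v command per VM; drop the echo to actually migrate.
VMS="webserver-01 db-01 app-01"
for vm in $VMS; do
    echo virt-v2v -ic 'vpx://vcenter.example.com/Datacenter/host/Cluster' \
        -it vddk -io vddk-libdir=/opt/vmware-vddk/lib64 \
        -o libvirt -os default "$vm"
done
```

Migrations run serially here; run several loops in parallel only if the conversion host has the scratch space and bandwidth to match.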
References & Further Reading
- VMware VDDK documentation: API and deployment guide for high-speed disk access: docs.vmware.com
Top comments (1)
This guide is full of expensive lessons that most teams learn only after a failed migration weekend. The pre-flight checklist alone belongs in every infra team's wiki.
My biggest headache wasn't the migration itself; it was the rollback. When something broke halfway, we needed a clean snapshot without data orphans. Do you have a specific strategy for making migration steps idempotent so they can be safely retried?
Wrap everything in a declarative playbook (Ansible or Terraform) that migrates a single "canary" VM first, runs automated tests, then rolls the rest. Your excellent manual guide could become a repeatable factory.