DEV Community

David Tio

Posted on • Originally published at blog.dtio.app

Resize a Tiny SLES 16 Cloud Image with qemu:base on Ubuntu 26.04

Quick summary: The SLES 16 minimal cloud image boots fast, but it left me with about 3 MB of free space. That is not enough room to do real work, so we resize it with our freshly rebuilt qemu:base toolbox.


🤔 Why This Matters

In Post #4, SLES gave us the most interesting cloud-image result: fast boot, enterprise Linux, and almost no free space.

The VM works. It boots quickly. Cloud-init configures the user. In Post #5, we made it reachable over the network.

Then you log in and see this:

sysadmin@kvmpodman:~> df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda3      xfs       742M  739M  3.0M 100% /

About 3 megabytes free. I had barely done anything with this VM yet.

That is painful by today's standards. Installing a single package can fill that up, and a normal zypper update is already too much. The VM is technically running, but there is no breathing room.

Ubuntu cloud images are usually generous enough to be usable without resizing. SLES 16 minimal is not, and that makes it the perfect example for learning how cloud image resizing actually works.

To resize it properly, our qemu:base toolbox needs the right disk utility. qemu-img handles the virtual disk size from outside the guest.

Ubuntu 26.04 LTS just dropped, so this is a good chance to see how it holds up as the new base for our QEMU toolbox. We will rebuild qemu:base on Ubuntu 26.04, keep qemu-img available there, then use that container to fix the SLES disk.

In this post we will:

  • Rebuild qemu:base on Ubuntu 26.04.
  • Use the refreshed toolbox to grow the SLES qcow2 file.
  • Expand the root partition inside the VM.
  • Grow the filesystem.
  • Verify that the VM finally has room to work.

📋 Prerequisites

  • Podman
  • ~/vm directory with cloud images
  • A SLES cloud image in ~/vm, already seeded with cloud-init from Post #4
  • SSH access to the VM from Post #5

The examples use this file:

~/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2

If your image has a different name, swap the path in the commands below. If this is a fresh SLES cloud image, boot it once with the cloud-init-sles.iso from Post #4 before following this post. The resize steps assume the VM already has the sysadmin user and SSH access configured.


🧰 Step 1: Rebuild qemu:base on Ubuntu 26.04

Our earlier qemu:base image was only meant to get QEMU running. For this episode, we will refresh it on Ubuntu 26.04 and include the disk tools we need:

FROM ubuntu:26.04

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        qemu-system-x86 \
        qemu-utils \
        iproute2 \
        bash \
        ca-certificates \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY run-vm.sh /run-vm.sh
RUN chmod +x /run-vm.sh

WORKDIR /vms
CMD ["/bin/bash"]

Build it:

$ cd ~/vm
$ podman build -t qemu:base .

Check the tools:

$ podman run --rm qemu:base qemu-img --version

The Ubuntu version here is about the toolbox. It does not mean the guest VM is Ubuntu. We are using an Ubuntu-based container to manage and boot a SLES cloud image.


📊 Step 2: Check the Problem

Create the same Podman network we used in Post #5:

$ podman network create mynet

Then boot the already-seeded SLES VM with SSH forwarding:

$ podman run --rm -it \
    --name sles-resize \
    --network=mynet \
    --device /dev/kvm \
    -p 2222:22 \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-system-x86_64 \
        -enable-kvm -cpu host \
        -nographic -m 1024 \
        -drive file=/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2,format=qcow2,if=virtio \
        -netdev user,id=net0,hostfwd=tcp::22-:22 \
        -device virtio-net-pci,netdev=net0

In another terminal, SSH into the VM through the forwarded host port:

$ ssh -p 2222 sysadmin@localhost

Once you are in, check the root filesystem:

sysadmin@kvmpodman:~> df -Th /
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda3      xfs       742M  739M  3.0M 100% /

Also check the block devices:

sysadmin@kvmpodman:~> lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
vda    254:0    0  1.3G  0 disk
├─vda1 254:1    0    2M  0 part
├─vda2 254:2    0  512M  0 part /boot/efi
└─vda3 254:3    0  806M  0 part /

The disk is small, and the root partition is smaller. We need to grow both.

Shut the VM down cleanly before touching the disk image:

sysadmin@kvmpodman:~> sudo poweroff

🔧 Step 3: Back Up the Image

Before resizing any disk image, make a copy:

$ cp ~/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2 \
     ~/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2.bak

This is cheap insurance. If you mistype a partition number later, you can go back.


🔧 Step 4: Grow the qcow2 File

Use qemu-img resize from the qemu:base container. This grows the virtual disk container, but it does not grow the partition or filesystem inside it yet.

$ podman run --rm \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-img info /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2
image: /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2
file format: qcow2
virtual size: 1.29 GiB (1385168896 bytes)
disk size: 339 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2
    protocol type: file
    file length: 339 MiB (355467264 bytes)
    disk size: 339 MiB

The guest sees a 1.29 GiB virtual disk, but the qcow2 file only uses 339 MiB on the host. That is normal. qcow2 can be compressed and thin-provisioned, so host disk usage and guest-visible disk size are not the same thing.
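You can see the same split on any Linux host with a plain sparse file. This is just a generic illustration of thin provisioning, not qcow2 itself, and the demo path is made up:

```shell
# Create a 1 GiB sparse file: the apparent size is what a "guest"
# would see; the allocated blocks are what the host actually stores.
f=/tmp/sparse-demo.img
truncate -s 1G "$f"

stat -c 'apparent size: %s bytes' "$f"   # 1073741824
stat -c 'allocated: %b blocks' "$f"      # near zero until data is written

rm "$f"
```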

Add 10 GiB (qemu-img's G suffix is binary):

$ podman run --rm \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-img resize /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2 +10G
Image resized.

Check it:

$ podman run --rm \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-img info /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2
image: /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2
file format: qcow2
virtual size: 11.3 GiB (12122587136 bytes)
disk size: 339 MiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: false
    refcount bits: 16
    corrupt: false
    extended l2: false
Child node '/file':
    filename: /vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2
    protocol type: file
    file length: 339 MiB (355467776 bytes)
    disk size: 339 MiB

The virtual size is now bigger, but the host disk usage is still 339 MiB. That is the thin-provisioning part of qcow2: the guest can see the larger disk immediately, but the host file grows only as data is written.
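The same growth pattern shows up with a sparse file: extending it bumps the apparent size immediately, and the host only allocates blocks once real data is written. Again a generic stand-in for the qcow2 behavior, with a made-up path:

```shell
f=/tmp/thin-grow-demo.img
truncate -s 100M "$f"       # small "disk"
truncate -s +900M "$f"      # "resize": apparent size jumps at once

stat -c 'apparent: %s bytes, allocated: %b blocks' "$f"

# Writing 5 MiB of real data finally allocates blocks on the host.
dd if=/dev/zero of="$f" bs=1M count=5 conv=notrunc status=none
stat -c 'after write, allocated: %b blocks' "$f"

rm "$f"
```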


🔧 Step 5: Verify the Guest Expanded

Boot the VM again with the same SSH forwarding:

$ podman run --rm -it \
    --name sles-resize \
    --network=mynet \
    --device /dev/kvm \
    -p 2222:22 \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-system-x86_64 \
        -enable-kvm -cpu host \
        -nographic -m 1024 \
        -drive file=/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2,format=qcow2,if=virtio \
        -netdev user,id=net0,hostfwd=tcp::22-:22 \
        -device virtio-net-pci,netdev=net0

Then reconnect over SSH:

$ ssh -p 2222 sysadmin@localhost

Inside the VM, check both the block devices and the mounted filesystem:

sysadmin@kvmpodman:~> lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0     11:0    1 1024M  0 rom
vda    253:0    0 11.3G  0 disk
├─vda1 253:1    0    2M  0 part
├─vda2 253:2    0  512M  0 part /boot/efi
└─vda3 253:3    0 10.8G  0 part /

sysadmin@kvmpodman:~> df -Th /
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vda3      xfs    11G  948M  9.8G   9% /

That is the clean success case:

  • lsblk shows the disk at 11.3G.
  • lsblk shows the root partition at 10.8G.
  • df -Th / shows the mounted XFS filesystem at 11G with almost 10G available.

In this SLES 16 cloud image, cloud-init grew the root partition and filesystem automatically on boot after the qcow2 virtual size changed. The important number is not Used; the image was already effectively full before resizing. The win is Avail: from about 3 MB free to almost 10 GB available. That is enough room to update the system, install tools, and use the VM like a real machine.


🔧 If the Guest Does Not Auto-Grow

Most cloud images include grow tools because cloud platforms resize disks all the time. If your image does not auto-grow, check which layer is still small.

If lsblk still shows /dev/vda3 at 806M, grow partition 3 manually:

sysadmin@kvmpodman:~> sudo growpart /dev/vda 3

If df -Th / still shows the old 742M filesystem, grow it manually. For XFS, grow the mounted filesystem by mount point:

sysadmin@kvmpodman:~> sudo xfs_growfs /

For ext4, use the block device instead:

sysadmin@kvmpodman:~> sudo resize2fs /dev/vda3

Then check again:

sysadmin@kvmpodman:~> lsblk
sysadmin@kvmpodman:~> df -Th /

If growpart is missing and the root filesystem is already this full, the safer option is to go back to your backup image or add a second disk instead of fighting package installation inside the cramped guest.
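The decision above can be sketched as a small script. A hedged sketch, assuming the root disk is /dev/vda with the root filesystem on partition 3; it only prints the commands so you can review them before running anything with sudo:

```shell
# Inside the VM you would set this from the real mount:
#   root_fstype=$(findmnt -n -o FSTYPE /)
root_fstype=xfs

echo "growpart /dev/vda 3"              # grow the partition first
case "$root_fstype" in
    xfs)  echo "xfs_growfs /" ;;            # XFS grows by mount point
    ext4) echo "resize2fs /dev/vda3" ;;     # ext4 grows by block device
    *)    echo "unhandled filesystem: $root_fstype" >&2 ;;
esac
```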


🔧 Add a Second Disk Instead

Sometimes you do not want to resize the root filesystem. A separate data disk is simpler and safer for databases, logs, or application data.

Power off the VM first:

sysadmin@kvmpodman:~> sudo poweroff

Create a new qcow2 disk:

$ podman run --rm \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-img create -f qcow2 /vm/extra-disk.qcow2 20G

Attach it when starting the VM:

$ podman run --rm -it \
    --name sles-resize \
    --network=mynet \
    --device /dev/kvm \
    -p 2222:22 \
    -v ~/vm:/vm:z \
    qemu:base \
    qemu-system-x86_64 \
        -enable-kvm -cpu host \
        -nographic -m 1024 \
        -drive file=/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2,format=qcow2,if=virtio \
        -drive file=/vm/extra-disk.qcow2,format=qcow2,if=virtio \
        -netdev user,id=net0,hostfwd=tcp::22-:22 \
        -device virtio-net-pci,netdev=net0

Reconnect over SSH:

$ ssh -p 2222 sysadmin@localhost

Inside the VM, format it and create the mount point:

sysadmin@kvmpodman:~> sudo mkfs.xfs /dev/vdb
sysadmin@kvmpodman:~> sudo mkdir /data

Make it persistent by UUID instead of /dev/vdb. Device names can change if you attach disks in a different order later.

sysadmin@kvmpodman:~> sudo blkid /dev/vdb
/dev/vdb: UUID="11111111-2222-3333-4444-555555555555" BLOCK_SIZE="512" TYPE="xfs"

sysadmin@kvmpodman:~> sudo sh -c 'echo "UUID=11111111-2222-3333-4444-555555555555 /data xfs defaults 0 0" >> /etc/fstab'
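If you prefer not to copy-paste the UUID by hand, the same step can be scripted. A sketch, shown with the placeholder UUID from above; inside the VM you would capture the real one with blkid as in the comment:

```shell
dev=/dev/vdb
# Inside the VM: uuid=$(sudo blkid -s UUID -o value "$dev")
uuid="11111111-2222-3333-4444-555555555555"

line="UUID=${uuid} /data xfs defaults 0 0"
echo "$line"
# Append it with: echo "$line" | sudo tee -a /etc/fstab
```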

Mount everything from /etc/fstab, then verify /data:

sysadmin@kvmpodman:~> sudo mount -a
sysadmin@kvmpodman:~> df -Th /data
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/vdb       xfs    20G  424M   20G   3% /data

Use this when you need more storage but do not need the root filesystem itself to be bigger.


🧹 Cleanup

If you created a backup and no longer need it:

$ rm ~/vm/SLES-16.0-Minimal-VM.x86_64-Cloud-GM.qcow2.bak

If you created the extra disk and want to remove it, clean it up from inside the VM first:

sysadmin@kvmpodman:~> sudo umount /data
sysadmin@kvmpodman:~> sudo sed -i '\| /data xfs |d' /etc/fstab
sysadmin@kvmpodman:~> sudo poweroff

Then remove the disk image from the host:

$ rm ~/vm/extra-disk.qcow2

✅ What You've Built

  • ✅ Rebuilt qemu:base on Ubuntu 26.04
  • ✅ Resized a qcow2 cloud image with qemu-img
  • ✅ Verified cloud-init expanded the root partition automatically
  • ✅ Verified the XFS root filesystem grew automatically
  • ✅ Used a second disk as an alternative

🔜 What's Next?

The VM now has room to breathe. We rebuilt the toolbox, expanded the SLES cloud image, and confirmed the guest can actually use the space.

There is still more to explore with KVM on Podman.

See you in the next episode.

