DEV Community

Patima Poochai
How To Resize LVM Volumes Dynamically Using Cloud-Init (Step-By-Step)

Logical Volume Manager (LVM) is the preferred storage framework for Linux VMs. It's easy to resize LVM volumes, migrate data between devices, and combine multiple physical disks into one easy-to-manage volume. These features are especially useful in a virtualized environment, as they let you do all of this without physical access to the machine.

When creating VM templates, it's common to create a fixed-size volume for the root partition and use cloud-init to grow the partition size dynamically when cloning the template (unless you want to create multiple templates with each one having 10G, 11G, 12G... storage and so on). However, you can't simply use the built-in growpart module to resize logical volumes. It only works on partitions.

But that doesn't mean you should avoid using LVM in your templates. After a few trials and tribulations, I've found a workaround that allows cloud-init to automatically resize logical volumes, as well as a few pitfalls you should avoid when using cloud-init with Debian machines. Here's what I've learned.

Animated GIF depicting a fight between Debian and LVM as the fight between the Boss and Snake from Metal Gear Solid 3.

The Two Steps to Grow LVM Volumes with Cloud-Init

The partition configuration of the Linux VM. The third partition has 30 GB of storage.

Here's the scenario. There are 3 partitions on my VM template, with /dev/sda3 containing the root logical volume (LV) named ubuntu--vg-ubuntu--lv. The VM's disk had 30 GB of storage, but I've resized the drive to be 70 GB. The root LV is still 30 GB, so I need cloud-init to automatically resize the volume to fill the remaining space.

Grow the Partition

First, we need to grow the partition using the growpart module. While we can't use growpart directly on the root LV, we can use it to grow the partition that contains the underlying physical volume (PV) of the root volume.

Create a cloud-init configuration in /etc/cloud/cloud.cfg.d/90-LVM.cfg, and add the following:

growpart:
  devices: [/dev/sda3]

To test the config, reset cloud-init so it runs again on the next boot and reboot the VM (the -r flag handles the reboot):

sudo cloud-init clean -r

Here's how the partitions look after growpart.

The partitions of the VM, now with the third partition having 68 GB.

Growpart expanded /dev/sda3 to 68 GB using the newly added free space in the storage disk.

PVS command showing that the PV of the root volume was expanded.

Because the sda3 partition backs the physical volume of our root volume group, ubuntu-vg, growing the partition also expanded the underlying PV. We can now expand the LV to fill the free space in the volume group.
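If you want to verify each layer by hand rather than from screenshots, a small helper like this works (a sketch; the function name is mine, the commands are the standard LVM/util-linux tools):

```shell
# Sketch: print the LVM stack and block devices so you can confirm
# that growpart expanded both the partition and the underlying PV.
show_lvm_layout() {
  sudo pvs    # physical volumes: PSize should reflect the grown partition
  sudo vgs    # volume groups: VFree shows space not yet claimed by any LV
  sudo lvs    # logical volumes: LSize is what still needs growing
  lsblk       # partitions and their sizes at the block-device level
}
```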

Grow the Root Volume

Second, we need to grow the logical volume. There is no built-in cloud-init module to manage LVM volumes, but we can use the runcmd module to execute the shell commands to resize the logical volume.

One caveat is that we have to make sure runcmd only triggers after the growpart module; otherwise, we'd be expanding the LV before the PV has grown. We can check the module execution order in the default cloud-init config at /etc/cloud/cloud.cfg and make sure that runcmd runs after growpart.

Snippet of the boot stages and their modules.

Modules in the "config" stage always run after the "init" stage, so make sure that runcmd is listed under cloud_config_modules.

Fun fact: runcmd actually defers execution to the scripts_user module in the "final" boot stage, so make sure that scripts_user is listed under cloud_final_modules as well.

Excerpt from the official documentation describing how the runcmd module runs its script in the final boot stage.
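A quick way to eyeball the ordering on your own image is to grep the default config for the modules involved (a sketch; the helper name is mine):

```shell
# Sketch: print the config lines mentioning the modules we care about.
# growpart should appear under cloud_init_modules, and runcmd /
# scripts-user should appear further down, in the later stages.
check_module_order() {
  grep -nE 'growpart|runcmd|scripts.user' /etc/cloud/cloud.cfg
}
```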

Now, we can add a runcmd command to resize the root LV into /etc/cloud/cloud.cfg.d/90-LVM.cfg. In my case, the root path (/) of my VM is mounted on a logical volume named ubuntu-lv, which belongs to a volume group (VG) named ubuntu-vg, so my runcmd looks like this:

# append to /etc/cloud/cloud.cfg.d/90-LVM.cfg
runcmd:
  - [lvresize, -l, +100%FREE, -r, /dev/ubuntu-vg/ubuntu-lv]

This command grows the volume into all of the remaining free space in the volume group (+100%FREE) and resizes the file system inside the volume at the same time (-r).
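For reference, the complete /etc/cloud/cloud.cfg.d/90-LVM.cfg from both steps looks like this (the device and VG/LV names are from my VM; adjust them to match your template):

```yaml
growpart:
  devices: [/dev/sda3]
runcmd:
  - [lvresize, -l, +100%FREE, -r, /dev/ubuntu-vg/ubuntu-lv]
```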

Test the config again by rebooting the machine and rerunning cloud-init:

sudo cloud-init clean -r

Upon reboot, cloud-init should resize the root LV to use the remaining free space in the storage disk.

Partitions of the VM, now the root volume is using all of the available space.

Troubleshooting

If cloud-init isn't resizing the volumes as expected, your first troubleshooting step should be checking the logs at /var/log/cloud-init.log.
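A small helper I keep around for this (the name is mine; the commands are the standard cloud-init CLI and log path):

```shell
# Sketch: summarize cloud-init's state and surface recent problems.
check_cloudinit() {
  cloud-init status --long                      # overall state and datasource
  sudo grep -iE 'error|warn|traceback' /var/log/cloud-init.log | tail -n 20
}
```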

For example, I had some confusion regarding the possible values for the runcmd module when I started working on this project. Look at this:

Snippet of the runcmd module with its schema.

Does this mean the module accepts 1) an array whose items can be arrays of strings, plain strings, or null, or 2) just a single string? To get a practical understanding of the schema, I first tried passing a simple string to the module.

A snippet of the runcmd module with only a string as its input.

After a reboot, cloud-init ran the configuration, but the LV stayed the same size. Take a look at the logs:

Snippet of the cloud-init logs. One of the errors shows that the runcmd module did not expect a string as its input.

Note the TypeError: Input to shellify was type 'str'. expected list or tuple line. This line tells us that the module expects the inputs to look like this in practice:

# this is correct: the top level is an array whose items
# can be string arrays, plain strings, or null
Array [
  StringArray ["a", "b"]
  String "abc"
  Null null
]

# not this: a bare string at the top level
String "abc"

Niche issue? Probably. But if you need to write a more advanced runcmd config, reading /var/log/cloud-init.log can help clarify some of the ambiguity.
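In cloud-init YAML, the schema boils down to this: the value of runcmd must be a list, and each item can be either a list of argv strings (executed directly) or a single string (run through a shell). A hedged example (the echo line and log path are illustrative, not from my config):

```yaml
runcmd:
  # list form: executed directly, no shell involved
  - [lvresize, -l, +100%FREE, -r, /dev/ubuntu-vg/ubuntu-lv]
  # string form: run through a shell, so shell syntax works
  - echo "resize finished" >> /var/log/resize-marker.log
```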

Debian-specific Issues ("No free sectors")

Comedic depiction of Debian and LVM as the Boss and Snake from the game Metal Gear Solid 3. Debian is injuring LVM.

The steps above should work for most cases. However, when using cloud-init and growpart to resize volumes in Debian VMs, you might run into the following error:

Snippet of the growpart error. The root volume is at sda5, and a line in this error shows that the module failed to resize partition sda3.

Note the /dev/sda3: No free sectors available line. Our root LV is located on sda5, and growpart failed to resize it because it couldn't resize sda3. But... that partition doesn't exist. Why is growpart trying to resize a non-existent partition?

Snippet of the growpart documentation.

According to the documentation, growpart will only resize the last partition on the disk, and "last" means last numerically, with no skipped numbers. If we want growpart to resize our root LV, it has to sit on partition sda3 rather than sda5. In other words, 1, 2, 3 and not 1, 2, 5.

But why does our VM create the partitions this way? It's because the MBR partitioning scheme only allows four primary partitions, so the Debian installer places the LVM storage in a logical partition inside an extended partition, and logical partitions are always numbered starting at 5 (hence sda5). A GPT partitioning scheme doesn't have this limitation, so we should change the partitioning scheme of our VM to use GPT instead.

But how do you make Debian use GPT over MBR? By using UEFI. The Debian installer automatically decides the partitioning scheme based on whether you're using BIOS or UEFI. So if we want the partitions to be in order without skipping numbers, we have to complete the installation process with UEFI enabled.

Convoluted? Yes, but the fix is simple: use UEFI during installation. In Proxmox, you can enable UEFI by changing the BIOS option to OVMF (UEFI):

Snippet of the BIOS setting in Proxmox. The BIOS is set to OVMF (UEFI).

Then add an EFI disk:
Snippet of the hardware options in Proxmox. The
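If you prefer Proxmox's CLI to the GUI, both changes can be sketched as a helper run on the Proxmox node (the function name, VMID, and storage name are placeholders of mine; qm is Proxmox's VM management tool):

```shell
# Sketch: switch a VM to UEFI firmware and attach the EFI vars disk.
# Run on the Proxmox host; "$1" is the VMID, "$2" a storage like local-lvm.
enable_uefi() {
  local vmid="$1" storage="$2"
  qm set "$vmid" --bios ovmf                           # OVMF (UEFI) firmware
  qm set "$vmid" --efidisk0 "${storage}:1,efitype=4m"  # disk for EFI variables
}
# Example: enable_uefi 100 local-lvm
```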

Then go through the installer as normal and choose Guided - use entire disk and set up LVM during installation. After completing the installation, your partitions should now be in order:
Partitions of the VMs, the root volume is now at sda3.

These are the partitions of my Debian VM after using UEFI during installation. Note how the root LV is now at /dev/sda3.

I could now use growpart and runcmd to resize the root LV. This is my configuration for the Debian VM:

The cloud-init config for the Debian VM. The config creates a default user called

Here's the result after rerunning cloud-init:
Partitions of the Debian VM, now the root volume is using all of the available space on the storage disk.

Cloud-init resized the root LV without the No free sectors issue.

Troubleshooting "status: disabled" and Unrecognized Cloud-init Drive Issues

You might run into other issues with cloud-init while setting up a Debian VM. Here's a short guide on how I troubleshoot and resolve these issues.

First, I got this error:
Snippet of the

Cloud-init wasn't running, and a status check told us that it was disabled by the cloud-init-generator, but not why.

Maybe the source code of cloud-init-generator can tell us more about the cause of the issue. You can find where the package installed it by running this command:

dpkg-query -L cloud-init | less

The command shows all the files that were installed with the cloud-init package. Look for cloud-init-generator:
Filtered result of the dpkg-query command. It shows that cloud-init-generator is located in the /usr/lib directory.

It's in /usr/lib/systemd/system-generators/cloud-init-generator. Here's the first few lines of the file:
Snippet of the system-generators file. One of the lines contains the location of the log file for this script.

Note the LOG_F variable. That's the location of the log file where we can learn more about why Cloud-init was disabled.

Snippet of the log file. A line describes that the script ran but didn't find any data source.

Cloud-init used the ds-identify component to identify data sources, and it couldn't find any valid configuration sources. However, I've attached a cloud-init drive to the VM via the Proxmox GUI, so what's going on?

Data Sources Formatting

Let's check the logs related to ds-identify at /run/cloud-init/ds-identify.log (also recommended by the documentation).

A line from the log file describing how the field datasource_list wasn't found.

From the WARN: no datasource_list found message, it seems ds-identify expects the data sources to be declared under a datasource_list key in your configuration.

Here's how I changed the data sources configuration:
Text snippet showing the wrong and right way to write the data sources section. The data sources list is on a single line with a list as the value.
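For reference, this is the shape that worked for me: datasource_list on a single line with an inline list as the value (the source names here are illustrative; use the ones matching your environment):

```yaml
# ds-identify wants the key and its list on one line
datasource_list: [ NoCloud, ConfigDrive ]
```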

And here's the output of /run/cloud-init/ds-identify.log after applying this change:
A line from the ds-identify log file. The script can now read the data sources list.

Cloud-init now detects the sources list, but it still doesn't recognize the cloud-init device. I was puzzled by this issue for a while, until I found something a few days later.

Cloud-init Drive Interface

As of March 2026, according to this post and this post, there is a compatibility issue between IDE devices and OVMF. In practice, cloud-init drives that use IDE aren't recognized by the VM if you're using OVMF (UEFI).

The fix is simple: use SCSI for your cloud-init drive. When you're creating the cloud-init drive, choose the SCSI option in the Proxmox GUI:
Proxmox GUI for the CloudInit Drive setting. The drive is set to SCSI.
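On the CLI, attaching the cloud-init drive over SCSI can be sketched like this (again, the function name and arguments are placeholders of mine):

```shell
# Sketch: attach a cloud-init drive on the SCSI bus instead of IDE.
# Run on the Proxmox host; "$1" is the VMID, "$2" a storage like local-lvm.
attach_cloudinit_scsi() {
  local vmid="$1" storage="$2"
  qm set "$vmid" --scsi1 "${storage}:cloudinit"
}
# Example: attach_cloudinit_scsi 100 local-lvm
```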

For example, here are the hardware options for my VM:
The hardware options in Proxmox. The cloud-init drive is set to IDE.

Here's the list of block devices recognized by the VM. Note how, despite the VM having two CD-ROM drives (one of them the cloud-init drive), only the regular CD/DVD drive shows up in the list.

Block devices list of the VM. Cloud-init drive is not shown.

Now, I changed the cloud-init drive to use SCSI:
The hardware options in Proxmox. The cloud-init drive is set to SCSI.

Here's the updated list of block devices. The VM now recognizes the cloud-init drive at /dev/sr0 and will execute your configuration on startup.
Block devices list of the VM. Cloud-init drive is on the list.
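Inside the guest, you can also look the drive up by its filesystem label: NoCloud seeds are labeled cidata and ConfigDrive uses config-2 (a sketch; the function name is mine):

```shell
# Sketch: locate the cloud-init seed drive by label from inside the VM.
find_cloudinit_drive() {
  sudo blkid -L cidata || sudo blkid -L config-2 \
    || echo "no cloud-init drive found"
}
```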

Closing

Questions? Thoughts? Feel free to leave a comment!

Need someone skilled in RHEL, Kubernetes, and AWS? I'm open to work! View my portfolio and reach out via LinkedIn or Mastodon.
