Paul Cochrane 🇪🇺


Resizing the disk on a Vagrant virtual machine

Originally published on peateasea.de.

When creating a Vagrant VM, one gets a pre-determined VM disk size. While this is more than enough in most cases, sometimes it isn't sufficient for what one wants to do. If the VM needs to receive a lot of data, the disk will fill up sooner rather than later. Resizing a Vagrant VM, however, isn't trivial. Here's what I worked out when I needed to resize a Vagrant VM myself.

While I was working at Drift+Noise,[1] one way to replicate part of the production environment for QA testing in a development environment was to use a Vagrant VM. Because the system stored pre-processed satellite data (it was the backend which provided imagery to the visualisation frontend, IcySea), it received a lot of data. Eventually, our satellite coverage of the Arctic and Antarctic was so comprehensive that the Vagrant VM's disk space was no longer sufficient for our storage needs. Not wanting to delete all the data and configuration and start over from scratch, I found out that it was possible to resize the VM's disk to store the extra data. Here's how.

Standing on the shoulders of giants

I wasn't the first person to run into this issue or the first to find a solution. These blogs and StackOverflow questions were particularly helpful to me in finding my own path through this maze.

Resizing a Vagrant VM:

Resizing a Linux partition:

Resizing a VirtualBox disk image:

A worked example

The best way to show how to do this is via a worked example. The plan looks like this: create an example VM, fill up its disk, convert the VMDK disk image to VDI format, resize the VDI image, swap the images over on the VM's SATA controller, and finally resize the filesystem inside the VM.

Creating the example VM

Let's begin by creating a bare Debian bullseye system. Here's the Vagrantfile which will create such a VM:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "bare-test-bullseye" do |bare_test_bullseye|
    bare_test_bullseye.vm.box = "debian/bullseye64"
    bare_test_bullseye.vm.network "private_network", ip: "192.168.23.116"
  end
end

This VM uses the official Debian VirtualBox image and sets a private network address explicitly so that the address doesn't clash with any other Vagrant VMs running on my dev box.

To create this VM, we use vagrant up:

$ vagrant up bare-test-bullseye

SSH-ing into the machine, we see its initial disk usage via the df command:

$ vagrant ssh bare-test-bullseye
vagrant@bullseye:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 219M 0 219M 0% /dev
tmpfs 48M 428K 47M 1% /run
/dev/sda1 20G 833M 18G 5% /
tmpfs 237M 0 237M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
vagrant 437G 383G 54G 88% /vagrant
tmpfs 48M 0 48M 0% /run/user/1000

The VM's main disk is mounted as /dev/sda1 onto / (i.e. "root") and has a size of 20GB, of which 833MB are already used by the operating system. You can ignore the vagrant filesystem mounted on /vagrant; this is the project directory shared from the host system into the guest. Yes, I only have 54G of space left on my laptop, why do you ask? 😉
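
If you only care about the root filesystem (which is the one we'll be resizing), you can ask df for it directly; this is just a small aside, not part of the original walkthrough:

vagrant@bullseye:~$ df -h /    # show only the filesystem mounted on /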

We're only using 5% of the available space on the VM's disk image. Let's simulate an almost-full system by creating a large file.

Creating a large file with dd

We want to simulate having filled the VM's disk with lots of data. We can do this by creating a large file of known size with the Linux dd utility. For instance, to create a file 1GB in size and filled only with zeroes, we could use this command:

$ dd if=/dev/zero of=smash bs=256M count=4

This tells dd to get its input (if: "input file") from /dev/zero, which is a special file that produces a constant stream of null characters. It then directs this stream of null characters to its output (of: "output file"), which, purely out of nostalgia, I've decided to call smash. We then tell dd to write the data in 256MB chunks (bs: "block size") and to use four such chunks (count=4). Thus the above command pipes four chunks of 256MB of null characters into a file called smash, creating a 1GB file.

However, we want to create a 17GB file, so you might think you could run

$ dd if=/dev/zero of=smash bs=1G count=17

but unfortunately, you'd be wrong. At least within a Vagrant VM, this isn't likely to work. I think this is because the amount of RAM is also restricted within the VM, hence allocating a 1GB buffer in RAM simply isn't possible. For example, trying to create a 1GB file with bs=1G produces this error:

vagrant@bullseye:~$ dd if=/dev/zero of=smash bs=1G count=1
dd: memory exhausted by input buffer of size 1073741824 bytes (1.0 GiB)

Even using half a GB doesn't work:

vagrant@bullseye:~$ dd if=/dev/zero of=smash bs=512M count=1
dd: memory exhausted by input buffer of size 536870912 bytes (512 MiB)

Fortunately, 256MB is possible. In that case, we can pipe 68 chunks of null data, each 256MB in size, to produce a 17GB file (in other words, 17GB / 256MB = 68):

vagrant@bullseye:~$ dd if=/dev/zero of=smash bs=256M count=68
68+0 records in
68+0 records out
18253611008 bytes (18 GB, 17 GiB) copied, 37.5588 s, 486 MB/s
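
As an aside, if dd's buffer-size limits get in the way, the fallocate utility (part of util-linux, so it should already be on the Debian guest) can reserve the space for a large test file in one go, without needing a big buffer in RAM. A sketch of that alternative, not what I used here:

vagrant@bullseye:~$ fallocate -l 17G smash    # reserve 17 GiB of disk blocks for the file 'smash'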

Now the VM's disk is almost full

vagrant@bullseye:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 219M 0 219M 0% /dev
tmpfs 48M 428K 47M 1% /run
/dev/sda1 20G 18G 706M 97% /
tmpfs 237M 0 237M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
vagrant 437G 384G 53G 88% /vagrant
tmpfs 48M 0 48M 0% /run/user/1000

with only 706MB of free space available in the root partition.

Shock! Horror! Our disk is almost full! What do we do? Let's shut it down and resize the disk image.

Converting from VMDK to VDI

To convert the VMDK (Virtual Machine Disk; VMware's disk format) disk image to VDI (Virtual Disk Image; the format native to VirtualBox), the VM can't be running, so we log out

# Ctrl-d
vagrant@bullseye:~$
logout
Connection to 127.0.0.1 closed.

and halt the VM:

$ vagrant halt bare-test-bullseye
==> bare-test-bullseye: Attempting graceful shutdown of VM...

To convert the disk image to the correct format, we need to find it first. When using Vagrant on a Linux system, the default VM provider is VirtualBox, hence you'll likely find the VM disk images on the host system in the ~/VirtualBox VMs directory. In my case, this was:

~/VirtualBox VMs/vagrant_bare-test-bullseye_1724844829238_36188/
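
If you're not sure where the image lives, VirtualBox can tell you: the list hdds subcommand prints every disk image it has registered, including each image's location on the host. (A quick aside; the exact output fields can vary between VirtualBox versions.)

$ vboxmanage list hdds | grep -i 'location'    # show where each registered disk image lives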

Changing into this directory and listing its contents, you'll see output like this:

$ cd ~/VirtualBox\ VMs/vagrant_bare-test-bullseye_1724844829238_36188/
$ ls
box.vmdk vagrant_bare-test-bullseye_1724844829238_36188.vbox
Logs vagrant_bare-test-bullseye_1724844829238_36188.vbox-prev

The file we're interested in is box.vmdk. As we can see from the filename, it's a VMDK image. Unfortunately, it's not possible to resize VMDK images, hence we have to convert this to a VDI image (which is resizable). We convert into VDI format via the --format option to the clonehd subcommand of the vboxmanage utility, like so:

$ vboxmanage clonehd box.vmdk box.vdi --format VDI
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Clone medium created in format 'VDI'. UUID: dbb6e4dc-85f1-4608-a2f3-7539a8f99d48
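
Depending on your VirtualBox version, the documentation may point you at clonemedium instead; clonehd is kept around as an older alias for the same operation. The equivalent invocation would look something like this (a sketch, so check it against your VirtualBox release):

$ vboxmanage clonemedium disk box.vmdk box.vdi --format VDI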

You will also see the command written as VBoxManage in other articles online. It doesn't matter whether you use vboxmanage or VBoxManage: both are symlinks to the same underlying script:

$ ls -la $(which vboxmanage)
lrwxrwxrwx 1 root root 27 Oct 19 2023 /usr/bin/vboxmanage -> ../share/virtualbox/VBox.sh
$ ls -la $(which VBoxManage)
lrwxrwxrwx 1 root root 27 Oct 19 2023 /usr/bin/VBoxManage -> ../share/virtualbox/VBox.sh

After cloning and converting to VDI format, you'll find a file called box.vdi in the VM's VirtualBox directory. Now we're in a position to resize it.

Resizing the VDI disk image

To resize the image, we pass the --resize option to the modifyhd subcommand of the vboxmanage utility:

$ vboxmanage modifyhd box.vdi --resize 40000
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

where the --resize option takes the new size in megabytes as its argument. In other words, the invocation used here (--resize 40000) resizes the disk image to roughly 40GB (fdisk later reports this as 39.06 GiB).
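
As with clonehd, newer VirtualBox releases expose this operation under the name modifymedium, with modifyhd kept as an alias. A hedged equivalent, again to be checked against your VirtualBox release:

$ vboxmanage modifymedium disk box.vdi --resize 40000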

Our VDI disk image is now of the correct size. To continue, we need to detach the old image from the VM and attach the new image to it.

Here be dragons!

Do NOT try resizing the VMDK image by using modifyhd as we just did above; this will break the disk image. Here's what will happen if you try:

$ vboxmanage modifyhd box.vmdk --resize 40000
0%...
Progress object failure: NS_ERROR_CALL_FAILED
VBoxManage: error: Failed to resize medium
VBoxManage: error: Code NS_ERROR_CALL_FAILED (0x800706BE) - Call to remote object failed (extended info not available)
VBoxManage: error: Context: "RTEXITCODE handleModifyMedium(HandlerArg*)" at line 937 of file VBoxManageDisk.cpp

We see that the process failed. Unfortunately, it also wrote data to the disk image, hence if you now try to boot the VM, you'll get an error like this:

$ vagrant up bare-test-bullseye
Bringing machine 'bare-test-bullseye' up with 'virtualbox' provider...
==> bare-test-bullseye: Checking if box 'debian/bullseye64' version '11.20221219.1' is up to date...
==> bare-test-bullseye: Clearing any previously set forwarded ports...
==> bare-test-bullseye: Clearing any previously set network interfaces...
==> bare-test-bullseye: Preparing network interfaces based on configuration...
    bare-test-bullseye: Adapter 1: nat
    bare-test-bullseye: Adapter 2: hostonly
==> bare-test-bullseye: Forwarding ports...
    bare-test-bullseye: 22 (guest) => 2216 (host) (adapter 1)
==> bare-test-bullseye: Booting VM...
There was an error while executing `VBoxManage`, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["startvm", "ab03d375-60cf-44ef-bf68-b85f83a807de", "--type", "headless"]

Stderr: VBoxManage: error: Could not open the medium '/home/cochrane/VirtualBox VMs/vagrant_bare-test-bullseye_1724844829238_36188/box.vmdk'.
VBoxManage: error: VMDK: inconsistent references to grain directory in '/home/cochrane/VirtualBox VMs/vagrant_bare-test-bullseye_1724844829238_36188/box.vmdk' (VERR_VD_VMDK_INVALID_HEADER).
VBoxManage: error: VD: error VERR_VD_VMDK_INVALID_HEADER opening image file '/home/cochrane/VirtualBox VMs/vagrant_bare-test-bullseye_1724844829238_36188/box.vmdk' (VERR_VD_VMDK_INVALID_HEADER)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component MediumWrap, interface IMedium

Oopsie. Back up your data, kids!

Detaching VMDK and attaching VDI

Assuming that you didn't try to resize the VMDK image (honestly, don't do that), we're now able to detach the VMDK image from the VM and replace it with the resized VDI image. There are two ways you could do this: either by using the GUI or by using the command line. Let's see how to use the GUI first.

Detaching and attaching via the GUI

Start VirtualBox, e.g. from the command line:

$ virtualbox

The main VirtualBox window will appear, showing you a list of all available virtual machines. Click on the name of our example VM in the list of VMs; the item will appear highlighted.

VirtualBox VM manager: about to click on Settings for example VM

Click on the Settings button (the orange cog icon in the top middle of the screen). You'll now see the General settings for the VM.

VM General settings window

Click on the Storage item from the list in the left-hand panel. You'll now see the storage devices associated with the VM.

VM Storage: settings window

There are two controllers associated with the VM: the Floppy Controller Controller (why "Controller" appears twice, I don't know) and the SATA Controller (i.e. the controller for the hard disk).

You'll see that the box.vmdk disk image is associated with the SATA Controller. We want to detach this image from the VM. To do this, right-click on the box.vmdk item in the central pane. A context menu will appear with a single item: Remove Attachment.

VM Storage: remove attachment from SATA controller

After selecting the Remove Attachment context menu item, you should now see that there is no disk image associated with the SATA controller. We're now ready to attach the VDI image we created and resized earlier.

To attach the VDI disk image, right-click on the SATA Controller within the central pane. A context menu will appear with three options.

VM Storage: add hard disk to SATA controller

Select the Hard Disk option. Note that this option has a green plus symbol above the hard-disk icon, implying that clicking it will add a hard disk. It would have been clearer if VirtualBox had simply called the option Add Hard Disk. Oh well.

The Medium Selector window now appears, which allows you to select which medium (i.e. virtual hard disk, a.k.a. disk image) to add to the SATA Controller. You'll see the VDI image we created earlier listed under Not Attached.

VM Storage: select VDI from Medium Selector

Select the box.vdi item and click on the Choose button at the bottom right-hand corner of the window. You'll now see the Storage settings dialog again, where the VDI image has been added to the SATA Controller.

VM Storage: VDI disk image attached to SATA controller

Congratulations! You've successfully attached your newly resized disk image to the example VM. Before we boot the VM again, let's see how to do this via the command line.

Detaching and attaching via the command line

I'm a big fan of the command line, as it's often much easier to repeat a process consistently there than by clicking around in a GUI. I hadn't found any discussion online which uses only the command line, so I decided to work out how to do this myself. Here's what I learned.

Before we can detach or attach a disk image, we need to collect a few bits of information about the VM. To do this we use the showvminfo subcommand of the vboxmanage utility. For instance, to see all info about our example VM, we use this command:

$ vboxmanage showvminfo vagrant_bare-test-bullseye_1724851891664_74757
< far too much info to show >

The showvminfo subcommand produces a lot of output, so we have to filter it for the things we need. One of those is the name of the controller handling interactions between the VM and the disk image, so we grep for the word Controller in the showvminfo output, e.g.:

$ vboxmanage showvminfo vagrant_bare-test-bullseye_1724851891664_74757 | grep -i controller
Graphics Controller: VMSVGA
Storage Controllers:
#0: 'SATA Controller', Type: IntelAhci, Instance: 0, Ports: 1 (max 30), Bootable
#1: 'Floppy Controller Controller', Type: I82078, Instance: 0, Ports: 1 (max 1), Bootable

Here we see that we have two storage controllers, one called SATA Controller and the other called Floppy Controller Controller,[2] which is the same situation we had in the previous section. To detach the currently attached disk image, we need to know the details about the disk image's connection to the controller. We can glean this information by asking grep for a couple of lines of context after a match for SATA Controller via grep's -A option:[3]

$ vboxmanage showvminfo vagrant_bare-test-bullseye_1724851891664_74757 | grep -A 2 'SATA Controller'
#0: 'SATA Controller', Type: IntelAhci, Instance: 0, Ports: 1 (max 30), Bootable
  Port 0, Unit 0: UUID: 85c10a71-a459-4731-b587-609d16c4ef83
    Location: "/home/cochrane/VirtualBox VMs/vagrant_bare-test-bullseye_1724851891664_74757/box.vmdk"

The important pieces of information to note here are the name of the VM (embedded in the Location: string), the path to the currently attached disk image (this will be useful when attaching a new image), the name of the controller (which we've already worked out) and the port number upon which the disk image is attached to the controller.
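
If you don't have the VM's full VirtualBox name handy (Vagrant generates it, timestamp and all), the list vms subcommand prints the names and UUIDs of every registered VM; a small aside that isn't part of the original steps:

$ vboxmanage list vms    # show the names and UUIDs of all registered VMs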

Confusingly, to remove a disk (a "medium" in VirtualBox parlance), one needs to use the storageattach subcommand. Thus, to remove the VMDK image from the VM, we use this command:

$ vboxmanage storageattach vagrant_bare-test-bullseye_1724851891664_74757 \
    --storagectl 'SATA Controller' --port 0 --medium none

where I've split the command across multiple lines for readability.

If you've had problems trying to work out how to detach a VirtualBox image before, don't worry: it's not obvious from the (extensive) VBoxManage documentation that one has to use the storageattach subcommand. I only managed to learn the solution after I found an answer on StackOverflow to the question "How to detach vmdk using vboxmanage cli?".

The --medium none part is important here. This is the bit that detaches the storage medium. Effectively, we replace the storage medium attached to port 0 of the SATA Controller on the VM called vagrant_bare-test-bullseye_1724851891664_74757 with "nothing". One can use the mnemonic "attach nothing to the occupied port" to try to remember to use storageattach --medium none to detach an image.

In some sense, using storageattach to detach a storage device is like going to the "Start" menu on Windows to stop the computer. Oh well.

Now that we've detached the VMDK image, we can attach the VDI image in its place:

$ vboxmanage storageattach vagrant_bare-test-bullseye_1724851891664_74757 \
    --storagectl 'SATA Controller' --port 0 --type hdd \
    --medium /home/cochrane/VirtualBox\ VMs/vagrant_bare-test-bullseye_1724851891664_74757/box.vdi

where, again, I've split the command across multiple lines for readability.

Note that the --port and --type options are mandatory. We use --port 0 again here because that's where the previous disk image was attached. It's also necessary to use the full path to the new disk image.

Filtering the showvminfo output, looking for the SATA Controller text and displaying two lines of context, we see the new disk image attached to the controller:

$ vboxmanage showvminfo vagrant_bare-test-bullseye_1724851891664_74757 | grep -C 2 'SATA Controller'
VM process priority: default
Storage Controllers:
#0: 'SATA Controller', Type: IntelAhci, Instance: 0, Ports: 1 (max 30), Bootable
  Port 0, Unit 0: UUID: 5bcc2277-3ee2-4dd5-a7c7-c90d70023281
    Location: "/home/cochrane/VirtualBox VMs/vagrant_bare-test-bullseye_1724851891664_74757/box.vdi"

Resizing the filesystem

Now that we've expanded the disk image to its new size, the final thing we need to do is resize the filesystem. We do this within the VM via the resize2fs command.

First, let's boot the system and SSH into it:

$ vagrant up bare-test-bullseye
$ vagrant ssh bare-test-bullseye

Having a look around, we can see that the large file we created earlier is still there:

vagrant@bullseye:~$ ls -l
total 17825796
-rw-r--r-- 1 vagrant vagrant 18253611008 Aug 28 12:55 smash

What you'll also notice is that the disk still seems to be only 20GB in size:

vagrant@bullseye:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 219M 0 219M 0% /dev
tmpfs 48M 428K 47M 1% /run
/dev/sda1 20G 18G 706M 97% /
tmpfs 237M 0 237M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
vagrant 437G 385G 53G 88% /vagrant
tmpfs 48M 0 48M 0% /run/user/1000

What's happened? Well, we've increased the disk size, but we haven't resized the filesystem to match it.
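
A quick way to see the difference is lsblk, which reports block-device sizes rather than mounted-filesystem sizes; run inside the VM, the disk itself should show the new size while df still reports the old filesystem size (a sketch; the output layout varies between versions):

vagrant@bullseye:~$ lsblk /dev/sda    # compare the block-device view with df's filesystem view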

Usually, we'd need to fix the partition table with fdisk to match the new disk size, yet you'll find that fdisk already shows the correct size:

vagrant@bullseye:~$ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 39.06 GiB, 41943040000 bytes, 81920000 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1175070d

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 81919966 81917919 39.1G 83 Linux

I found this surprising. My previous experience (with real hardware and admittedly with RAID systems) has been that we need to match the partition table to the new disk geometry before being able to resize the filesystem. It seems that VirtualBox has done this work for us. Perhaps this happens as part of using the --resize option to vboxmanage? No idea.

Leaving the partition table as-is (I'll show how to set the partition table soon) and resizing the filesystem, we get the expected size:

vagrant@bullseye:~$ sudo resize2fs /dev/sda1
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 5
The filesystem on /dev/sda1 is now 10239739 (4k) blocks long.

vagrant@bullseye:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 219M 0 219M 0% /dev
tmpfs 48M 428K 47M 1% /run
/dev/sda1 39G 18G 19G 49% /
tmpfs 237M 0 237M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
vagrant 437G 385G 53G 88% /vagrant
tmpfs 48M 0 48M 0% /run/user/1000

We've resized the disk and its filesystem and the data is still intact. We've completed what we set out to do! Yay!

Fixing the partition table (if one needs to)

As I mentioned above, I thought it would be necessary to create a new partition table with the new disk geometry before resizing the filesystem. For instance, the resize2fs man page mentions fixing the partition size before resizing:

The resize2fs program does not manipulate the size of partitions. If you wish to enlarge a filesystem, you must make sure you can expand the size of the underlying partition first.

So, if you want to (or need to) fix up the partition table, here is how you'd do it.

Start fdisk and display the current situation by using the p command (for print):

vagrant@bullseye:~$ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p
Disk /dev/sda: 39.06 GiB, 41943040000 bytes, 81920000 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1175070d

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 81919966 81917919 39.1G 83 Linux

To fix the partition size, we have to delete the old partition entry (note that changes are only kept in memory until one chooses to write the changes to disk). To delete a partition entry, we use the d command (delete):

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Since there's only one partition, it's obvious which partition to select. Displaying the partition table with p again, we can see that the partition is gone:

Command (m for help): p
Disk /dev/sda: 39.06 GiB, 41943040000 bytes, 81920000 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1175070d

We create a new partition with the n command (new) and choose the default values.

Command (m for help): n
Partition type
   p primary (0 primary, 0 extended, 4 free)
   e extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (1-4, default 1):
First sector (2048-81919999, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-81919999, default 81919999):

Created a new partition 1 of type 'Linux' and of size 39.1 GiB.
Partition #1 contains a ext4 signature.

Do you want to remove the signature? [Y]es/[N]o:
Do you want to remove the signature? [Y]es/[N]o: N

I chose to keep the signature here because I was unsure whether deleting it was a good idea; deleting things is usually a bad idea. Keeping the signature didn't cause any later problems in my test setup.

To make the change to the partition table permanent, we write the changes with the w command (write):

Command (m for help): w

The partition table has been altered.
Syncing disks.

Ok, that looks good. We can now resize the filesystem with resize2fs as we did in the previous section:

vagrant@bullseye:~$ sudo resize2fs /dev/sda1
resize2fs 1.46.2 (28-Feb-2021)
Filesystem at /dev/sda1 is mounted on /; on-line resizing required
old_desc_blocks = 3, new_desc_blocks = 5
The filesystem on /dev/sda1 is now 10239744 (4k) blocks long.

Checking the size of the filesystem with df, we see that it has the correct size:

vagrant@bullseye:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 219M 0 219M 0% /dev
tmpfs 48M 428K 47M 1% /run
/dev/sda1 39G 18G 19G 49% /
tmpfs 237M 0 237M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
vagrant 437G 385G 53G 88% /vagrant
tmpfs 48M 0 48M 0% /run/user/1000
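
As an alternative to driving fdisk interactively, the growpart tool from the cloud-guest-utils package can grow a partition in place with a single command. This is a sketch of a different route, not what I used above, and assumes you're happy to install an extra package in the guest:

vagrant@bullseye:~$ sudo apt-get install -y cloud-guest-utils    # provides growpart
vagrant@bullseye:~$ sudo growpart /dev/sda 1                     # grow partition 1 to fill the disk
vagrant@bullseye:~$ sudo resize2fs /dev/sda1                     # then grow the filesystem as before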

Fortunately, the disk doesn't have any swap space associated with it, otherwise we'd have had to move it as well. For details about how to move swap space after resizing a partition, check out this post: Howto – Resize linux partition and move swap space.

Our disk has been resized! Great!

Setting the Vagrant VM's disk size at creation time

If you have a good idea of how large your disk needs to be before creating the VM (and you use VirtualBox as the VM provider), you can set the value yourself via the vagrant-disksize plugin.

To install the plugin, use the vagrant plugin command:

$ vagrant plugin install vagrant-disksize

Then use the disksize.size configuration setting in your Vagrantfile, e.g.:

VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "xenial" do |xenial|
    xenial.vm.box = 'ubuntu/xenial64'
    xenial.disksize.size = '50GB'
  end
end

In some cases, this might only stave off the inevitable, in which case the resizing discussion above should still be of use. In other cases, it will keep your VM chugging along happily for a long time.

Between when I needed to do this and now, it seems that setting the disk size has become possible natively in Vagrant. The new functionality also seems to handle resizing for you. Here's an example Vagrantfile configuration from the Vagrant disk documentation:

Vagrant.configure("2") do |config|
  config.vm.define "hashicorp" do |h|
    h.vm.box = "hashicorp/bionic64"
    h.vm.provider :virtualbox

    h.vm.disk :disk, size: "100GB", primary: true
  end
end

Note that the primary: true setting is important so that Vagrant doesn't create a new drive of this specific size but uses the main drive already attached to the VM.

It's not mentioned explicitly in the docs, but my guess is that Vagrant (with the help of VirtualBox) resizes the disk image upon restarting the VM.
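
One caveat: depending on your Vagrant version, the config.vm.disk feature may still be gated behind Vagrant's experimental-features flag, so you might need to enable it when bringing the VM up (an assumption on my part; check the Vagrant documentation for your release):

$ VAGRANT_EXPERIMENTAL="disks" vagrant up    # enable the experimental disks feature for this invocation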

Wrapping up

So there ya go. If you ever happen to have the same problem and need to know how to resize your Vagrant VM's disk, hopefully the information in this post will be useful!

  1. I'm available for freelance Python/Perl backend development and maintenance work. Contact me at paul@peateasea.de and let's discuss how I can help solve your business' hairiest problems. ↩

  2. Again, I've got no idea why the word "Controller" appears here twice. ↩

  3. grep can show a specified number of lines of context close to a matching line. For instance, to show two lines after a match, use -A 2. To display three lines before a match, use -B 3. To show five lines of context before and after a matching line, use -C 5. ↩
