Mikhail Panichev
How to Build a Home Cloud with Proxmox

One day, I realized I wanted a home server to experiment with deployment and infrastructure. Posts on r/homelab and r/selfhosted largely inspired this idea.

At first, I considered going the bare-metal route and building a k3s cluster on several Raspberry Pis. However, after comparing costs, limitations, and capabilities, I decided to buy a couple of Lenovo ThinkCentre mini PCs and install Proxmox Virtual Environment (PVE) on them.

At work I’ve actively used cloud providers, so it was important for me to have a user-friendly experience with virtual servers — similar to what a compute cloud offers.

Key requirements for the virtualization infrastructure:

  • VMs must live on an isolated network, inaccessible from the home network;
  • easy creation of a Debian server with cloud-init;
  • server management via Terraform.

After spending several days searching online, tweaking configs, and reinstalling everything from scratch a couple of times, I came up with a working solution and identified some pitfalls.

Preparing a Private Network

I decided to ensure basic security by placing virtual servers on a network separate from other devices.

Instead of spending time on VLANs, complex routing, and firewalls, I used a separate Linux Bridge on the PVE host.

The diagram below shows the conceptual network layout. All VMs connect to the private network interface vmbr1. VMs that need to be accessible from the home network also connect to the vmbr0 interface.
To provide internet access to private VMs, NAT is implemented from vmbr1 through vmbr0.

A schematic diagram showing virtual machines on the PVE private network 10.x.x.x, with one of them accessible on the home network 192.168.1.x

You can add the private network interface through the Proxmox web UI or with standard Debian tooling.

Let’s go with the second option and add the following configuration to the /etc/network/interfaces file:

auto vmbr1
iface vmbr1 inet static
    address 10.0.2.1/24
    bridge-ports none
    bridge-stp off
    bridge-fd 0

Activate the new interface with this command:

ifup vmbr1

The newly created interface should appear in the Proxmox control panel.
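You can also verify the bridge from the shell with iproute2 (assuming the address from the config above):

```shell
# Check that the bridge is up and carries the expected address
ip addr show vmbr1

# List all bridge-type interfaces in brief form
ip -br link show type bridge
```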

Screenshot of network settings from the Proxmox control panel after adding a new interface

To give VMs internet access, you need to set up NAT.

In the /etc/sysctl.conf file, enable IP forwarding by adding this line:

net.ipv4.ip_forward=1

Apply this setting right away with the following command:

sysctl -w net.ipv4.ip_forward=1

Next, enable NAT in iptables with this command:

iptables -t nat -A POSTROUTING -s 10.0.2.0/24 -o vmbr0 -j MASQUERADE

Here, vmbr0 is the name of the interface with internet access.

To preserve iptables settings across reboots, install the iptables-persistent package:

apt install iptables-persistent
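The installer saves the rules that exist at install time; if you change them later, you can re-save them manually (a sketch, assuming Debian's netfilter-persistent service):

```shell
# Inspect the current NAT rules
iptables -t nat -S POSTROUTING

# Persist the current ruleset to /etc/iptables/rules.v4
netfilter-persistent save
```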

The network is now configured, and we can move on to the next step.

Preparing the Image

To create VMs similar to those in the cloud, you need a Debian cloud image — an official qcow2 build with cloud-init preinstalled.

You’ll also need tools to modify virtual machine images:

apt install libguestfs-tools

Here’s the command to download and prepare the image for use as a template:

wget https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2 \
&& virt-customize -a debian-12-generic-amd64.qcow2 --install qemu-guest-agent,net-tools \
&& virt-customize -a debian-12-generic-amd64.qcow2 --run-command "echo 'en_US.UTF-8 UTF-8' >> /etc/locale.gen" \
&& virt-customize -a debian-12-generic-amd64.qcow2 --run-command "locale-gen" \
&& virt-customize -a debian-12-generic-amd64.qcow2 --run-command "update-locale LANGUAGE=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 LANG=en_US.UTF-8" \
&& virt-customize -a debian-12-generic-amd64.qcow2 --truncate /etc/machine-id

In the --install argument, you can specify additional packages to bake into the image, so every VM cloned from it gets them automatically. The qemu-guest-agent package is required for PVE's guest agent integration to work.

It’s also important to run --truncate /etc/machine-id so that a unique identifier is generated for each copied VM.

Locale manipulations aren’t mandatory, but they help avoid pesky locale error messages when working via SSH.
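Before importing, you can sanity-check the customized image with tools from qemu-utils and libguestfs-tools:

```shell
# Confirm the image format and virtual size
qemu-img info debian-12-generic-amd64.qcow2

# Confirm /etc/machine-id was truncated (should print nothing)
virt-cat -a debian-12-generic-amd64.qcow2 /etc/machine-id
```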

Once the image is prepared, you need to create the actual VM template in Proxmox. You can do this with the following command:

QM_ID=9001 \
&& qm create $QM_ID --name "debian12-cloudinit" --memory 512 --cores 1 --net0 virtio,bridge=vmbr1 --machine q35 \
&& qm importdisk $QM_ID debian-12-generic-amd64.qcow2 local-lvm \
&& qm set $QM_ID --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-$QM_ID-disk-0 \
&& qm set $QM_ID --boot c --bootdisk scsi0 \
&& qm set $QM_ID --ide2 local-lvm:cloudinit \
&& qm set $QM_ID --serial0 socket --vga serial0 \
&& qm set $QM_ID --agent enabled=1 \
&& qm template $QM_ID

The template ID (9001) and name ("debian12-cloudinit") can be changed to any arbitrary values. Resources and network settings will be redefined for each copied instance.
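To try the template without Terraform, you can clone it manually with qm (the VM ID 999 and the address here are just examples):

```shell
# Full clone of template 9001 into a new VM with ID 999
qm clone 9001 999 --name test-vm --full

# Set cloud-init credentials and network, then start it
qm set 999 --ciuser cloud --sshkeys ~/.ssh/id_rsa.pub \
  --ipconfig0 "ip=10.0.2.99/24,gw=10.0.2.1"
qm start 999
```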

All preparations are done, and now you can use Terraform to manage virtual servers.

Terraform

To work with Proxmox via Terraform or OpenTofu, you can use the telmate/proxmox provider.

For security reasons, it’s recommended to create a separate user with limited permissions. However, in a home environment with access only from the local network, you can skip this step.
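If you do want a dedicated user, a minimal sketch with pveum might look like this (the privilege list is an assumption and may need tuning for your provider version):

```shell
# Create a role with the privileges the provider typically needs
pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.Audit VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.CPU VM.Config.Cloudinit VM.Config.Disk VM.Config.Memory VM.Config.Network VM.Config.Options VM.Monitor VM.PowerMgmt Sys.Audit"

# Create a PVE-realm user and grant the role on the whole tree
pveum user add terraform@pve --password <secret>
pveum aclmod / -user terraform@pve -role TerraformProv
```

If you go this route, the provider config below would reference terraform@pve instead of a @pam user.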

Contents of the provider.tf file:

terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc03" # latest version at the time of writing: 3.0.2-rc07
    }
  }
}

provider "proxmox" {
  pm_api_url      = "https://${var.pm_host}:8006/api2/json"
  pm_user         = "${var.pm_user}@pam"
  pm_password     = var.pm_password
  pm_tls_insecure = true
}

Contents of the variables.tf file:

variable "pm_user" {}
variable "pm_password" { sensitive = true }
variable "pm_host" {}
variable "ssh_key" { default = "~/.ssh/id_rsa.pub" }

After installing the provider with the terraform init command, you can describe the infrastructure just like when working with cloud providers.
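Variable values can be passed via environment variables so the password stays out of files (host and credentials here are hypothetical):

```shell
export TF_VAR_pm_host="192.168.1.5"
export TF_VAR_pm_user="root"
export TF_VAR_pm_password="<secret>"

terraform init
terraform plan
```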

You can refer to the previously created cloud template either by name using clone or by its ID using clone_id.

Example of creating a virtual server accessible only on the private network:

locals {
  debian_template    = "debian12-cloudinit"
  debian_template_id = 9001
}

resource "proxmox_vm_qemu" "internal-vm" {
  vmid        = 310
  name        = "internal-vm"
  target_node = "pve-1"
  clone       = local.debian_template
  onboot      = true
  cpu {
    cores = 1
  }
  memory           = 512
  boot             = "order=scsi0"
  scsihw           = "virtio-scsi-single"
  agent            = 1
  vm_state         = "running"
  automatic_reboot = true

  ciupgrade = false
  ipconfig0 = "ip=10.0.2.11/24,gw=10.0.2.1"
  skip_ipv6 = true
  ciuser    = "cloud"
  sshkeys   = file(var.ssh_key)

  serial {
    id = 0
  }

  disks {
    scsi {
      scsi0 {
        disk {
          storage = "local-lvm"
          size    = "8G"
        }
      }
    }
    ide {
      ide1 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
  }

  network {
    id     = 0
    bridge = "vmbr1"
    model  = "virtio"
  }
}

To create a server accessible on the local network, specify an additional bridge and its configuration:

# ...

resource "proxmox_vm_qemu" "jump-host" {
  vmid        = 110
  name        = "jump-host"
  target_node = "pve-1"
  clone_id    = local.debian_template_id
  # ...
  ipconfig0 = "ip=10.0.2.10/24,gw=10.0.2.1"
  ipconfig1 = "ip=dhcp"

  # ...

  network {
    id     = 0
    bridge = "vmbr1"
    model  = "virtio"
  }

  network {
    id      = 1
    bridge  = "vmbr0"
    model   = "virtio"
    macaddr = "AA:24:11:75:01:10"
  }
}

The VM’s MAC address is fixed so the DHCP server can hand out a static lease for it. The VMs created above are accessible via SSH through the jump host, using the cloud user and the SSH key specified in the Terraform variables.
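Access through the jump host can be wired up in ~/.ssh/config with ProxyJump (the host aliases and the jump host's home-network address are examples):

```ssh_config
Host jump-host
    HostName 192.168.1.110
    User cloud

Host internal-vm
    HostName 10.0.2.11
    User cloud
    ProxyJump jump-host
```

After that, `ssh internal-vm` reaches the private VM in one step.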

Working with a PVE Cluster

To manage cluster infrastructure, you can connect to any node’s API. In PVE, there’s no “master” host. By connecting to one node’s API, you can manage resources on other nodes.

However, if the Terraform state includes resources on a powered-off node, Terraform will fail with an error. So when using Terraform, every cluster node that hosts managed resources must be online.

Quorum Error

To simulate distributed systems for testing, I use a cluster of two hosts. Most of the time, I don’t need both PVE hosts, and only one is powered on.

By default, this approach leads to a “cluster not ready - no quorum?” error and makes the node effectively unusable: the cluster filesystem goes read-only, and VMs won’t start.

To allow the cluster to work with just one node, run this command:

pvecm expect 1

To make this setting persistent, add three lines to the /etc/pve/corosync.conf file:

# ...

quorum {
  # ... defaults

  expected_votes: 1
  two_node: 0
  wait_for_all: 0
}

# ...

Working with Templates

Since the VM template image is stored on the local file system, you need to create it on each host.

Proxmox expects unique server and template IDs at the cluster level, so:

  • when preparing a template on each host, you must change its ID;
  • when using clone_id in Terraform, you must specify the template ID located on the specified target_node.
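One convenient pattern (my own convention, not from the provider docs) is to keep the per-node template IDs in a locals map and look them up by target node:

```hcl
locals {
  # Template ID of debian12-cloudinit on each node (example IDs)
  debian_template_ids = {
    "pve-1" = 9001
    "pve-2" = 9002
  }
}

resource "proxmox_vm_qemu" "example" {
  target_node = "pve-2"
  clone_id    = local.debian_template_ids["pve-2"]
  # ...
}
```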
