
Setting up the home lab: Terraform

When I worked for CBS, I discovered Terraform, a tool that lets you define infrastructure as code ("IaC"). I recently purchased a home lab server (the details of how I have it set up will be covered in a future article) and installed Proxmox VE on it. I'd like to improve my DevOps skills, so I thought I'd play with Terraform for my home lab. Eventually I'd like to have my entire home infrastructure represented as Terraform templates.

To get Terraform set up on my home lab server (which I call "The Beast" XD), I'll be following the excellent tutorial by Tanner Cude. I'll be reproducing a lot of it here just in case that site goes down. I'll also update the templates Tanner provides for the current version (v3.0.1-rc3) of the Telmate Proxmox provider.

Note that I'll be doing the local machine portion on my desktop machine, which runs Ubuntu Linux. Tanner's instructions are for a MacBook, but things are pretty similar.

To begin with, I've installed Terraform on my desktop machine following the installation instructions provided by Hashicorp.

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
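A quick version check confirms the install worked:

terraform -version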

I then SSH'd into my home lab server and set up a role for the Terraform worker to assume:

pveum role add terraform-role -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt SDN.Use"

You can also do this via the web UI, but it's easier (IMO) to do via the command line.
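If you want to double-check your work, pveum can list the roles and their privileges:

pveum role list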

Next, and again on my home lab server, I'll create the terraform user, associate it with the role I created above, and get an authentication token:

pveum user add terraform@pve 
pveum aclmod / -user terraform@pve -role terraform-role
pveum user token add terraform@pve terraform-token --privsep=0

I get a response similar to the one below. If you're following along, make sure to save the access token (the UUID in the last line of the response); you won't be able to retrieve it later.

┌──────────────┬──────────────────────────────────────┐
│ key          │ value                                │
╞══════════════╪══════════════════════════════════════╡
│ full-tokenid │ terraform@pve!terraform-token        │
├──────────────┼──────────────────────────────────────┤
│ info         │ {"privsep":"0"}                      │
├──────────────┼──────────────────────────────────────┤
│ value        │ xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx │
└──────────────┴──────────────────────────────────────┘
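Before heading back to the desktop, an optional sanity check is to confirm the user exists and inspect its effective permissions (the exact output varies a bit between Proxmox versions):

pveum user list
pveum user permissions terraform@pve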

Now that the basics are set up on Proxmox, it's time to go back to my local machine.

I created a Git repo for my Terraform code called home_infra and added a .gitignore file so that secrets like the token ID and access token are only stored locally. Later on I might store them as repository secrets, but I'll have to see how that works with the Terraform command line. For now they're fine in a .tfvars file, and if I want to work on my infrastructure from another machine, I can always SSH into my desktop from there or copy the file.
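For reference, the relevant bits look roughly like this; the token values are placeholders for the real ones from the previous step, and the ignore patterns are just the usual Terraform suspects:

# terraform.tfvars -- stays local, never committed
token_id     = "terraform@pve!terraform-token"
token_secret = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

# .gitignore
*.tfvars
.terraform/
terraform.tfstate
terraform.tfstate.backup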

Now I'm going to set up the public Terraform vars that I'll use when spinning up a VM. I'll write them to a file called vars.tf:

# TF vars for spinning up VMs

# Set your public SSH key here
variable "ssh_key" {
  default = "ssh-ed25519 <My SSH Public Key> aurelia@desktop"
}

# Establish which Proxmox host you'd like to spin a VM up on
variable "proxmox_host" {
  default = "thebeast"
}

# Specify which template name you'd like to use
variable "template_name" {
  default = "ubuntu-2404-template2"
}

# Establish which NIC you would like to utilize
variable "nic_name" {
  default = "vmbr0"
}

# I don't have VLANs set up
# # Establish the VLAN you'd like to use
# variable "vlan_num" {
#   default = "place_vlan_number_here"
# }

# Provide the URL of the host you would like the API to communicate on.
# It is safe to default to setting this as the URL for what you used
# as your `proxmox_host`, although they can be different
variable "api_url" {
  default = "https://thebeast.scurrilous.foo:8006/api2/json"
}

# Blank var for use by terraform.tfvars
variable "token_secret" {
}

# Blank var for use by terraform.tfvars
variable "token_id" {
}

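One nice side effect of declaring these as variables: any default can be overridden for a single run without editing the file. For example (with a made-up template name):

terraform plan -var 'template_name=some-other-template'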

Finally, I'll create main.tf to define my infrastructure, and I'll put two test VMs in just for grins. Note how this references the variables in both vars.tf and terraform.tfvars.

terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      # latest version as of 16 July 2024
      version = "3.0.1-rc3"
    }
  }
}

provider "proxmox" {
  # References our vars.tf file to plug in the api_url 
  pm_api_url = var.api_url
  # References our secrets.tfvars file to plug in our token_id
  pm_api_token_id = var.token_id
  # References our secrets.tfvars to plug in our token_secret 
  pm_api_token_secret = var.token_secret
  # Default to `true` unless you have TLS working within your pve setup 
  pm_tls_insecure = true
}

resource "proxmox_vm_qemu" "cloudinit-test1" {
    name = "terraform-test-vm"
    desc = "A test for using terraform and cloudinit"

    # Node name has to be the same as the node's name within the cluster;
    # this might not include the FQDN
    target_node = var.proxmox_host

    # The template name to clone this vm from
    clone = var.template_name

    # Activate QEMU agent for this VM
    agent = 1

    os_type = "cloud-init"
    cores = 2
    sockets = 1
    vcpus = 0
    cpu = "host"
    memory = 2048
    scsihw = "virtio-scsi-single"

    # Set up the disks
    disks {
        ide {
            ide3 {
                cloudinit {
                    storage = "local-lvm"
                }
            }
        }
        virtio {
            virtio0 {
                disk {
                    size     = "32G"
                    storage  = "containers-and-vms"
                    discard  = true
                    iothread = true
                    # Can't emulate SSDs in virtio
                }
            }
        }
    }

    network {
      model = "virtio"
      bridge = var.nic_name
    }

    # Boot from the virtio0 disk defined above
    boot = "order=virtio0"
    # Set up the IP address using cloud-init. Keep in mind to use CIDR
    # notation for a static IP; I'm just using DHCP here.
    ipconfig0 = "ip=dhcp,ip6=dhcp"
    skip_ipv6 = true
    sshkeys = var.ssh_key
}
resource "proxmox_vm_qemu" "cloudinit-test2" {
    name = "terraform-test-vm-2"
    desc = "A test for using terraform and cloudinit"

    # Node name has to be the same as the node's name within the cluster;
    # this might not include the FQDN
    target_node = var.proxmox_host

    # The template name to clone this vm from
    clone = var.template_name

    # Activate QEMU agent for this VM
    agent = 1

    os_type = "cloud-init"
    cores = 2
    sockets = 1
    vcpus = 0
    cpu = "host"
    memory = 2048
    scsihw = "virtio-scsi-single"

    # Set up the disks
    disks {
        ide {
            ide3 {
                cloudinit {
                    storage = "local-lvm"
                }
            }
        }
        virtio {
            virtio0 {
                disk {
                    size     = "32G"
                    storage  = "containers-and-vms"
                    discard  = true
                    iothread = true
                    # Can't emulate SSDs in virtio
                }
            }
        }
    }

    network {
      model = "virtio"
      bridge = var.nic_name
    }

    # Boot from the virtio0 disk defined above
    boot = "order=virtio0"
    # Set up the IP address using cloud-init. Keep in mind to use CIDR
    # notation for a static IP; I'm just using DHCP here.
    ipconfig0 = "ip=dhcp,ip6=dhcp"
    skip_ipv6 = true
    sshkeys = var.ssh_key
}
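At this point the working directory holds vars.tf, main.tf, and the untracked terraform.tfvars. I also like to run a format pass before committing so the indentation stays consistent:

terraform fmt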

Now that I've got my main Terraform template, I can initialize Terraform and see if this works. Go-go terraform init!

aurelia@desktop:~/work/home_infra$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding telmate/proxmox versions matching "3.0.1-rc3"...
- Installing telmate/proxmox v3.0.1-rc3...
- Installed telmate/proxmox v3.0.1-rc3 (self-signed, key ID A9EBBE091B35AFCE)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

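With the provider downloaded, terraform validate gives a quick syntax and schema check before planning anything:

terraform validate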

Next we'll run terraform plan to see if we've got our template right:

aurelia@desktop:~/work/home_infra$ terraform plan -out plan.out

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.cloudinit-test1 will be created
  + resource "proxmox_vm_qemu" "cloudinit-test1" {
      + additional_wait        = 5
      + agent                  = 1
      + automatic_reboot       = true
      + balloon                = 0
      + bios                   = "seabios"
      + boot                   = "order=virtio0"
      + bootdisk               = (known after apply)
      + clone                  = "ubuntu-2404-template2"
      + clone_wait             = 10
      + cores                  = 2
      + cpu                    = "host"
      + default_ipv4_address   = (known after apply)
      + default_ipv6_address   = (known after apply)
      + define_connection_info = true
      + desc                   = "A test for using terraform and cloudinit"
      + force_create           = false
      + full_clone             = true
      + hotplug                = "network,disk,usb"
      + id                     = (known after apply)
      + ipconfig0              = "ip=dhcp,ip6=dhcp"
      + kvm                    = true
      + linked_vmid            = (known after apply)
      + memory                 = 2048
      + name                   = "terraform-test-vm"
      + nameserver             = (known after apply)
      + onboot                 = false
      + os_type                = "cloud-init"
      + protection             = false
      + reboot_required        = (known after apply)
      + scsihw                 = "virtio-scsi-single"
      + searchdomain           = (known after apply)
      + skip_ipv4              = false
      + skip_ipv6              = true
      + sockets                = 1
      + ssh_host               = (known after apply)
      + ssh_port               = (known after apply)
      + sshkeys                = "ssh-ed25519 <My SSH Public Key> aurelia@desktop"
      + tablet                 = true
      + tags                   = (known after apply)
      + target_node            = "thebeast"
      + unused_disk            = (known after apply)
      + vcpus                  = 0
      + vm_state               = "running"
      + vmid                   = (known after apply)

      + disks {
          + ide {
              + ide3 {
                  + cloudinit {
                      + storage = "local-lvm"
                    }
                }
            }
          + virtio {
              + virtio0 {
                  + disk {
                      + backup               = true
                      + discard              = true
                      + format               = "raw"
                      + id                   = (known after apply)
                      + iops_r_burst         = 0
                      + iops_r_burst_length  = 0
                      + iops_r_concurrent    = 0
                      + iops_wr_burst        = 0
                      + iops_wr_burst_length = 0
                      + iops_wr_concurrent   = 0
                      + iothread             = true
                      + linked_disk_id       = (known after apply)
                      + mbps_r_burst         = 0
                      + mbps_r_concurrent    = 0
                      + mbps_wr_burst        = 0
                      + mbps_wr_concurrent   = 0
                      + size                 = "32G"
                      + storage              = "containers-and-vms"
                    }
                }
            }
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }

      + smbios (known after apply)
    }

  # proxmox_vm_qemu.cloudinit-test2 will be created
  + resource "proxmox_vm_qemu" "cloudinit-test2" {
      + additional_wait        = 5
      + agent                  = 1
      + automatic_reboot       = true
      + balloon                = 0
      + bios                   = "seabios"
      + boot                   = "order=virtio0"
      + bootdisk               = (known after apply)
      + clone                  = "ubuntu-2404-template2"
      + clone_wait             = 10
      + cores                  = 2
      + cpu                    = "host"
      + default_ipv4_address   = (known after apply)
      + default_ipv6_address   = (known after apply)
      + define_connection_info = true
      + desc                   = "A test for using terraform and cloudinit"
      + force_create           = false
      + full_clone             = true
      + hotplug                = "network,disk,usb"
      + id                     = (known after apply)
      + ipconfig0              = "ip=dhcp,ip6=dhcp"
      + kvm                    = true
      + linked_vmid            = (known after apply)
      + memory                 = 2048
      + name                   = "terraform-test-vm-2"
      + nameserver             = (known after apply)
      + onboot                 = false
      + os_type                = "cloud-init"
      + protection             = false
      + reboot_required        = (known after apply)
      + scsihw                 = "virtio-scsi-single"
      + searchdomain           = (known after apply)
      + skip_ipv4              = false
      + skip_ipv6              = true
      + sockets                = 1
      + ssh_host               = (known after apply)
      + ssh_port               = (known after apply)
      + sshkeys                = "ssh-ed25519 <My SSH Public Key> aurelia@desktop"
      + tablet                 = true
      + tags                   = (known after apply)
      + target_node            = "thebeast"
      + unused_disk            = (known after apply)
      + vcpus                  = 0
      + vm_state               = "running"
      + vmid                   = (known after apply)

      + disks {
          + ide {
              + ide3 {
                  + cloudinit {
                      + storage = "local-lvm"
                    }
                }
            }
          + virtio {
              + virtio0 {
                  + disk {
                      + backup               = true
                      + discard              = true
                      + format               = "raw"
                      + id                   = (known after apply)
                      + iops_r_burst         = 0
                      + iops_r_burst_length  = 0
                      + iops_r_concurrent    = 0
                      + iops_wr_burst        = 0
                      + iops_wr_burst_length = 0
                      + iops_wr_concurrent   = 0
                      + iothread             = true
                      + linked_disk_id       = (known after apply)
                      + mbps_r_burst         = 0
                      + mbps_r_concurrent    = 0
                      + mbps_wr_burst        = 0
                      + mbps_wr_concurrent   = 0
                      + size                 = "32G"
                      + storage              = "containers-and-vms"
                    }
                }
            }
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }

      + smbios (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

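A side benefit of passing -out: the saved plan can be re-inspected at any time, and apply will execute exactly the plan that was reviewed:

terraform show plan.out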

The plan looks good. Let's apply it!

aurelia@desktop:~/work/home_infra$ terraform apply plan.out
proxmox_vm_qemu.cloudinit-test1: Creating...
proxmox_vm_qemu.cloudinit-test2: Creating...
proxmox_vm_qemu.cloudinit-test1: Still creating... [10s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [10s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [20s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [20s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [30s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [30s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [40s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [40s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [50s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [50s elapsed]
proxmox_vm_qemu.cloudinit-test2: Creation complete after 51s [id=thebeast/qemu/101]
proxmox_vm_qemu.cloudinit-test1: Creation complete after 55s [id=thebeast/qemu/100]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

And sure enough, if I go into Proxmox, I can see both VMs:

(Screenshots: Test VM #1 and Test VM #2 running in the Proxmox web UI.)
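And when I'm done playing with these test VMs, tearing them back down is one command (Terraform asks for confirmation first):

terraform destroy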

Some notes on the process: I had to set up the template VM (ubuntu-2404-template2) before I could clone any VMs from it, and as far as Proxmox is concerned, it needs to be a full VM, not a (Proxmox) template. Also, make sure the VMs you clone from this template via Terraform use the same block driver (i.e. scsi, ide, or virtio) for the boot drive as the template VM does. I initially created a template VM with a SCSI boot drive and tried to create a clone in Terraform using virtio, and the cloned VM would just endlessly reboot with the message "No valid boot device" in the console. This may be my inexperience, however, and as always, YMMV.
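A quick way to check which bus your template's boot disk uses before writing the disks block is to dump its config on the Proxmox host; 9000 below is a stand-in for your template's VM ID:

qm config 9000 | grep -E '^(boot|ide|scsi|virtio)'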

And there you have it! In my next article, I'll show you how to provision your VMs using Cloud-Init.
