Build a Local Kubernetes Cluster in Minutes with Terraform and Multipass

Are you looking for a way to spin up a lightweight, throwaway Kubernetes cluster on your local machine without the overhead of Docker Desktop or Minikube? Or maybe you want to simulate a multi-node environment to test node affinity and failover?

In this article, we'll show you how to build a multi-node K3s cluster completely from code using the Multipass Terraform Provider.

Why This Stack?

  • Multipass: Canonical's lightweight VM manager for Linux, Windows, and macOS. It spins up Ubuntu instances in seconds.
  • Terraform: The industry standard for Infrastructure as Code (IaC). It manages the lifecycle, dependencies, and configuration of your VMs.
  • K3s: A highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

By combining these, you get a reproducible, codified local lab environment that you can spin up and tear down with a single command.

The Project: A 3-Node K3s Cluster

We are going to build:

  1. 1 Master Node: Runs the K3s control plane.
  2. 2 Worker Nodes: Join the cluster automatically.
  3. Automatic Wiring: Terraform will handle passing the master's IP and a shared secret to the workers.

Prerequisites

  1. Install Multipass
  2. Install Terraform
  3. Create a new directory for your project and move into it:

    mkdir k3s-lab && cd k3s-lab
    
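You can quickly verify that both tools are installed and on your PATH before moving on:

    multipass version
    terraform version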

The Terraform Configuration

Create a file named main.tf and paste the following code. We're using the todoroff/multipass provider to manage our VMs.

terraform {
  required_providers {
    multipass = {
      source  = "todoroff/multipass"
    }
  }
}

provider "multipass" {
  # Increase timeout for image downloads and initialization
  command_timeout = 600 
}

# 1. Define a shared secret for the cluster
locals {
  k3s_token = "my-super-secret-shared-token-12345"

  # Cloud-init script for the master node
  # Installs K3s server, creates a wrapper script for kubectl, and sets the token
  master_cloud_init = <<-EOT
    #cloud-config
    package_update: true
    package_upgrade: true
    write_files:
      - path: /usr/local/bin/k3s-kubectl-wrapper
        permissions: '0755'
        content: |
          #!/bin/sh
          sudo k3s kubectl "$@"
    runcmd:
      - curl -sfL https://get.k3s.io | K3S_TOKEN=${local.k3s_token} sh -s - server --cluster-init
  EOT
}

# 2. Create the Master Node
resource "multipass_instance" "k3s_master" {
  name   = "k3s-master"
  cpus   = 2
  memory = "2G"
  disk   = "10G"
  image  = "jammy" # Ubuntu 22.04 LTS

  cloud_init = local.master_cloud_init
}

# 3. Create Worker Nodes
# These depend on the master because they need its IP address
resource "multipass_instance" "k3s_worker" {
  count  = 2
  name   = "k3s-worker-${count.index + 1}"
  cpus   = 1
  memory = "1G"
  disk   = "5G"
  image  = "jammy"

  # Cloud-init script for workers
  # Uses the master's IP (from Terraform state) to join the cluster
  cloud_init = <<-EOT
    #cloud-config
    package_update: true
    runcmd:
      - curl -sfL https://get.k3s.io | K3S_URL=https://${multipass_instance.k3s_master.ipv4[0]}:6443 K3S_TOKEN=${local.k3s_token} sh -
  EOT

  depends_on = [multipass_instance.k3s_master]
}

# 4. Create a convenient alias to run kubectl from your host
# We use a wrapper script to handle argument passing correctly via alias
resource "multipass_alias" "kubectl" {
  name     = "k3s-kubectl"
  instance = multipass_instance.k3s_master.name
  command  = "/usr/local/bin/k3s-kubectl-wrapper"
}

# 5. Output the node IPs
output "cluster_nodes" {
  value = {
    master  = multipass_instance.k3s_master.ipv4
    workers = multipass_instance.k3s_worker[*].ipv4
  }
}

How It Works

  1. cloud_init Magic: We use Terraform's local variables to construct a cloud-init script. This script runs on first boot.
    • On the master, it downloads and installs K3s in server mode. It also creates a helper script /usr/local/bin/k3s-kubectl-wrapper to make running kubectl commands easier via alias.
    • On the workers, it installs K3s and immediately joins the cluster using the K3S_URL environment variable.
  2. Dynamic Configuration: Notice ${multipass_instance.k3s_master.ipv4[0]} in the worker's cloud-init. Terraform waits for the master to be created, grabs its first IPv4 address, injects it into the worker's configuration, and then creates the workers. No manual copy-pasting required!
  3. Aliases: The multipass_alias resource creates a k3s-kubectl command on your host machine that invokes the wrapper script on the master node.
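If you're curious what that alias boils down to, it is essentially shorthand for executing the wrapper script inside the master VM. The commands below are just an illustration of that equivalence; you don't need to run them:

    # List the aliases Multipass has registered on your host
    multipass aliases

    # Roughly what `multipass k3s-kubectl get nodes` does under the hood
    multipass exec k3s-master -- /usr/local/bin/k3s-kubectl-wrapper get nodes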

Deploying the Lab

  1. Initialize Terraform:

    terraform init
    
  2. Apply the Configuration:

    terraform apply
    

    Type yes when prompted.

    Terraform will spin up the VMs. It might take a minute or two for the VMs to boot and for K3s to install.
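Once the apply completes, you can re-read the cluster_nodes output we defined at the end of main.tf to see the IPs Terraform recorded for each node:

    terraform output cluster_nodes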

Using Your Cluster

Once Terraform finishes, wait about 60 seconds for the cloud-init scripts to complete installation inside the VMs.
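If you'd rather not guess, cloud-init can tell you when it's done. The following commands block until the first-boot scripts inside the VMs have finished:

    multipass exec k3s-master -- cloud-init status --wait
    multipass exec k3s-worker-1 -- cloud-init status --wait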

  1. Check the Nodes:
    You can run kubectl commands directly from your terminal using the alias we created:

    multipass k3s-kubectl get nodes
    

    You should see something like:

    NAME           STATUS   ROLES                       AGE   VERSION
    k3s-master     Ready    control-plane,etcd,master   2m    v1.28.2+k3s1
    k3s-worker-1   Ready    <none>                      1m    v1.28.2+k3s1
    k3s-worker-2   Ready    <none>                      1m    v1.28.2+k3s1
    
  2. Deploy a Test Workload:
    Deploy a simple Nginx server to verify everything is working; we'll reach it from the host right after:

    multipass k3s-kubectl create deployment nginx --image=nginx
    multipass k3s-kubectl expose deployment nginx --port=80 --type=NodePort
    
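To hit the Nginx service from your host, look up the NodePort Kubernetes assigned (a port in the 30000-32767 range; yours will differ) and curl the master's IP on that port. The placeholders below stand in for the values from the service listing and the cluster_nodes output:

    # Find the NodePort assigned to the nginx service
    multipass k3s-kubectl get svc nginx

    # Replace the placeholders with your master IP and the assigned port
    curl http://<master-ip>:<node-port>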

Cleanup

When you're done with your lab, you don't need to manually delete VMs or clean up files. Just run:

terraform destroy

This will stop and delete all the Multipass instances and remove the aliases, leaving your machine clean.
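If you want to double-check, listing your Multipass instances should come back empty (or show only VMs unrelated to this lab):

    multipass list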

Conclusion

The Multipass Terraform Provider turns your local machine into a flexible cloud environment. Whether you're testing Kubernetes, setting up a complex microservices mesh, or just need a clean Linux sandbox, you can define it all in code.

Check out the provider documentation for more resources like multipass_mount and multipass_snapshot to take your local labs to the next level!
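As a taste of what those resources wrap, the underlying Multipass CLI can mount a host directory into an instance; the path and mount point below are purely illustrative:

    multipass mount "$PWD" k3s-master:/home/ubuntu/lab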

Or browse the examples in the provider repository: https://github.com/todoroff/terraform-provider-multipass
