<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aurelia Peters</title>
    <description>The latest articles on DEV Community by Aurelia Peters (@popefelix).</description>
    <link>https://dev.to/popefelix</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F148623%2F619aae3b-5ace-4d4b-83b4-88c1b970f15f.png</url>
      <title>DEV Community: Aurelia Peters</title>
      <link>https://dev.to/popefelix</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/popefelix"/>
    <language>en</language>
    <item>
      <title>Setting Up The Home Lab: Setting up Kubernetes Using Ansible</title>
      <dc:creator>Aurelia Peters</dc:creator>
      <pubDate>Thu, 08 Aug 2024 22:04:12 +0000</pubDate>
      <link>https://dev.to/popefelix/setting-up-the-home-lab-setting-up-kubernetes-using-ansible-3ji1</link>
      <guid>https://dev.to/popefelix/setting-up-the-home-lab-setting-up-kubernetes-using-ansible-3ji1</guid>
      <description>&lt;p&gt;In my &lt;a href="https://dev.to/popefelix/setting-up-the-home-lab-terraform-and-cloud-init-fge"&gt;previous article&lt;/a&gt; I went over how to set up VMs in &lt;a href="https://proxmox.com/en/proxmox-virtual-environment/overview" rel="noopener noreferrer"&gt;Proxmox VE&lt;/a&gt; using &lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; to deploy the VMs and &lt;a href="https://cloud-init.io/" rel="noopener noreferrer"&gt;Cloud-Init&lt;/a&gt; to provision them. In this article I'll discuss using &lt;a href="https://www.ansible.com/" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; playbooks to do further provisioning of VMs.&lt;/p&gt;

&lt;p&gt;Since I want to play with &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; anyway, I'll set up a k8s cluster. It will have 3 master and 3 worker nodes. Each VM will have 4 cores, 8 GB of RAM, a 32 GB root virtual disk, and a 250 GB data virtual disk for &lt;a href="https://longhorn.io/" rel="noopener noreferrer"&gt;Longhorn&lt;/a&gt; volumes. I'll create an &lt;code&gt;ansible&lt;/code&gt; user via cloud-init and allow access via SSH.&lt;/p&gt;

&lt;p&gt;For the purposes of this article, I'm going to run Ansible separately, rather than from within Terraform. As soon as I figure out how to run the two together, I'll post a new article about that. XD&lt;/p&gt;

&lt;p&gt;Anyway, let's get started. To begin with I'll add some configuration variables to my &lt;code&gt;vars.tf&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "k8s_pve_node" {
  description = "Proxmox node to target"
  type = string
  sensitive = false
  default = "thebeast"
}

variable "k8s_master_count" {
  description = "Number of k8s masters to create"
  default = 3 # I need an odd number of masters for etcd
}

variable "k8s_worker_count" {
  description = "Number of k8s workers to create"
  default = 3
}

variable "k8s_master_cores" {
  description = "Number of CPU cores for each k8s master"
  default = 4
}

variable "k8s_master_mem" {
  description = "Memory (in MB) to assign to each k8s master"
  default = 8192
}

variable "k8s_worker_cores" {
  description = "Number of CPU cores for each k8s worker"
  default = 4
}

variable "k8s_worker_mem" {
  description = "Memory (in MB) to assign to each k8s worker"
  default = 8192
}

variable "k8s_user" {
  description = "Used by Ansible"
  default = "ansible"
}

variable "k8s_nameserver" {
  default = "192.168.1.9"
}

variable "k8s_nameserver_domain" {
  default = "scurrilous.foo"
}

variable "k8s_gateway" {
  default = "192.168.1.1"
}

variable "k8s_master_ip_addresses" {
  type = list(string)
  default = ["192.168.1.80/24", "192.168.1.81/24", "192.168.1.82/24"]
}

variable "k8s_worker_ip_addresses" {
  type = list(string)
  default = ["192.168.1.90/24", "192.168.1.91/24", "192.168.1.92/24"]
}

variable "k8s_node_root_disk_size" {
  default = "32G"
}

variable "k8s_node_data_disk_size" {
  default = "250G"
}

variable "k8s_node_disk_storage" {
  default = "containers-and-vms"
}

variable "k8s_template_name" {
  default = "ubuntu-2404-base"
}

variable "k8s_ssh_key_file" {
  # Referenced by the worker resource below; adjust to the name of the
  # public key file you keep under files/
  default = "id_ed25519.pub"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next I'll set up my k8s master and worker nodes in Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "proxmox_vm_qemu" "k8s_master" {
    count = var.k8s_master_count
    name = "k8s-master-${count.index}"
    desc = "K8S Master Node"
    ipconfig0 = "gw=${var.k8s_gateway},ip=${var.k8s_master_ip_addresses[count.index]}"
    target_node = var.k8s_pve_node
    onboot = true
    clone = var.k8s_template_name
    agent = 1
    ciuser = var.k8s_user
    memory = var.k8s_master_mem
    cores = var.k8s_master_cores
    nameserver = var.k8s_nameserver
    os_type = "cloud-init"
    cpu = "host"
    scsihw = "virtio-scsi-single"
    tags="k8s,ubuntu,k8s_master"

    # Setup the disk
    disks {
        ide {
            ide2 {
                cloudinit {
                    storage = "containers-and-vms"
                }
            }
        }
        scsi {
            scsi0 {
                disk {
                  size     = var.k8s_node_root_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
            scsi1 {
                disk {
                  size     = var.k8s_node_data_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
        }
    }

    network {
        model = "virtio"
        bridge = var.nic_name
        tag = -1
    }

    # Setup the ip address using cloud-init.
    boot = "order=scsi0"
    skip_ipv6 = true

    lifecycle {
      ignore_changes = [
        disks,
        target_node,
        sshkeys,
        network
      ]
    }
}

resource "proxmox_vm_qemu" "k8s_workers" {
    count = var.k8s_worker_count
    name = "k8s-worker-${count.index}"
    desc = "K8S Worker Node"
    ipconfig0 = "gw=${var.k8s_gateway},ip=${var.k8s_worker_ip_addresses[count.index]}"
    target_node = var.k8s_pve_node
    onboot = true
    clone = var.k8s_template_name
    agent = 1
    ciuser = var.k8s_user
    memory = var.k8s_worker_mem
    cores = var.k8s_worker_cores
    nameserver = var.k8s_nameserver
    os_type = "cloud-init"
    cpu = "host"
    scsihw = "virtio-scsi-single"
    sshkeys = file("${path.module}/files/${var.k8s_ssh_key_file}")
    tags="k8s,ubuntu,k8s_worker"

    # Setup the disk
    disks {
        ide {
            ide2 {
                cloudinit {
                    storage = "containers-and-vms"
                }
            }
        }
        scsi {
            scsi0 {
                disk {
                  size     = var.k8s_node_root_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
            scsi1 {
                disk {
                  size     = var.k8s_node_data_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
        }
    }

    network {
        model = "virtio"
        bridge = var.nic_name
        tag = -1
    }

    # Setup the ip address using cloud-init.
    boot = "order=scsi0"
    skip_ipv6 = true

    lifecycle {
      ignore_changes = [
        disks,
        target_node,
        sshkeys,
        network
      ]
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A quick &lt;code&gt;terraform apply&lt;/code&gt; and I have all of my VMs set up. Next, since I've already installed Ansible on my local machine, I'll set up Kubernetes using &lt;a href="https://kubespray.io/#/" rel="noopener noreferrer"&gt;Kubespray&lt;/a&gt; following Pradeep Kumar's excellent &lt;a href="https://www.linuxtechi.com/install-kubernetes-using-kubespray/" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt;.&lt;/p&gt;
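&lt;p&gt;The Kubespray prep boils down to cloning the repo and installing its Python dependencies (the requirements file name is taken from the Kubespray README; check your checkout if it has moved):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
# Kubespray pins its Ansible version; a venv keeps it from clobbering the system install
python3 -m venv venv &amp;amp;&amp;amp; source venv/bin/activate
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;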

&lt;p&gt;First I set up an inventory file, like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;all:
  hosts:
    k8s-master-0:
    k8s-master-1:
    k8s-master-2:
    k8s-worker-0:
    k8s-worker-1:
    k8s-worker-2:
  vars:
    ansible_user: ansible
    ansible_python_interpreter: /usr/bin/python3
  children:
    kube_control_plane:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
    kube_node:
      hosts:
        k8s-worker-0:
        k8s-worker-1:
        k8s-worker-2:
    etcd:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note here that I've also added DNS entries to my local nameserver for these hosts. I could also have used IP addresses instead of hostnames. In a later revision of this configuration, I'll try setting up resource discovery via the &lt;a href="https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_inventory.html" rel="noopener noreferrer"&gt;Proxmox inventory source for Ansible&lt;/a&gt;, but for now I'll hardcode things.&lt;/p&gt;
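&lt;p&gt;As a rough sketch of where that's headed (untested here, and the user and token names are placeholders), the Proxmox inventory plugin reads a YAML config whose filename must end in &lt;code&gt;.proxmox.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# inventory/k8s-cluster/thebeast.proxmox.yml
plugin: community.general.proxmox
url: https://thebeast.scurrilous.foo:8006
user: ansible@pve            # placeholder; use an API token with audit rights
token_id: ansible-token
token_secret: "{{ lookup('env', 'PROXMOX_TOKEN_SECRET') }}"
validate_certs: false
want_facts: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;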

&lt;p&gt;Note also that I've set the &lt;code&gt;ansible_user&lt;/code&gt; variable in this inventory. That's important to make sure that Ansible uses the service account that I already set up in Terraform. I've also set the location of the Ansible Python interpreter (via the &lt;code&gt;ansible_python_interpreter&lt;/code&gt; variable) so that I don't get bombarded with warnings from Ansible about using the discovered Python interpreter.&lt;/p&gt;
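&lt;p&gt;If you'd rather not repeat those two variables in every inventory, the same settings can live in an &lt;code&gt;ansible.cfg&lt;/code&gt; next to your playbooks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ansible.cfg
[defaults]
remote_user = ansible
interpreter_python = /usr/bin/python3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;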

&lt;p&gt;So now that I've got my hosts deployed, it's time to set up Kubernetes. I've cloned the &lt;a href="https://github.com/kubernetes-sigs/kubespray/" rel="noopener noreferrer"&gt;Kubespray GitHub repo&lt;/a&gt; and now I'll run the &lt;code&gt;cluster.yml&lt;/code&gt; playbook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook -i ../ansible/inventory/k8s-cluster/hosts.yml --become --become-user=root cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After some time (I think it took a good half hour, all told), Kubernetes is installed and ready for me to deploy my applications.&lt;/p&gt;

&lt;p&gt;So now I have a working k8s installation on my home lab, but there were several steps involved in getting it set up. It sure would be nice if I could deploy and provision everything in one fell swoop. I'll discuss that next time. I'd also like to not have to SSH into one of my master nodes in order to run &lt;code&gt;kubectl&lt;/code&gt;. &lt;/p&gt;
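&lt;p&gt;(For the impatient: one way to run &lt;code&gt;kubectl&lt;/code&gt; locally, assuming Kubespray's default kubeadm paths, is to pull the admin kubeconfig down from a master. The hostname and IP below match my inventory; adjust to taste.)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p ~/.kube
ssh ansible@k8s-master-0 sudo cat /etc/kubernetes/admin.conf &gt; ~/.kube/config
# admin.conf typically points at a local apiserver address; swap in the master's IP
sed -i 's/127.0.0.1/192.168.1.80/' ~/.kube/config
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;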

</description>
      <category>ansible</category>
      <category>terraform</category>
      <category>proxmox</category>
      <category>devops</category>
    </item>
    <item>
      <title>Setting Up The Home Lab: Terraform and Cloud-Init</title>
      <dc:creator>Aurelia Peters</dc:creator>
      <pubDate>Wed, 31 Jul 2024 23:10:19 +0000</pubDate>
      <link>https://dev.to/popefelix/setting-up-the-home-lab-terraform-and-cloud-init-fge</link>
      <guid>https://dev.to/popefelix/setting-up-the-home-lab-terraform-and-cloud-init-fge</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/popefelix/setting-up-the-home-lab-terraform-44b3"&gt;my last article&lt;/a&gt; I talked about getting &lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; set up on &lt;a href="https://proxmox.com/en/proxmox-virtual-environment/overview" rel="noopener noreferrer"&gt;Proxmox VE&lt;/a&gt;. In this article I want to talk about how I got &lt;a href="https://cloud-init.io/" rel="noopener noreferrer"&gt;Cloud-Init&lt;/a&gt; set up to use with my Terraform templates.&lt;/p&gt;

&lt;p&gt;To begin with, I needed a cloud-init base VM. While I could use the &lt;a href="https://cloud-images.ubuntu.com/" rel="noopener noreferrer"&gt;cloud image&lt;/a&gt; that Ubuntu provides, I found a &lt;a href="https://gtgb.io/2022/07/23/proxmox-vm-templating/" rel="noopener noreferrer"&gt;nifty article&lt;/a&gt; that shows you how to roll your own base image.&lt;/p&gt;

&lt;p&gt;NOTE: The &lt;a href="https://pve.proxmox.com/wiki/Cloud-Init_Support" rel="noopener noreferrer"&gt;Proxmox VE cloud-init documentation&lt;/a&gt; suggests adding a serial console next. I have found that not to be necessary with the Ubuntu cloud image, so I'm not going to do it.&lt;/p&gt;

&lt;p&gt;Now that we've got the base template set up (turns out I was mistaken in my last post when I said it needed to be a VM and not a template) let's set up an actual VM. It'll have a single virtual Ethernet interface that gets its IP address via DHCP, 32 GB of virtual disk, 2 GB of RAM, and 2 processor cores.&lt;/p&gt;

&lt;p&gt;Note here that I've broken my Terraform config into several files to make it more manageable. As long as all of the Terraform files (i.e. the ones ending in &lt;code&gt;.tf&lt;/code&gt; or &lt;code&gt;.tfvars&lt;/code&gt;) are in the same directory, Terraform will process them in the same way as if they were one big file.&lt;br&gt;
&lt;/p&gt;
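&lt;p&gt;For reference, the layout at this point looks roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;home_infra/
├── provider.tf        # provider requirements and configuration
├── cloud-init.tf      # cloud-init templating and upload
├── main.tf            # the VMs themselves
├── vars.tf            # non-secret variables
├── secrets.tfvars     # token_id / token_secret; never committed
└── files/
    └── test1.cloud_config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;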

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# provider.tf - This is where I define my providers

terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      #latest version as of 16 July 2024
      version = "3.0.1-rc3"
    }
  }
}

provider "proxmox" {
  # Builds the API URL from the proxmox_host variable in vars.tf
  pm_api_url = "https://${var.proxmox_host}:8006/api2/json"
  # Provided in secrets.tfvars, a file containing secret Terraform variables
  pm_api_token_id = var.token_id 
  # Also provided in secrets.tfvars
  pm_api_token_secret = var.token_secret
  # Defined in vars.tf
  pm_tls_insecure = var.pm_tls_insecure
  pm_log_enable = true
  # this is useful for logging what Terraform is doing
  pm_log_file   = "terraform-plugin-proxmox.log"
  pm_log_levels = {
    _default    = "debug"
    _capturelog = ""
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# cloud-init.tf - This is where I store cloud-init configuration

# Source the Cloud Init Config file. NB: This file should be located 
# in the "files" directory under the directory you have your Terraform
# files in.
data "template_file" "cloud_init_test1" { 
  template  = "${file("${path.module}/files/test1.cloud_config")}"

  vars = {
    ssh_key = file("~/.ssh/id_ed25519.pub")
    hostname = var.vm_name
    domain = "scurrilous.foo"
  }
}

# Create a local copy of the file, to transfer to Proxmox.
resource "local_file" "cloud_init_test1" {
  content   = data.template_file.cloud_init_test1.rendered
  filename  = "${path.module}/files/user_data_cloud_init_test1.cfg"
}

# Transfer the file to the Proxmox Host
resource "null_resource" "cloud_init_test1" {
  connection {
    type    = "ssh"
    user    = "root"
    private_key = file("~/.ssh/id_ed25519")
    host    = var.proxmox_host
  }

  provisioner "file" {
    source       = local_file.cloud_init_test1.filename
    destination  = "/var/lib/vz/snippets/cloud_init_test1.yml"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf - This is where I define the VMs I want to deploy with Terraform

resource "proxmox_vm_qemu" "cloudinit-test" {
    name = var.vm_name
    desc = "Testing Terraform and cloud-init"
    depends_on = [ null_resource.cloud_init_test1 ]
    # Node name has to be the same name as within the cluster
    # this might not include the FQDN
    target_node = var.proxmox_host

    # The template name to clone this vm from
    clone = var.template_name
    # Activate QEMU agent for this VM
    agent = 1

    os_type = "cloud-init"
    cores = 2
    sockets = 1
    vcpus = 0
    cpu = "host"
    memory = 2048
    scsihw = "virtio-scsi-single"

    # Setup the disk
    disks {
        ide {
            ide2 {
                cloudinit {
                    storage = "containers-and-vms"
                }
            }
        }
        scsi {
            scsi0 {
                disk {
                  size     = "32G"
                  storage  = "containers-and-vms"
                  discard  = true
                  iothread = true
                }
            }
        }
    }

    network {
        model = "virtio"
        bridge = var.nic_name
        tag = -1
    }

    # Setup the ip address using cloud-init.
    boot = "order=scsi0"
    # Keep in mind to use the CIDR notation for the ip.
    ipconfig0 = "ip=192.168.1.80/24,gw=192.168.1.1,ip6=dhcp"
    skip_ipv6 = true

    lifecycle {
      ignore_changes = [
        ciuser,
        sshkeys,
        network
      ]
    }
    cicustom = "user=local:snippets/cloud_init_test1.yml"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In addition to the Terraform files, we also need the cloud-config file (&lt;code&gt;cloud_init_test1.yml&lt;/code&gt;) that we're referencing in &lt;code&gt;main.tf&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt; If you specify a value for &lt;code&gt;cicustom&lt;/code&gt; as I did here, the &lt;code&gt;ciuser&lt;/code&gt; and &lt;code&gt;sshkeys&lt;/code&gt; fields in the template definition (e.g. &lt;code&gt;main.tf&lt;/code&gt;) are &lt;em&gt;ignored&lt;/em&gt; in favor of whatever is in the cloud-config file, &lt;em&gt;even when nothing is there&lt;/em&gt;. This also trumps whatever is in the base template. You &lt;em&gt;must&lt;/em&gt; specify your SSH keys in your cloud-config file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#cloud-config

ssh_authorized_keys:
  - &amp;lt;ssh public key 1&amp;gt;
  - &amp;lt;ssh public key 2&amp;gt;

runcmd:
  - apt-get update
  - apt-get install -y nginx

write_files:
  - content: |
      #!/bin/bash
      echo "ZOMBIES RULE BELGIUM?"
    path: /usr/local/bin/my-script
    permissions: '0755'

scripts-user:
  - /usr/local/bin/my-script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So you can see here that you can run arbitrary commands at first boot with &lt;code&gt;runcmd&lt;/code&gt;, and you can also run a custom Bash script with &lt;code&gt;scripts-user&lt;/code&gt; and &lt;code&gt;write_files&lt;/code&gt;. (See &lt;a href="https://saturncloud.io/blog/how-to-properly-use-runcmd-and-scriptsuser-in-cloudinit/" rel="noopener noreferrer"&gt;this writeup from SaturnCloud&lt;/a&gt; for more information).&lt;/p&gt;

&lt;p&gt;You might notice that the Terraform template definition is pretty close in structure to the one I used in my last article. That's intentional - I set up the last one with cloud-init, but didn't do much with it. This one actually provisions the VM with cloud-init. You can also use &lt;a href="https://www.ansible.com/" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; playbooks to provision a VM, and I might talk about that in a future post, but in my next post I'm going to talk about doing something &lt;em&gt;actually&lt;/em&gt; useful in my home infrastructure: setting up &lt;a href="https://plex.tv" rel="noopener noreferrer"&gt;Plex&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once again, we execute &lt;code&gt;terraform plan&lt;/code&gt;. The plan looks good, so we apply it with &lt;code&gt;terraform apply&lt;/code&gt;, wait a couple of minutes, and boom! We've got ourselves a VM with both cloud-init &lt;em&gt;and&lt;/em&gt; &lt;a href="https://pve.proxmox.com/wiki/Qemu-guest-agent" rel="noopener noreferrer"&gt;QEMU Guest Agent&lt;/a&gt;. Pretty cool! Next time I'll show you how to use &lt;a href="https://www.ansible.com/" rel="noopener noreferrer"&gt;Ansible&lt;/a&gt; playbooks to provision your VMs.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>proxmox</category>
      <category>cloudinit</category>
      <category>devops</category>
    </item>
    <item>
      <title>Setting up the home lab: Terraform</title>
      <dc:creator>Aurelia Peters</dc:creator>
      <pubDate>Wed, 17 Jul 2024 16:11:28 +0000</pubDate>
      <link>https://dev.to/popefelix/setting-up-the-home-lab-terraform-44b3</link>
      <guid>https://dev.to/popefelix/setting-up-the-home-lab-terraform-44b3</guid>
      <description>&lt;p&gt;When I worked for CBS, I discovered &lt;a href="https://developer.hashicorp.com/terraform" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, which is tool that allows you do define infrastructure as code ("IaC"). I just recently purchased a home lab server (the details of how I have that set up will be discussed in a future article) and installed &lt;a href="https://proxmox.com/en/proxmox-virtual-environment/overview" rel="noopener noreferrer"&gt;Proxmox VE&lt;/a&gt; on it. I'd like to improve my DevOps skills, so I thought I'd play with Terraform for my home lab. Eventually I'd like to get my entire home infrastructure represented as Terraform templates.&lt;/p&gt;

&lt;p&gt;To get Terraform set up on my home lab server (which I call "The Beast" XD) I'll be following the excellent &lt;a href="https://tcude.net/using-terraform-with-proxmox/" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; given by &lt;a href="https://tcude.net/" rel="noopener noreferrer"&gt;Tanner Cude&lt;/a&gt;. I'll be reproducing a lot of it here just in case that site goes down. I'll also update the templates Tanner provides for the current version (&lt;code&gt;v3.0.1-rc3&lt;/code&gt;) of the &lt;a href="https://registry.terraform.io/providers/Telmate/proxmox/latest/docs" rel="noopener noreferrer"&gt;Telmate proxmox provider&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that I'll be doing the local machine portion on my desktop machine, which runs Ubuntu Linux. Tanner's instructions are for a MacBook, but things are pretty similar. &lt;/p&gt;

&lt;p&gt;To begin with, I've installed Terraform on my desktop machine following the &lt;a href="https://developer.hashicorp.com/terraform/downloads" rel="noopener noreferrer"&gt;installation instructions&lt;/a&gt; provided by Hashicorp.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update &amp;amp;&amp;amp; sudo apt install terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I then SSH'd into my home lab server and set up a role for the Terraform worker to assume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pveum role add terraform-role -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt SDN.Use"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also do this via the web UI, but it's easier (IMO) to do via the command line.&lt;/p&gt;

&lt;p&gt;Next, and again on my home lab server, I'll create the &lt;code&gt;terraform&lt;/code&gt; user, associate it with the role I created above, and get an authentication token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pveum user add terraform@pve 
pveum aclmod / -user terraform@pve -role terraform-role
pveum user token add terraform@pve terraform-token --privsep=0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I get a response similar to the below. If you're following along, make sure to save the access token (the UUID in the last line of the response); you won't be able to retrieve it later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌──────────────┬──────────────────────────────────────┐
│ key          │ value                                │
╞══════════════╪══════════════════════════════════════╡
│ full-tokenid │ terraform@pve!terraform-token        │
├──────────────┼──────────────────────────────────────┤
│ info         │ {"privsep":"0"}                      │
├──────────────┼──────────────────────────────────────┤
│ value        │ xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx │
└──────────────┴──────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that the basics are set up on Proxmox, it's time to go back to my local machine.&lt;/p&gt;

&lt;p&gt;I created a &lt;a href="https://github.com/PopeFelix/home_infra" rel="noopener noreferrer"&gt;Git repo&lt;/a&gt; for my Terraform code called &lt;code&gt;home_infra&lt;/code&gt; and added a &lt;code&gt;.gitignore&lt;/code&gt; file so that secrets like the token ID and access token are only stored locally. Later on I might store them as &lt;a href="https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions" rel="noopener noreferrer"&gt;repository secrets&lt;/a&gt;, but I'll have to see how that works with the Terraform command line. For now they're fine in a &lt;code&gt;.tfvars&lt;/code&gt; file, and if I want to work on my infrastructure from another machine, I can always SSH into my desktop from there or copy the file. &lt;/p&gt;
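&lt;p&gt;The relevant part of that &lt;code&gt;.gitignore&lt;/code&gt; is short (roughly this; tailor it to your own workflow):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Secrets and local state stay out of the repo
*.tfvars
.terraform/
terraform.tfstate
terraform.tfstate.backup
plan.out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;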

&lt;p&gt;Now I'm going to set up the public Terraform vars that I'll use when spinning up a VM. I'll write them to a file called &lt;code&gt;vars.tf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# TF vars for spinning up VMs

# Set your public SSH key here
variable "ssh_key" {
  default = "ssh-ed25519 &amp;lt;My SSH Public Key&amp;gt; aurelia@desktop"
}
#Establish which Proxmox host you'd like to spin a VM up on
variable "proxmox_host" {
    default = "thebeast" 
}
#Specify which template name you'd like to use
variable "template_name" {
    default = "ubuntu-2404-template2"
}
#Establish which nic you would like to utilize
variable "nic_name" {
    default = "vmbr0"
}

# I don't have VLANs set up
# #Establish the VLAN you'd like to use 
# variable "vlan_num" {
#     default = "place_vlan_number_here"
# }
#Provide the url of the host you would like the API to communicate on.
#It is safe to default to setting this as the URL for what you used
#as your `proxmox_host`, although they can be different
variable "api_url" {
    default = "https://thebeast.scurrilous.foo:8006/api2/json"
}
#Blank var for use by terraform.tfvars
variable "token_secret" {
}
#Blank var for use by terraform.tfvars
variable "token_id" {
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, I'll create &lt;code&gt;main.tf&lt;/code&gt; to define my infrastructure, and I'll put two test VMs in just for grins. Note how this references the variables in both &lt;code&gt;vars.tf&lt;/code&gt; and &lt;code&gt;terraform.tfvars&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    proxmox = {
      source = "telmate/proxmox"
      #latest version as of 16 July 2024
      version = "3.0.1-rc3"
    }
  }
}

provider "proxmox" {
  # References our vars.tf file to plug in the api_url 
  pm_api_url = var.api_url
  # References our secrets.tfvars file to plug in our token_id
  pm_api_token_id = var.token_id
  # References our secrets.tfvars to plug in our token_secret 
  pm_api_token_secret = var.token_secret
  # Default to `true` unless you have TLS working within your pve setup 
  pm_tls_insecure = true
}

resource "proxmox_vm_qemu" "cloudinit-test1" {
    name = "terraform-test-vm"
    desc = "A test for using terraform and cloudinit"

    # Node name has to be the same name as within the cluster
    # this might not include the FQDN
    target_node = var.proxmox_host

    # The template name to clone this vm from
    clone = var.template_name

    # Activate QEMU agent for this VM
    agent = 1

    os_type = "cloud-init"
    cores = 2
    sockets = 1
    vcpus = 0
    cpu = "host"
    memory = 2048
    scsihw = "virtio-scsi-single"

    # Setup the disk
    disks {
        ide {
            ide3 {
                cloudinit {
                    storage = "local-lvm"
                }
            }
        }
        virtio {
          virtio0 {
            disk {
              size = "32G"
              storage = "containers-and-vms"
              discard = true
              iothread        = true
              # Can't emulate SSDs in virtio
            }
          }
        }
    }

    network {
      model = "virtio"
      bridge = var.nic_name
    }

    # Setup the ip address using cloud-init.
    boot = "order=virtio0"
    # Keep in mind to use the CIDR notation for the ip.
    ipconfig0 = "ip=dhcp,ip6=dhcp"
    skip_ipv6 = true
    sshkeys = var.ssh_key
}
resource "proxmox_vm_qemu" "cloudinit-test2" {
    name = "terraform-test-vm-2"
    desc = "A test for using terraform and cloudinit"

    # Node name has to be the same name as within the cluster
    # this might not include the FQDN
    target_node = var.proxmox_host

    # The template name to clone this vm from
    clone = var.template_name

    # Activate QEMU agent for this VM
    agent = 1

    os_type = "cloud-init"
    cores = 2
    sockets = 1
    vcpus = 0
    cpu = "host"
    memory = 2048
    scsihw = "virtio-scsi-single"

    # Setup the disk
    disks {
        ide {
            ide3 {
                cloudinit {
                    storage = "local-lvm"
                }
            }
        }
        virtio {
          virtio0 {
            disk {
              size = "32G"
              storage = "containers-and-vms"
              discard = true
              iothread        = true
              # Can't emulate SSDs in virtio
            }
          }
        }
    }

    network {
      model = "virtio"
      bridge = var.nic_name
    }

    # Setup the ip address using cloud-init.
    boot = "order=virtio0"
    # Keep in mind to use the CIDR notation for the ip.
    ipconfig0 = "ip=dhcp,ip6=dhcp"
    skip_ipv6 = true

    sshkeys = var.ssh_key
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that I've got my main Terraform template, I can initialize Terraform and see if this works. Go-go &lt;code&gt;terraform init&lt;/code&gt;!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aurelia@desktop:~/work/home_infra$ terraform init
Initializing the backend...
Initializing provider plugins...
- Finding telmate/proxmox versions matching "3.0.1-rc3"...
- Installing telmate/proxmox v3.0.1-rc3...
- Installed telmate/proxmox v3.0.1-rc3 (self-signed, key ID A9EBBE091B35AFCE)
Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we'll run &lt;code&gt;terraform plan&lt;/code&gt; to see if we've got our template right:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aurelia@desktop:~/work/home_infra$ terraform plan -out plan.out
proxmox_vm_qemu.cloudinit-test2: Refreshing state... [id=thebeast/qemu/100]
proxmox_vm_qemu.cloudinit-test1: Refreshing state... [id=thebeast/qemu/101]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.cloudinit-test1 will be created
  + resource "proxmox_vm_qemu" "cloudinit-test1" {
      + additional_wait        = 5
      + agent                  = 1
      + automatic_reboot       = true
      + balloon                = 0
      + bios                   = "seabios"
      + boot                   = "order=virtio0"
      + bootdisk               = (known after apply)
      + clone                  = "ubuntu-2404-template2"
      + clone_wait             = 10
      + cores                  = 2
      + cpu                    = "host"
      + default_ipv4_address   = (known after apply)
      + default_ipv6_address   = (known after apply)
      + define_connection_info = true
      + desc                   = "A test for using terraform and cloudinit"
      + force_create           = false
      + full_clone             = true
      + hotplug                = "network,disk,usb"
      + id                     = (known after apply)
      + ipconfig0              = "ip=dhcp,ip6=dhcp"
      + kvm                    = true
      + linked_vmid            = (known after apply)
      + memory                 = 2048
      + name                   = "terraform-test-vm"
      + nameserver             = (known after apply)
      + onboot                 = false
      + os_type                = "cloud-init"
      + protection             = false
      + reboot_required        = (known after apply)
      + scsihw                 = "virtio-scsi-single"
      + searchdomain           = (known after apply)
      + skip_ipv4              = false
      + skip_ipv6              = true
      + sockets                = 1
      + ssh_host               = (known after apply)
      + ssh_port               = (known after apply)
      + sshkeys                = "ssh-ed25519 &amp;lt;My SSH Public Key&amp;gt; aurelia@desktop"
      + tablet                 = true
      + tags                   = (known after apply)
      + target_node            = "thebeast"
      + unused_disk            = (known after apply)
      + vcpus                  = 0
      + vm_state               = "running"
      + vmid                   = (known after apply)

      + disks {
          + ide {
              + ide3 {
                  + cloudinit {
                      + storage = "local-lvm"
                    }
                }
            }
          + virtio {
              + virtio0 {
                  + disk {
                      + backup               = true
                      + discard              = true
                      + format               = "raw"
                      + id                   = (known after apply)
                      + iops_r_burst         = 0
                      + iops_r_burst_length  = 0
                      + iops_r_concurrent    = 0
                      + iops_wr_burst        = 0
                      + iops_wr_burst_length = 0
                      + iops_wr_concurrent   = 0
                      + iothread             = true
                      + linked_disk_id       = (known after apply)
                      + mbps_r_burst         = 0
                      + mbps_r_concurrent    = 0
                      + mbps_wr_burst        = 0
                      + mbps_wr_concurrent   = 0
                      + size                 = "32G"
                      + storage              = "containers-and-vms"
                    }
                }
            }
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }

      + smbios (known after apply)
    }

  # proxmox_vm_qemu.cloudinit-test2 will be created
  + resource "proxmox_vm_qemu" "cloudinit-test2" {
      + additional_wait        = 5
      + agent                  = 1
      + automatic_reboot       = true
      + balloon                = 0
      + bios                   = "seabios"
      + boot                   = "order=virtio0"
      + bootdisk               = (known after apply)
      + clone                  = "ubuntu-2404-template2"
      + clone_wait             = 10
      + cores                  = 2
      + cpu                    = "host"
      + default_ipv4_address   = (known after apply)
      + default_ipv6_address   = (known after apply)
      + define_connection_info = true
      + desc                   = "A test for using terraform and cloudinit"
      + force_create           = false
      + full_clone             = true
      + hotplug                = "network,disk,usb"
      + id                     = (known after apply)
      + ipconfig0              = "ip=dhcp,ip6=dhcp"
      + kvm                    = true
      + linked_vmid            = (known after apply)
      + memory                 = 2048
      + name                   = "terraform-test-vm-2"
      + nameserver             = (known after apply)
      + onboot                 = false
      + os_type                = "cloud-init"
      + protection             = false
      + reboot_required        = (known after apply)
      + scsihw                 = "virtio-scsi-single"
      + searchdomain           = (known after apply)
      + skip_ipv4              = false
      + skip_ipv6              = true
      + sockets                = 1
      + ssh_host               = (known after apply)
      + ssh_port               = (known after apply)
      + sshkeys                = "ssh-ed25519 &amp;lt;My SSH Public Key&amp;gt; aurelia@desktop"
      + tablet                 = true
      + tags                   = (known after apply)
      + target_node            = "thebeast"
      + unused_disk            = (known after apply)
      + vcpus                  = 0
      + vm_state               = "running"
      + vmid                   = (known after apply)

      + disks {
          + ide {
              + ide3 {
                  + cloudinit {
                      + storage = "local-lvm"
                    }
                }
            }
          + virtio {
              + virtio0 {
                  + disk {
                      + backup               = true
                      + discard              = true
                      + format               = "raw"
                      + id                   = (known after apply)
                      + iops_r_burst         = 0
                      + iops_r_burst_length  = 0
                      + iops_r_concurrent    = 0
                      + iops_wr_burst        = 0
                      + iops_wr_burst_length = 0
                      + iops_wr_concurrent   = 0
                      + iothread             = true
                      + linked_disk_id       = (known after apply)
                      + mbps_r_burst         = 0
                      + mbps_r_concurrent    = 0
                      + mbps_wr_burst        = 0
                      + mbps_wr_concurrent   = 0
                      + size                 = "32G"
                      + storage              = "containers-and-vms"
                    }
                }
            }
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }

      + smbios (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The plan looks good. Let's apply it!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aurelia@desktop:~/work/home_infra$ terraform apply plan.out
proxmox_vm_qemu.cloudinit-test1: Creating...
proxmox_vm_qemu.cloudinit-test2: Creating...
proxmox_vm_qemu.cloudinit-test1: Still creating... [10s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [10s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [20s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [20s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [30s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [30s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [40s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [40s elapsed]
proxmox_vm_qemu.cloudinit-test1: Still creating... [50s elapsed]
proxmox_vm_qemu.cloudinit-test2: Still creating... [50s elapsed]
proxmox_vm_qemu.cloudinit-test2: Creation complete after 51s [id=thebeast/qemu/101]
proxmox_vm_qemu.cloudinit-test1: Creation complete after 55s [id=thebeast/qemu/100]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And sure enough, if I go into Proxmox, I can see both VMs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdcnvtbey5jpko2nq8dr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdcnvtbey5jpko2nq8dr.png" alt="Test VM #1" width="800" height="409"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrtkduc64b20x4yglotv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrtkduc64b20x4yglotv.png" alt="Test VM #2" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some notes on the process: I had to set up the template VM (&lt;code&gt;ubuntu-2404-template2&lt;/code&gt;) before I could clone any VMs from it, and as far as Proxmox is concerned it needs to be a full VM, not a (Proxmox) template. Also, make sure the VMs you clone from this template via Terraform use the same block device driver (i.e. &lt;code&gt;scsi&lt;/code&gt;, &lt;code&gt;ide&lt;/code&gt;, or &lt;code&gt;virtio&lt;/code&gt;) for the boot drive as the template does. I initially created a template VM with a SCSI boot drive and tried to create a clone in Terraform using VirtIO; the cloned VM just rebooted endlessly with the message "No valid boot device" in the console. That may be down to my inexperience, though, and as always, YMMV.&lt;/p&gt;

&lt;p&gt;And there you have it! In my next article, I'll show you how to provision your VMs using &lt;a href="https://cloud-init.io/" rel="noopener noreferrer"&gt;Cloud-Init&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>homelab</category>
    </item>
    <item>
      <title>Getting started with Kubernetes: Introduction</title>
      <dc:creator>Aurelia Peters</dc:creator>
      <pubDate>Thu, 30 May 2024 02:00:18 +0000</pubDate>
      <link>https://dev.to/popefelix/getting-started-with-kubernetes-introduction-27m6</link>
      <guid>https://dev.to/popefelix/getting-started-with-kubernetes-introduction-27m6</guid>
      <description>&lt;p&gt;I've been hearing a lot about Kubernetes of late, so I figured I ought to learn it.&lt;/p&gt;

&lt;p&gt;To begin with, I don't have a home lab as such, so I'll run everything through &lt;a href="https://minikube.sigs.k8s.io/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;. I may switch things up later to get a more "realistic" k8s install, but minikube will be fine for a start. I'm also starting with the &lt;a href="https://kubernetes.io/docs/tutorials/" rel="noopener noreferrer"&gt;tutorials&lt;/a&gt; provided by the Kubernetes developers.&lt;/p&gt;

&lt;p&gt;I've set up a &lt;a href="https://github.com/PopeFelix/k8s-sample-projects/tree/main" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; where you can follow along with my progress. &lt;/p&gt;

&lt;p&gt;As the series goes on, feel free to leave any questions or comments you might have.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>The NgRx Component Store is neat!</title>
      <dc:creator>Aurelia Peters</dc:creator>
      <pubDate>Mon, 18 Dec 2023 21:10:41 +0000</pubDate>
      <link>https://dev.to/popefelix/the-ngrx-component-store-is-neat-20f1</link>
      <guid>https://dev.to/popefelix/the-ngrx-component-store-is-neat-20f1</guid>
      <description>&lt;p&gt;&lt;a href="https://stackblitz.com/edit/aurelia-ngrx-component-store-example?file=src%2Fmain.ts" rel="noopener noreferrer"&gt;See this project on Stackblitz&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lately I've been messing around with the &lt;a href="https://ngrx.io/guide/component-store" rel="noopener noreferrer"&gt;NgRx Component Store&lt;/a&gt;, which is a sort of stripped down version of their &lt;a href="https://ngrx.io/guide/store" rel="noopener noreferrer"&gt;Store&lt;/a&gt; software. The central idea is that your Angular application (or in my case, component) has a central data store class that manages the application / component state. What is "state" in this context? It's all the data that your application / component works with. The larger application that I'm currently working on uses worker processes to handle certain long-running jobs, so in my case, the state of my component is the execution status of those jobs, as reported by an HTTP backend.&lt;/p&gt;

&lt;p&gt;A job status record has the following fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;worker&lt;/code&gt; - the name of the worker that is recording this status entry. For example, if you had a worker that was transcoding a video stream, you might call it "transcode".&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;job&lt;/code&gt; - The name of the job the worker is executing. Using the previous example of transcoding a video stream, you might call it "home-movie-1994-11-17.mpg".&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;jobType&lt;/code&gt; - This is used to give more information about the job. Using the previous example, you might use "mpg_to_mp4".&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;jobStatus&lt;/code&gt; - This is used to record the status of the job. For the previous example, you might put a completion percentage here. You could also use something like "Received", "In progress", or "Complete" for jobs that aren't easily broken down into a percent complete.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;error&lt;/code&gt; - This is an optional field, only populated when something's gone wrong with the job. It will contain a (hopefully useful) error message.&lt;/li&gt;
&lt;/ul&gt;
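&lt;p&gt;Put together, a status record for the transcoding example might look like the following. This is just a sketch: the interface name and which fields are optional are my assumptions based on the descriptions above.&lt;br&gt;
&lt;/p&gt;

```typescript
// Shape of a job status record, per the field list above.
// jobType and error are marked optional here; that's an assumption.
interface WorkerJobStatus {
  worker: string; // name of the worker recording this entry
  job: string; // name of the job the worker is executing
  jobType?: string; // more information about the job
  jobStatus: string; // a percentage, or e.g. "Received" / "In progress" / "Complete"
  error?: string; // only populated when something's gone wrong
}

// Illustrative record for the video-transcoding example
const example: WorkerJobStatus = {
  worker: 'transcode',
  job: 'home-movie-1994-11-17.mpg',
  jobType: 'mpg_to_mp4',
  jobStatus: 'In progress',
};
```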

&lt;p&gt;To display these job status records, let's put them in a table. &lt;br&gt;
Before I learned about NgRx, I would have gone about writing this component by first writing a &lt;a href="https://angular.io/guide/architecture-services" rel="noopener noreferrer"&gt;service class&lt;/a&gt; that mediated the interaction with the backend. I then would have generated a &lt;a href="https://angular.io/guide/architecture-components" rel="noopener noreferrer"&gt;component&lt;/a&gt; that interacted with the service class and fed data to the HTML template. If I wanted to be able to filter my results, I would have added form fields to the HTML template and added listeners on each form field's &lt;code&gt;onChange&lt;/code&gt; event. The component class would be fairly large in terms of lines of code with all these listeners and whatnot, and I'd have to do some jiggery-pokery with &lt;a href="https://rxjs.dev/" rel="noopener noreferrer"&gt;RxJs&lt;/a&gt; to get everything plumbed up properly. I would also have had to manage the &lt;a href="https://rxjs.dev/guide/subscription" rel="noopener noreferrer"&gt;Subscriptions&lt;/a&gt; for each form field to make sure they all got unsubscribed whenever the component was destroyed. And that's all stuff I've done before, but it's kind of a lot.&lt;/p&gt;

&lt;p&gt;With NgRx, however, this all becomes much easier. I create the service class as before, but then I create a &lt;a href="https://ngrx.io/guide/component-store" rel="noopener noreferrer"&gt;component store&lt;/a&gt;. This component store is going to manage &lt;em&gt;all&lt;/em&gt; of the data for my component. If I want the data laid out a certain way (in my case, I wanted a unique list of worker names, job IDs, job types, and job statuses), the code to do that will be in the component store. &lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://stackblitz.com/edit/aurelia-ngrx-component-store-example?embed=1&amp;amp;file=src%2Fworker-job-status%2Fworker-job-status.store.ts" width="100%" height="500"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The most important thing in this class is the definition of &lt;code&gt;WorkerJobStatusState&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export interface WorkerJobStatusState {
  statuses: WorkerJobStatus[];
  worker: string[];
  job: string[];
  jobType: string[];
  jobStatus: string[];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This encapsulates the &lt;em&gt;entire&lt;/em&gt; state of the component at any given time. Notice how it doesn't just have the list of status records. The values of each of the filters are part of the state as well. &lt;/p&gt;

&lt;p&gt;The real beauty of a component store, I think, comes in the selectors, which are pure functions providing a custom view of the component state. They can be composed together, as shown in the &lt;code&gt;selectedStatuses$&lt;/code&gt; selector, below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;readonly selectedStatuses$ = this.select(
    this.statuses$,
    this.selectedWorker$,
    this.selectedJob$,
    this.selectedJobType$,
    this.selectedJobStatus$,
    (
      statuses,
      selectedWorkers,
      selectedJobs,
      selectedJobTypes,
      selectedJobStatuses
    ) =&amp;gt;
      statuses.filter(
        (status) =&amp;gt;
          (selectedWorkers.length
            ? selectedWorkers.includes(status.worker)
            : true) &amp;amp;&amp;amp;
          (selectedJobs.length ? selectedJobs.includes(status.job) : true) &amp;amp;&amp;amp;
          (status.jobType &amp;amp;&amp;amp; selectedJobTypes.length
            ? selectedJobTypes.includes(status.jobType)
            : true) &amp;amp;&amp;amp;
          (selectedJobStatuses.length
            ? selectedJobStatuses.includes(status.jobStatus)
            : true)
      )
  );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Twenty lines of code handle &lt;em&gt;everything&lt;/em&gt; around the status records: we take in the list of statuses provided by the backend service and the values of the filters, and we return the list of matching statuses.&lt;/p&gt;

&lt;p&gt;Now let's look at the component class for the status list. Notice how the only external class it exchanges data with is the store, and notice how it doesn't do &lt;em&gt;any&lt;/em&gt; massaging / modification / filtering of the data the store provides. All of that is handled by the store.&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://stackblitz.com/edit/aurelia-ngrx-component-store-example?embed=1&amp;amp;file=src%2Fworker-job-status%2Fworker-job-status-list%2Fworker-job-status-list.component.html" width="100%" height="500"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We need to connect the filter form fields to the store, and we do that with these &lt;a href="https://ngrx.io/guide/component-store/write#updater-method" rel="noopener noreferrer"&gt;updater methods&lt;/a&gt;, which are methods provided by the component store allowing the caller to update the state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    this._workerJobStatusStore.workersSelected(
      this.selectedWorker.valueChanges
    );
    this._workerJobStatusStore.jobsSelected(this.selectedJob.valueChanges);
    this._workerJobStatusStore.jobTypesSelected(
      this.selectedJobType.valueChanges
    );
    this._workerJobStatusStore.jobStatusesSelected(
      this.selectedJobStatus.valueChanges
    );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that all we have to do in the way of plumbing is to pass the &lt;code&gt;valueChanges&lt;/code&gt; Observable from each of the filtering form fields to the updater functions. NgRx will handle everything else for us, including unsubscribing when the component that instantiated the store is destroyed.&lt;/p&gt;
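&lt;p&gt;Under the hood, each of those updaters boils down to a pure &lt;code&gt;(state, value) =&amp;gt; state&lt;/code&gt; transition; NgRx wraps it so it can also accept an Observable and manage the subscription. Here's a framework-free sketch of what &lt;code&gt;workersSelected&lt;/code&gt; does, assuming the &lt;code&gt;worker&lt;/code&gt; field of the state holds the selected filter values (&lt;code&gt;applyWorkersSelected&lt;/code&gt; is a hypothetical name for illustration):&lt;br&gt;
&lt;/p&gt;

```typescript
// The component state, as defined earlier in the article.
interface WorkerJobStatusState {
  statuses: unknown[];
  worker: string[];
  job: string[];
  jobType: string[];
  jobStatus: string[];
}

// Framework-free equivalent of the workersSelected updater: a pure
// state transition that returns a new state object without mutating
// the old one. applyWorkersSelected is a hypothetical name.
function applyWorkersSelected(
  state: WorkerJobStatusState,
  selectedWorkers: string[],
): WorkerJobStatusState {
  return { ...state, worker: selectedWorkers };
}
```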

&lt;p&gt;Now, if you look at the component code, you will notice that it doesn't do anything to connect the component store to the backend service. The component has no idea there even &lt;em&gt;is&lt;/em&gt; a backend service, and that's intentional. The worker job status list component is intended to be purely presentational. The data store is provided by &lt;a href="https://angular.io/guide/dependency-injection-overview" rel="noopener noreferrer"&gt;dependency injection&lt;/a&gt;, but how, you may ask, does the store get connected to the backend service? That gets handled in the container component:&lt;/p&gt;

&lt;p&gt;&lt;iframe src="https://stackblitz.com/edit/aurelia-ngrx-component-store-example?embed=1&amp;amp;file=src%2Fworker-job-status%2Fworker-job-status-container%2Fworker-job-status-container.component.ts" width="100%" height="500"&gt;
&lt;/iframe&gt;
 &lt;/p&gt;

&lt;p&gt;The reason I use a container component here is so that I have a single module that provides the data store and populates the state. If I want to add a status detail component later, one that maybe would provide the logs for each job, I could add another presentational component to do that, but the data store would still be provided by the container.&lt;/p&gt;

&lt;p&gt;OK, so I've added a lot of extra abstraction to what could have been a fairly simple component. Why? What good will this do me? I'll tell you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testability - By abstracting the data interaction into the data store class, I can test those methods without having to worry about setting up the component in my test harness or mocking the backend service.&lt;/li&gt;
&lt;li&gt;Maintainability - By centralizing common functionality into a single class, I don't have to (for example) populate the data store for every component.&lt;/li&gt;
&lt;li&gt;Extensibility - It's easy to add a new component that uses the same data store.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And I think it makes the code more readable. The job status list component only handles displaying the list. The container component only handles populating the data store. The data store class only handles viewing / modifying the state. Sure, this was a fairly simple example, but for more complex components, this could be invaluable.&lt;/p&gt;

&lt;p&gt;So what do you think? Have you worked with component store / NgRx at all? What were your experiences like? &lt;/p&gt;

</description>
      <category>angular</category>
      <category>ngrx</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
