<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joe Block</title>
    <description>The latest articles on DEV Community by Joe Block (@unixorn).</description>
    <link>https://dev.to/unixorn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F508812%2Ff16cf1f2-8aaf-4ec7-a258-b49a3bb0bca8.jpeg</url>
      <title>DEV Community: Joe Block</title>
      <link>https://dev.to/unixorn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/unixorn"/>
    <language>en</language>
    <item>
      <title>Creating a Talos cluster with a Cilium CNI on Proxmox</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 04 Jan 2026 16:27:42 +0000</pubDate>
      <link>https://dev.to/unixorn/creating-a-talos-cluster-with-a-cilium-cni-on-proxmox-3k01</link>
      <guid>https://dev.to/unixorn/creating-a-talos-cluster-with-a-cilium-cni-on-proxmox-3k01</guid>
      <description>&lt;p&gt;I’ve been meaning to set up a talos cluster in my homelab for a while and set one up over the holiday break. Here’s how I did it.&lt;/p&gt;

&lt;p&gt;All the blogs and videos I looked at used the nginx ingress, which would be fine, except that the nginx ingress is a dead man walking and will be unsupported starting in March of 2026. No patches, no security updates, completely unsupported.&lt;/p&gt;

&lt;p&gt;Based on some advice in the hangops Slack (thanks, Brandon!), I wanted to use Cilium since it supports the Gateway API and can also do ARP announcements like MetalLB.&lt;/p&gt;

&lt;p&gt;This is part one of a series I'm writing as I get my homelab cluster up and running.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talos Homelab Setup Series
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;01 Creating a Talos cluster with a Cilium CNI on Proxmox&lt;/li&gt;
&lt;li&gt;&lt;a href="https://unixorn.github.io/02-k8s-cilium-r53-and-cert-manager/" rel="noopener noreferrer"&gt;02 Add SSL to Kubernetes using Cilium, cert-manager and LetsEncrypt with domains hosted on Amazon Route 53&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.proxmox.com/en/downloads" rel="noopener noreferrer"&gt;proxmox&lt;/a&gt; will make it easier to rebuild your cluster if you make a mistake, but these instructions will work with bare metal as well.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cilium&lt;/code&gt;, &lt;code&gt;kubectl&lt;/code&gt; &amp;amp; &lt;code&gt;helm&lt;/code&gt; CLI tools. If you don't want to &lt;code&gt;brew install&lt;/code&gt; them or are not using a Mac, installation instructions are at &lt;a href="https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli" rel="noopener noreferrer"&gt;cilium.io&lt;/a&gt;, &lt;a href="https://helm.sh" rel="noopener noreferrer"&gt;helm.sh&lt;/a&gt; and &lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt;. A Homebrew one-liner follows this list.&lt;/li&gt;
&lt;/ul&gt;
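
&lt;p&gt;If you're on a Mac with Homebrew, something like this should cover all three (a sketch; the formula names &lt;code&gt;cilium-cli&lt;/code&gt;, &lt;code&gt;helm&lt;/code&gt; and &lt;code&gt;kubectl&lt;/code&gt; are my assumption, so double-check them):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Formula names assumed; verify with brew search if they've changed
brew install cilium-cli helm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;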

&lt;h3&gt;
  
  
  Software Versions
&lt;/h3&gt;

&lt;p&gt;Here are the versions of the software I used while writing this post. Later versions should work, but this is what these instructions were tested with.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Software&lt;/th&gt;
&lt;th&gt;Version&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;helm&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;4.0.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubectl&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.34&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;kubernetes&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.34.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;talos&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;1.11.5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Proxmox VM control plane node
&lt;/h2&gt;

&lt;p&gt;There are a ton of videos and blogs describing getting started with proxmox, so I'm not going to go into a lot of detail here. The TL;DR is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to the Talos &lt;a href="https://factory.talos.dev/" rel="noopener noreferrer"&gt;image factory&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create an image.

&lt;ol&gt;
&lt;li&gt;Start with the &lt;strong&gt;Cloud Server&lt;/strong&gt; option&lt;/li&gt;
&lt;li&gt;Select the latest Talos version&lt;/li&gt;
&lt;li&gt;Pick &lt;code&gt;nocloud&lt;/code&gt; from the cloud type screen since it explicitly mentions proxmox in the description&lt;/li&gt;
&lt;li&gt;Select your architecture&lt;/li&gt;
&lt;li&gt;You should now be on the &lt;strong&gt;System Extensions&lt;/strong&gt; page. Pick &lt;code&gt;qemu-guest-agent&lt;/code&gt; and &lt;code&gt;util-linux-tools&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Pick &lt;code&gt;auto&lt;/code&gt; for the bootloader.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;You should now see &lt;strong&gt;Schematic Ready&lt;/strong&gt;. Copy the ISO link.&lt;/li&gt;

&lt;li&gt;Download the ISO into your Proxmox host's ISO storage (a command-line sketch follows this list).&lt;/li&gt;

&lt;li&gt;Start a new VM with at least 4 CPUs, 4 GB of RAM, and at least a 16 GB drive. Make sure you enable the qemu agent. If you're planning to use this for real workloads later, you'll want to go bigger; have a look at Sidero's &lt;a href="https://docs.siderolabs.com/talos/v1.9/getting-started/system-requirements" rel="noopener noreferrer"&gt;System Requirements&lt;/a&gt; page. I went with 100 GB per their recommendations.&lt;/li&gt;

&lt;li&gt;Wait until the Talos dashboard appears, then copy the new node's IP address.&lt;/li&gt;

&lt;li&gt;On your DHCP server, find the IP from step 6 and set it as a static assignment. The server will reboot multiple times during installation, and if its IP changes you will have to update your &lt;code&gt;kubeconfig&lt;/code&gt; and &lt;code&gt;talosconfig&lt;/code&gt; files. If it changes IP addresses after you add a worker node, things will break, so spare yourself future aggravation and give it a static assignment from the beginning.&lt;/li&gt;

&lt;/ol&gt;
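
&lt;p&gt;If you'd rather fetch the ISO from the command line, something like this on the Proxmox host works (a sketch; the URL placeholder is whatever the image factory gave you, and the path assumes the default &lt;code&gt;local&lt;/code&gt; storage):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Run on the Proxmox host; the default "local" storage keeps ISOs here
cd /var/lib/vz/template/iso
wget -O talos-nocloud-amd64.iso "PASTE_THE_IMAGE_FACTORY_ISO_URL_HERE"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;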

&lt;p&gt;Once the Talos dashboard on the VM console shows that it is in maintenance mode, you can start to configure it. Again, make sure your DHCP server is assigning it a static IP; that will save you aggravation later.&lt;/p&gt;

&lt;p&gt;We're going to make a single-node cluster to simplify things since it's just for learning. You can easily add worker nodes later once you want to put real workloads in the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure the cluster control plane
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setup some environment variables
&lt;/h3&gt;

&lt;p&gt;Set &lt;code&gt;CLUSTER_NAME&lt;/code&gt; and &lt;code&gt;CONTROL_PLANE_IP&lt;/code&gt; environment variables to make copying commands from the post easier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CLUSTER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sisyphus
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CONTROL_PLANE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.0.1.51
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Find out what disks are on the server
&lt;/h3&gt;

&lt;p&gt;The Talos installer needs to know what device is the node's hard drive, so use &lt;code&gt;talosctl&lt;/code&gt; to get the available disks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl get disks &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTROL_PLANE_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see something like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NODE        NAMESPACE   TYPE   ID      VERSION   SIZE     READ ONLY   TRANSPORT   ROTATIONAL   WWID   MODEL           SERIAL
10.0.1.51   runtime     Disk   loop0   2         73 MB    &lt;span class="nb"&gt;true
&lt;/span&gt;10.0.1.51   runtime     Disk   sda     2         17 GB    &lt;span class="nb"&gt;false       &lt;/span&gt;virtio      &lt;span class="nb"&gt;true                &lt;/span&gt;QEMU HARDDISK
10.0.1.51   runtime     Disk   sr0     2         317 MB   &lt;span class="nb"&gt;false       &lt;/span&gt;ata         &lt;span class="nb"&gt;true                &lt;/span&gt;QEMU DVD-ROM    QEMU_DVD-ROM_QM00003
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our node's hard drive is &lt;code&gt;sda&lt;/code&gt;, so&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DISK_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sda
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a cluster patch file
&lt;/h3&gt;

&lt;p&gt;We're going to use Cilium as the CNI and also have it replace &lt;code&gt;kube-proxy&lt;/code&gt;, so let's create the cluster with no CNI and disable &lt;code&gt;kube-proxy&lt;/code&gt;. To do that, we're going to create a patch file we can use when we generate the cluster's configuration with &lt;code&gt;talosctl&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cluster-patch.yaml&lt;/span&gt;
&lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cni&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
  &lt;span class="na"&gt;proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="c1"&gt;# Disable kube-proxy, Cilium will replace it too&lt;/span&gt;
    &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Generate the talos configuration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl gen config &lt;span class="nv"&gt;$CLUSTER_NAME&lt;/span&gt; &lt;span class="s2"&gt;"https://&lt;/span&gt;&lt;span class="nv"&gt;$CONTROL_PLANE_IP&lt;/span&gt;&lt;span class="s2"&gt;:6443"&lt;/span&gt; &lt;span class="nt"&gt;--install-disk&lt;/span&gt; &lt;span class="s2"&gt;"/dev/&lt;/span&gt;&lt;span class="nv"&gt;$DISK_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @cluster-patch.yaml
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TALOSCONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/talosconfig"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Initialize the cluster
&lt;/h3&gt;

&lt;p&gt;I like to test things in short-lived clusters so I don't have to worry about breaking things that my internal services depend on. I name nodes &lt;code&gt;clustername-role-number&lt;/code&gt; so that when I look at their proxmox console, it's immediately clear which cluster a node belongs to and what its role is.&lt;/p&gt;

&lt;p&gt;Here's how to create a patch file that sets the node name when we apply our configuration. We also want to enable scheduling on the control plane node since we're setting up a single-node cluster.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;controlplane-1-patch.yaml&lt;/code&gt; that includes the hostname you want and sets &lt;code&gt;allowSchedulingOnControlPlanes&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# controlplane-1-patch.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sisyphus-cn-1&lt;/span&gt;
&lt;span class="na"&gt;cluster&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;allowSchedulingOnControlPlanes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the configuration to your control plane node to initialize the cluster with the merged &lt;code&gt;controlplane.yaml&lt;/code&gt; (generated by &lt;code&gt;talosctl gen config&lt;/code&gt; above) and &lt;code&gt;controlplane-1-patch.yaml&lt;/code&gt; files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl apply-config &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="nv"&gt;$CONTROL_PLANE_IP&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--file&lt;/span&gt; controlplane.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @controlplane-1-patch.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bootstrap etcd in the cluster
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;ONLY DO THIS ONCE!&lt;/em&gt; Wait until you see &lt;code&gt;etcd is waiting to join the cluster&lt;/code&gt; in the bottom portion of the dashboard. Depending on how fast your proxmox host is, this can take 5-10 minutes.&lt;/p&gt;

&lt;p&gt;There will be some error messages and it will look like nothing is happening; be patient, it &lt;em&gt;will&lt;/em&gt; get back to ready. I think this is because we're configuring the cluster without a CNI so we can use Cilium, and/or because we disable kube-proxy since Cilium replaces that functionality too.&lt;/p&gt;

&lt;p&gt;The first time I stood up a cluster without a CNI, it took long enough that I thought I'd broken the configuration. It wasn't until I kicked it off and then went to cook dinner that I gave it enough time to settle down.&lt;/p&gt;

&lt;p&gt;So be patient; at least you only have to do this once per cluster.&lt;/p&gt;
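
&lt;p&gt;Once the dashboard shows that message, bootstrap etcd with &lt;code&gt;talosctl&lt;/code&gt;; a minimal sketch, assuming the &lt;code&gt;TALOSCONFIG&lt;/code&gt; exported earlier and the control plane IP from above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Run this exactly once per cluster
talosctl bootstrap --nodes "$CONTROL_PLANE_IP" --endpoints "$CONTROL_PLANE_IP"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;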

&lt;h3&gt;
  
  
  Create a kubeconfig file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl kubeconfig sisyphus-kubeconfig &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTROL_PLANE_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/sisyphus-kubeconfig"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Confirm that the cluster came up
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see something similar to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME              STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION   CONTAINER-RUNTIME
sisyphus-cn-1     Ready    control-plane   5m      v1.34.1   10.0.1.51     &amp;lt;none&amp;gt;        Talos (v1.11.5)   6.12.57-talos    containerd://2.1.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install cilium
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First install the CRDs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/standard/gateway.networking.k8s.io_grpcroutes.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v1.2.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Confirm the gateway classes are present
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get crd gatewayclasses.gateway.networking.k8s.io gateways.gateway.networking.k8s.io httproutes.gateway.networking.k8s.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME                                       CREATED AT
gatewayclasses.gateway.networking.k8s.io   2026-01-02T04:20:13Z
gateways.gateway.networking.k8s.io         2026-01-02T04:20:14Z
httproutes.gateway.networking.k8s.io       2026-01-02T04:20:15Z
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up the cilium helm repo
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add cilium https://helm.cilium.io/
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Install cilium
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cilium &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--version&lt;/span&gt; 1.18.1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ipam.mode&lt;span class="o"&gt;=&lt;/span&gt;kubernetes &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;kubeProxyReplacement&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;securityContext.capabilities.ciliumAgent&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{CHOWN,KILL,NET_ADMIN,NET_RAW,IPC_LOCK,SYS_ADMIN,SYS_RESOURCE,DAC_OVERRIDE,FOWNER,SETGID,SETUID}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;securityContext.capabilities.cleanCiliumState&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{NET_ADMIN,SYS_ADMIN,SYS_RESOURCE}"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cgroup.autoMount.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cgroup.hostRoot&lt;span class="o"&gt;=&lt;/span&gt;/sys/fs/cgroup &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;l2announcements.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;externalIPs.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set&lt;/span&gt; gatewayAPI.enabled&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;devices&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;e+ &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--helm-set&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;operator.replicas&lt;span class="o"&gt;=&lt;/span&gt;1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see something like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ℹ️  Using Cilium version 1.18.1
🔮 Auto-detected cluster name: sisyphus
🔮 Auto-detected kube-proxy has not been installed
ℹ️  Cilium will fully replace all functionalities of kube-proxy
I0101 21:25:52.110400   48637 warnings.go:110] &lt;span class="s2"&gt;"Warning: spec.SessionAffinity is ignored for headless services"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This took several minutes to come up on my sisyphus cluster controller with 2 cores and 4 GB of RAM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confirm cilium status
&lt;/h3&gt;

&lt;p&gt;Confirm that Cilium is fully up and has no errors.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;❯ cilium status
    /¯¯&lt;span class="se"&gt;\&lt;/span&gt;
 /¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\ &lt;/span&gt;   Cilium:             OK
 &lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/    Operator:           OK
 /¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\ &lt;/span&gt;   Envoy DaemonSet:    OK
 &lt;span class="se"&gt;\_&lt;/span&gt;_/¯¯&lt;span class="se"&gt;\_&lt;/span&gt;_/    Hubble Relay:       disabled
    &lt;span class="se"&gt;\_&lt;/span&gt;_/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium-envoy             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 1
                       cilium-envoy             Running: 1
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          2/2 managed by Cilium
Helm chart version:    1.18.1
Image versions         cilium             quay.io/cilium/cilium:v1.18.1@sha256:65ab17c052d8758b2ad157ce766285e04173722df59bdee1ea6d5fda7149f0e9: 1
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.4-1754895458-68cffdfa568b6b226d70a7ef81fc65dda3b890bf@sha256:247e908700012f7ef56f75908f8c965215c26a27762f296068645eb55450bda2: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.18.1@sha256:97f4553afa443465bdfbc1cc4927c93f16ac5d78e4dd2706736e7395382201bc: 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Update talosconfig
&lt;/h3&gt;

&lt;p&gt;Your &lt;code&gt;talosconfig&lt;/code&gt; file will start with something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sisyphus&lt;/span&gt;
&lt;span class="na"&gt;contexts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sisyphus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.0.1.51&lt;/span&gt;
        &lt;span class="na"&gt;ca&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Update it to include a &lt;code&gt;nodes&lt;/code&gt; entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sisyphus&lt;/span&gt;
&lt;span class="na"&gt;contexts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;sisyphus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.0.1.51&lt;/span&gt;
        &lt;span class="na"&gt;nodes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.0.1.51&lt;/span&gt;
        &lt;span class="na"&gt;ca&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will keep you from constantly having to specify &lt;code&gt;--nodes&lt;/code&gt; for your &lt;code&gt;talosctl&lt;/code&gt; commands.&lt;/p&gt;

&lt;h3&gt;
  
  
  Confirm that the cluster is showing healthy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Check external connectivity to cluster services
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First, test a &lt;code&gt;LoadBalancer&lt;/code&gt; service
&lt;/h3&gt;

&lt;p&gt;Make a &lt;code&gt;playground&lt;/code&gt; directory and put the following files in it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create the playground namespace
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 01-create-namespace.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespace&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kubernetes.io/metadata.name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create an IP Pool and Announcement Policy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 02-cilium-setup.yaml&lt;/span&gt;
&lt;span class="c1"&gt;# Create our list of IPs&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cilium.io/v2alpha1"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiliumLoadBalancerIPPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default-pool"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;blocks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;start&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.0.1.160"&lt;/span&gt; &lt;span class="c1"&gt;# Use IPs that are outside of your DHCP range but on&lt;/span&gt;
    &lt;span class="na"&gt;stop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.0.1.170"&lt;/span&gt;  &lt;span class="c1"&gt;# the same /24 as your talos VM.&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cilium.io/v2alpha1"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiliumL2AnnouncementPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;l2-announcement-policy&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;serviceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="c1"&gt;# On a multi-node cluster, you may not want the control-plane nodes&lt;/span&gt;
  &lt;span class="c1"&gt;# making arp announcements. Uncomment the nodeSelector stanza here&lt;/span&gt;
  &lt;span class="c1"&gt;# to disable that.&lt;/span&gt;
  &lt;span class="c1"&gt;# nodeSelector:&lt;/span&gt;
  &lt;span class="c1"&gt;#   matchExpressions:&lt;/span&gt;
  &lt;span class="c1"&gt;#     - key: node-role.kubernetes.io/control-plane&lt;/span&gt;
  &lt;span class="c1"&gt;#       operator: DoesNotExist&lt;/span&gt;
  &lt;span class="na"&gt;externalIPs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;loadBalancerIPs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="c1"&gt;# Different hardware will show different network device names.&lt;/span&gt;
  &lt;span class="c1"&gt;# This list of regexes (in Golang format) will find all the common&lt;/span&gt;
  &lt;span class="c1"&gt;# naming schemes I've seen for network devices so that Cilium can&lt;/span&gt;
  &lt;span class="c1"&gt;# find a network interface to make arp announcements.&lt;/span&gt;
  &lt;span class="na"&gt;interfaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;^eth+&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;^enp+&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;^ens+&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;^wlan+&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;^vmbr+&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;^wlp+&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Create a deployment and service in the playground
&lt;/h4&gt;

&lt;p&gt;Talos is focused on giving you a secure-by-default cluster, so you can't just use &lt;code&gt;kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0&lt;/code&gt;; talos requires you to configure the &lt;code&gt;securityContext&lt;/code&gt;. Here's an example nginx deployment that runs as a non-root user and specifies the pod's resource requirements to satisfy the Pod Security Admission configuration that ships with talos. More info about that is &lt;a href="https://docs.siderolabs.com/kubernetes-guides/security/pod-security" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# 03-playground-nginx.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-service&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Tells Cilium to manage this IP via LB IPAM&lt;/span&gt;
    &lt;span class="na"&gt;cilium.io/lb-ipam-pool-name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default-pool"&lt;/span&gt;
    &lt;span class="c1"&gt;# Optional: For L2/BGP to announce this IP&lt;/span&gt;
    &lt;span class="na"&gt;cilium.io/assign-internal-ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt; &lt;span class="c1"&gt;# Or use externalIPs:&lt;/span&gt;
    &lt;span class="c1"&gt;# If using externalIPs:&lt;/span&gt;
    &lt;span class="c1"&gt;# kubernetes.io/ingress.class: "cilium" # For Ingress&lt;/span&gt;
    &lt;span class="c1"&gt;# For a specific IP&lt;/span&gt;
    &lt;span class="c1"&gt;# lbipam.cilium.io/ips: "192.168.1.50" # The specific IP you want Cilium to answer on&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-pod&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-app&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-pod&lt;/span&gt;  &lt;span class="c1"&gt;# &amp;lt;-- must match pod labels exactly&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-pod&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Talos is very security oriented, so we have to set up the&lt;/span&gt;
      &lt;span class="c1"&gt;# security context explicitly&lt;/span&gt;
      &lt;span class="c1"&gt;# ---------- Pod‑level security settings ----------&lt;/span&gt;
      &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;runAsNonRoot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;runAsUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;101&lt;/span&gt; &lt;span class="c1"&gt;# non‑root UID that the image can run as&lt;/span&gt;
        &lt;span class="na"&gt;seccompProfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RuntimeDefault&lt;/span&gt;
        &lt;span class="c1"&gt;# Uncomment if you need a shared FS group for volume writes&lt;/span&gt;
        &lt;span class="c1"&gt;# fsGroup: 101&lt;/span&gt;
      &lt;span class="c1"&gt;# ---------- Containers ----------&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginx-playground-container&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nginxinc/nginx-unprivileged:latest&lt;/span&gt; &lt;span class="c1"&gt;# Alpine, but we force a non‑root UID&lt;/span&gt;
          &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
          &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
          &lt;span class="c1"&gt;# talos requires us to specify our resources instead of&lt;/span&gt;
          &lt;span class="c1"&gt;# letting k8s YOLO them&lt;/span&gt;
          &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;250m"&lt;/span&gt;
            &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;128Mi"&lt;/span&gt;
              &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;500m"&lt;/span&gt;
          &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;allowPrivilegeEscalation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
            &lt;span class="na"&gt;capabilities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;drop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ALL&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Deploy the playground
&lt;/h4&gt;

&lt;p&gt;If you pass a directory name to &lt;code&gt;kubectl&lt;/code&gt; with &lt;code&gt;-f&lt;/code&gt;, it will apply (or delete) all resources found in the &lt;code&gt;.yaml&lt;/code&gt; files in that directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
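
&lt;p&gt;Give the pod a few seconds to pull the image and start, then make sure it's actually running before poking at the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods -n playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;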



&lt;h5&gt;
  
  
  See what IP the playground is using
&lt;/h5&gt;

&lt;p&gt;It will almost certainly be the first IP address in your IP Pool, but confirm that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get service &lt;span class="nt"&gt;-n&lt;/span&gt; playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;        AGE
playground-nginx-service   LoadBalancer   10.111.180.4   10.0.1.160    80:30597/TCP   1m15s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Confirm Connectivity
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://THE_EXTERNAL_IP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see the nginx default page! Don't delete the playground yet; we're going to use it to confirm that the Cilium Gateway API is working.&lt;/p&gt;

&lt;h2&gt;
  
  
  Test Cilium's Gateway API
&lt;/h2&gt;

&lt;p&gt;We configured Cilium to provide Gateway API services to the cluster when we installed it. Let's confirm that it's working correctly.&lt;/p&gt;

&lt;p&gt;Make a new &lt;code&gt;gateway-tests&lt;/code&gt; directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a Gateway
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;gateway-tests/01-create-gateway.yaml&lt;/code&gt;. For ease of testing, we're going to configure the gateway so routes from any namespace can attach to it. We're also assigning it a specific IP address so we can give it a stable FQDN; we don't want that changing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-gateway&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IPAddress&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.0.1.160&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt; &lt;span class="c1"&gt;# Case matters!&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;allowedRoutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;namespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;All&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Update the &lt;code&gt;playground-nginx-service&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Before we can add an &lt;code&gt;HTTPRoute&lt;/code&gt;, we need to update the &lt;code&gt;playground-nginx-service&lt;/code&gt; so it isn't a &lt;code&gt;LoadBalancer&lt;/code&gt;. Create &lt;code&gt;gateway-tests/02-playground-service.yaml&lt;/code&gt; with the following contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-service&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-pod&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt; &lt;span class="c1"&gt;# This is what the service is listening on, and what will be routed to&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt; &lt;span class="c1"&gt;# Port the pods are listening on, don't route directly here!&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create an HTTPRoute
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;gateway-tests/03-http-route.yaml&lt;/code&gt; to route all incoming requests for &lt;code&gt;ip-160.mydomain.com&lt;/code&gt; to our &lt;code&gt;playground-nginx-service&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPRoute&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-http-route&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;parentRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-gateway&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;hostnames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ip-160.mydomain.com"&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backendRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;playground-nginx-service&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
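
&lt;p&gt;&lt;code&gt;ip-160.mydomain.com&lt;/code&gt; needs to resolve to the gateway's IP before the curl test below will work. If you don't want to touch real DNS yet, a hosts-file entry on the machine you'll be testing from is enough (a sketch using the example IP and hostname from this post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;echo "10.0.1.160 ip-160.mydomain.com" | sudo tee -a /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;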



&lt;h3&gt;
  
  
  Create the Gateway test resources
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; gateway-tests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
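
&lt;p&gt;Before curling, you can check that Cilium accepted the Gateway and that the route attached; the &lt;code&gt;PROGRAMMED&lt;/code&gt; column on the Gateway should turn &lt;code&gt;True&lt;/code&gt; once Cilium has picked it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get gateway playground-gateway
kubectl get httproute -n playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;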



&lt;h3&gt;
  
  
  Confirm it worked
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://ip-160.mydomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should display a "Welcome to nginx!" document that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a href="http://nginx.org/"&amp;gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a href="http://nginx.com/"&amp;gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you for using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congrats, you have a working Talos cluster.&lt;/p&gt;

&lt;p&gt;I will cover setting up SSL by creating certificates with cert-manager, LetsEncrypt and Route 53 in a follow-up post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding a worker node
&lt;/h2&gt;

&lt;p&gt;I originally planned to make that another post, but it's only two steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create another VM
&lt;/h3&gt;

&lt;p&gt;The Talos website recommends at least 2 CPUs and 2 GB of RAM. I set mine to 50 GB of disk. You can use the same ISO you used when creating the control plane node.&lt;/p&gt;

&lt;p&gt;Make sure it has a static IP assignment in DHCP.&lt;/p&gt;

&lt;h3&gt;
  
  
  Add it to the cluster
&lt;/h3&gt;

&lt;p&gt;Create a patch file with the node name you want&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# worker-1-patch.yaml&lt;/span&gt;
&lt;span class="na"&gt;machine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clustername-worker-1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
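
&lt;p&gt;The apply-config command below uses a &lt;code&gt;WORKER_IP&lt;/code&gt; variable; set it to whatever address the new VM picked up (the address here is just an example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Example address; use the IP shown on the worker's Talos dashboard
export WORKER_IP=10.0.1.52
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;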



&lt;p&gt;When the worker node's console shows it's in maintenance mode, you can add it to the cluster with a one-line &lt;code&gt;talosctl&lt;/code&gt; command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl apply-config &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; worker.yaml &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @worker-1-patch.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My proxmox cluster isn't on beefy hardware, so it took a couple of minutes for the new node to join the cluster and start accepting workloads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Problems I ran into
&lt;/h2&gt;

&lt;p&gt;While writing this post, I ran into a couple of problems because I was redoing everything on a fresh single-node cluster and made some mistakes tidying things up for the post.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;code&gt;curl&lt;/code&gt; fails to connect
&lt;/h3&gt;

&lt;p&gt;While testing the &lt;code&gt;LoadBalancer&lt;/code&gt; service, even though the service shows an external IP, testing with &lt;code&gt;curl&lt;/code&gt; gives an error message similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl http://10.0.1.160/
curl: (7) Failed to connect to 10.0.1.160 port 80 after 16 ms: Could not connect to server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And when you check the pods&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;it shows the playground pod as Pending: not crash loop backoff, not creating, just pending.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;NAMESPACE     NAME                                          READY   STATUS    RESTARTS      AGE
playground    pod/playground-nginx-app-6d7ddb5b95-lv82x     0/1     Pending   0             12s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
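
&lt;p&gt;To see why it's stuck, describe the pod (using the pod name from the output above); the Events section will show the scheduler's complaint, usually something like an untolerated control-plane taint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl describe pod -n playground playground-nginx-app-6d7ddb5b95-lv82x
# Look for an Events line similar to:
#   0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;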



&lt;p&gt;When I ran into this while writing this post, it was because I made the test cluster a single-node cluster and forgot to set &lt;code&gt;allowSchedulingOnControlPlanes&lt;/code&gt; to &lt;code&gt;true&lt;/code&gt;, so there was no place to schedule the pods. You can fix this by applying an updated configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;talosctl apply-config &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTROL_PLANE_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--file&lt;/span&gt; controlplane.yaml &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--config-patch&lt;/span&gt; @controlplane-1-patch.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;code&gt;curl&lt;/code&gt; connects, but you get a 404
&lt;/h3&gt;

&lt;p&gt;While testing the gateway, &lt;code&gt;curl&lt;/code&gt; connects to the IP but you get a 404 error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-iv&lt;/span&gt; http://ip-160.mydomain.com/index.html
&lt;span class="k"&gt;*&lt;/span&gt; Host ip-160.mydomain.com:80 was resolved.
&lt;span class="k"&gt;*&lt;/span&gt; IPv6: &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;*&lt;/span&gt; IPv4: 10.0.1.160
&lt;span class="k"&gt;*&lt;/span&gt;   Trying 10.0.1.160:80...
&lt;span class="k"&gt;*&lt;/span&gt; Established connection to ip-160.mydomain.com &lt;span class="o"&gt;(&lt;/span&gt;10.0.1.160 port 80&lt;span class="o"&gt;)&lt;/span&gt; from 10.0.1.121 port 61150
&lt;span class="k"&gt;*&lt;/span&gt; using HTTP/1.x
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; GET /index.html HTTP/1.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Host: ip-160.mydomain.com
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; User-Agent: curl/8.17.0
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="k"&gt;*&lt;/span&gt; Request completely sent off
&amp;lt; HTTP/1.1 404 Not Found
&amp;lt; &lt;span class="nb"&gt;date&lt;/span&gt;: Sun, 04 Jan 2026 00:36:07 GMT
&amp;lt; server: envoy
&amp;lt; content-length: 0
&amp;lt;
&lt;span class="k"&gt;*&lt;/span&gt; Connection &lt;span class="c"&gt;#0 to host ip-160.mydomain.com:80 left intact&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I ran into this during testing because I forgot to update the gateway yaml file to allow all namespaces after I moved all the test manifests into the &lt;code&gt;playground&lt;/code&gt; namespace.&lt;/p&gt;
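
&lt;p&gt;A rough way to check for this - the gateway name and namespace here are placeholders, use your own. The listener needs to allow routes from all namespaces rather than only its own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get gateway my-gateway -n playground -o yaml | grep -B 2 -A 3 allowedRoutes
# To accept HTTPRoutes from any namespace, the listener should contain:
#   allowedRoutes:
#     namespaces:
#       from: All
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;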

&lt;p&gt;Talos reboots so fast that when I make settings changes on a single-node cluster, I always reboot the control plane node with &lt;code&gt;talosctl reboot -n $CONTROL_PLANE_IP&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  You changed cilium settings but they don't appear to have any effect
&lt;/h3&gt;

&lt;p&gt;Some changes to cilium settings require you to restart cilium pods to pick them up.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system rollout restart ds/cilium
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system rollout restart ds/cilium-envoy
kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system rollout restart deployment/cilium-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Tips
&lt;/h2&gt;

&lt;p&gt;If you plan on standing up and tearing down VMs, copy the MAC of the first one (go to the proxmox datacenter UI, select the VM, then select &lt;strong&gt;Hardware&lt;/strong&gt; and double-click &lt;strong&gt;Network Device&lt;/strong&gt; for details) and set each replacement to that MAC. Your DHCP server uses a machine's MAC to determine if it should get a static assignment, so recycling the MAC keeps you from having to update DHCP each time you bring up a new VM.&lt;/p&gt;
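
&lt;p&gt;You can also set the MAC from the Proxmox shell instead of the web UI. This is a sketch - the VM ID, bridge and MAC below are made-up examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Reuse the saved MAC on the replacement VM's first NIC
qm set 101 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;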

&lt;p&gt;This is one of the few times it's a good idea to reuse a MAC - having two VMs or physical machines with the same MAC running simultaneously will cause problems on your network.&lt;/p&gt;

</description>
      <category>cilium</category>
      <category>homelab</category>
      <category>k8s</category>
      <category>talos</category>
    </item>
    <item>
      <title>2023 Hacktoberfest work</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 12 Nov 2023 16:12:26 +0000</pubDate>
      <link>https://dev.to/unixorn/2023-hacktoberfest-work-n26</link>
      <guid>https://dev.to/unixorn/2023-hacktoberfest-work-n26</guid>
      <description>&lt;h3&gt;
  
  
  Intro
&lt;/h3&gt;

&lt;p&gt;I'm an SRE living in Denver.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hacktoberfest 2023
&lt;/h3&gt;

&lt;p&gt;In total, I had 29 PRs merged as a contributor during this year's Hacktoberfest, and I reviewed and merged 17 PRs by other contributors on projects I maintain.&lt;/p&gt;

&lt;h4&gt;
  
  
  Maintenance work
&lt;/h4&gt;

&lt;p&gt;This year I did maintenance work on &lt;a href="https://github.com/unixorn/awesome-zsh-plugins/" rel="noopener noreferrer"&gt;awesome-zsh-plugins&lt;/a&gt;, the &lt;a href="https://github.com/unixorn/sysadmin-reading-list/" rel="noopener noreferrer"&gt;sysadmin-reading-list&lt;/a&gt;, the &lt;a href="https://github.com/unixorn/zsh-quickstart-kit" rel="noopener noreferrer"&gt;zsh-quickstart-kit&lt;/a&gt; and &lt;a href="https://github.com/unixorn/git-extra-commands/" rel="noopener noreferrer"&gt;git-extra-commands&lt;/a&gt;, where I reviewed &amp;amp; merged 17 contributor PRs.&lt;/p&gt;

&lt;h4&gt;
  
  
  New open source contributions
&lt;/h4&gt;

&lt;p&gt;I also wrote &lt;a href="https://github.com/unixorn/prometheus-moosefs-tricorder" rel="noopener noreferrer"&gt;prometheus-moosefs-tricorder&lt;/a&gt;, a prometheus exporter for &lt;a href="https://moosefs.com" rel="noopener noreferrer"&gt;moosefs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And finally, I hit 3260 consecutive days of GitHub contributions during this Hacktoberfest. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://git.io/streak-stats" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fstreak-stats.demolab.com%3Fuser%3Dunixorn%26theme%3Ddark" alt="GitHub Streak" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There's a bug in the streak detection where it occasionally trims 12-18 months off of my streak graphic.&lt;/p&gt;

</description>
      <category>hacktoberfest23</category>
    </item>
    <item>
      <title>Fix Securifi Peanut issue with zigbee2mqtt</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Fri, 07 Oct 2022 07:20:45 +0000</pubDate>
      <link>https://dev.to/unixorn/fix-securifi-peanut-issue-with-zigbee2mqtt-nlc</link>
      <guid>https://dev.to/unixorn/fix-securifi-peanut-issue-with-zigbee2mqtt-nlc</guid>
      <description>&lt;p&gt;Securifi Peanut plugs have issues with zigbee2mqtt.&lt;/p&gt;

&lt;p&gt;I have several Securifi &lt;a href="https://smile.amazon.com/gp/product/B00TC9NC82" rel="noopener noreferrer"&gt;Peanut&lt;/a&gt; Zigbee switches. Overall, they’re nice little smart plugs and make good Zigbee routers to strengthen your Zigbee mesh, but they have one annoying issue - &lt;a href="https://zigbee2mqtt.io" rel="noopener noreferrer"&gt;zigbee2mqtt&lt;/a&gt; doesn’t fully recognize them. There’s a simple fix, which I’m going to document here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Peanut Smart Plugs to zigbee2mqtt
&lt;/h2&gt;

&lt;p&gt;The Peanut Smart Plug's entry in &lt;code&gt;zigbee2mqtt&lt;/code&gt;'s database ends up without a &lt;code&gt;modelId&lt;/code&gt;, so &lt;code&gt;zigbee2mqtt&lt;/code&gt; can't identify it and doesn't know how to handle it. Fortunately, it's an easy fix, though you'll have to do it every time you add a new Peanut plug.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Pair your new Peanut(s) to your &lt;a href="https://zigbee2mqtt.io" rel="noopener noreferrer"&gt;zigbee2mqtt&lt;/a&gt; instance&lt;/li&gt;
&lt;li&gt;Once you've added all the peanut plugs, &lt;em&gt;stop&lt;/em&gt; &lt;code&gt;zigbee2mqtt&lt;/code&gt;. We're going to need to do some surgery on its &lt;code&gt;database.db&lt;/code&gt; file and that can't be done with the service running.&lt;/li&gt;
&lt;li&gt;Backup your &lt;code&gt;database.db&lt;/code&gt; file. If you mess up the edit, you'll want to be able to revert and try again easily.&lt;/li&gt;
&lt;li&gt;Edit your &lt;code&gt;database.db&lt;/code&gt; file. Add &lt;code&gt;"modelId":"PP-WHT-US"&lt;/code&gt; to each of your Peanut entries. For example, change &lt;code&gt;"manufId":4098,&lt;/code&gt; to &lt;code&gt;"manufId":4098,"modelId":"PP-WHT-US",&lt;/code&gt; (see the sketch after this list).
&lt;/li&gt;
&lt;li&gt;Once you've finished editing &lt;code&gt;database.db&lt;/code&gt;, restart the &lt;code&gt;zigbee2mqtt&lt;/code&gt; service.&lt;/li&gt;
&lt;/ol&gt;
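
&lt;p&gt;Here's a rough sketch of steps 3 and 4 as shell commands. The database path is an example - use wherever your &lt;code&gt;zigbee2mqtt&lt;/code&gt; data directory actually lives, and only run the &lt;code&gt;sed&lt;/code&gt; once:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Back up first so you can revert if the edit goes wrong
cp /opt/zigbee2mqtt/data/database.db /opt/zigbee2mqtt/data/database.db.bak

# Add the modelId to every entry with the Securifi manufId in one pass
sed -i 's/"manufId":4098,/"manufId":4098,"modelId":"PP-WHT-US",/g' /opt/zigbee2mqtt/data/database.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;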

&lt;p&gt;You should now see proper entries with capabilities in &lt;code&gt;zigbee2mqtt&lt;/code&gt; and be able to turn the switches on and off, both from &lt;code&gt;zigbee2mqtt&lt;/code&gt; and from Home Assistant. Now would be a good time to go to &lt;code&gt;zigbee2mqtt&lt;/code&gt;'s OTA tab and check if your Peanut plug(s) have any firmware updates.&lt;/p&gt;

</description>
      <category>homeassistant</category>
      <category>zigbee</category>
      <category>zigbee2mqtt</category>
      <category>iot</category>
    </item>
    <item>
      <title>AWS IAM Self Tagging EC2 Instances</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 13 Jun 2021 21:21:03 +0000</pubDate>
      <link>https://dev.to/unixorn/aws-iam-self-tagging-ec2-instances-410n</link>
      <guid>https://dev.to/unixorn/aws-iam-self-tagging-ec2-instances-410n</guid>
      <description>&lt;p&gt;For a variety of reasons, I needed to enable some EC2 instances to write/update a single EC2 tag, but the instaces needed to only be able to tag themselves.&lt;/p&gt;

&lt;p&gt;This was more annoying than I expected, so I'm documenting the IAM policy here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"ec2:DeleteTags"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"ec2:CreateTags"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="s2"&gt;"ec2:DescribeInstances"&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="nl"&gt;"aws:ARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"${ec2:SourceInstanceARN}"&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"ForAllValues:StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                  &lt;/span&gt;&lt;span class="nl"&gt;"aws:TagKeys"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"THAT_ONE_ALLOWED_TAG"&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Some notes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The AWS IAM editor in the webui will complain about SourceInstanceARN. Ignore it and click next anyway.&lt;/li&gt;
&lt;li&gt;Then it will complain that the policy doesn't add any permissions. It lies. Ignore it and save the policy.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can attach this policy to an IAM role and the instances will then be able to tag themselves, but only with the &lt;code&gt;THAT_ONE_ALLOWED_TAG&lt;/code&gt; tag.&lt;/p&gt;
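
&lt;p&gt;As a quick sanity check, here's roughly what tagging yourself looks like from the instance once the role is attached via an instance profile. This sketch assumes the IMDSv1 metadata endpoint is reachable; with IMDSv2 you'd fetch a session token first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Look up our own instance id, then tag ourselves with the one allowed key
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 create-tags --resources "$INSTANCE_ID" --tags Key=THAT_ONE_ALLOWED_TAG,Value=some-value
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;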

</description>
      <category>iam</category>
      <category>ec2</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building Multi Architecture Docker Images with buildx</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 13 Jun 2021 15:37:07 +0000</pubDate>
      <link>https://dev.to/unixorn/building-multi-architecture-docker-images-with-buildx-180</link>
      <guid>https://dev.to/unixorn/building-multi-architecture-docker-images-with-buildx-180</guid>
      <description>&lt;p&gt;I've got a mix of architectures in my basement cluster - some Odroid HC2s that are arm7, some Raspberry Pi 4s that are arm64, and am soon going to add an Intel node as well. It's more hassle than it's worth to have to specify different images for the different architectures. I already build my own copies of images, so I decided to start building all my images as multiarchitecture images.&lt;/p&gt;

&lt;p&gt;This turned out to be a lot easier than I was expecting - recent stable builds (2.0.4.0 (33772) or higher) of Docker Desktop can build for other architectures by running virtual machines in QEMU, so I can do the whole build on my MacBook Pro instead of baking each architecture separately and stitching them together with a manifest file.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the latest Docker Desktop for macOS&lt;/li&gt;
&lt;li&gt;Enable experimental mode either by setting &lt;code&gt;DOCKER_CLI_EXPERIMENTAL=enabled&lt;/code&gt; in your environment or by adding &lt;code&gt;"experimental" : "enabled"&lt;/code&gt; to &lt;code&gt;~/.docker/config.json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Do &lt;code&gt;docker buildx ls&lt;/code&gt; to see the current builders&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker buildx create --name multiarch&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker buildx use multiarch&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;docker buildx inspect --bootstrap&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now you're ready to build. Go into one of your docker projects, then do &lt;code&gt;docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t username/demo --push .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you're not ready to push your image to docker hub, do &lt;code&gt;--load&lt;/code&gt; instead of &lt;code&gt;--push&lt;/code&gt; to have it build the image and copy it out of the buildx system and into your local docker.&lt;/p&gt;
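
&lt;p&gt;Once you've pushed, you can confirm the image really contains all of the architectures you asked for:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Lists every platform present in the pushed manifest
docker buildx imagetools inspect username/demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;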

</description>
      <category>docker</category>
      <category>arm</category>
      <category>kubernetes</category>
      <category>containers</category>
    </item>
    <item>
      <title>Use a Raspberry Pi as a print server</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 06 Jun 2021 20:09:51 +0000</pubDate>
      <link>https://dev.to/unixorn/use-a-raspberry-pi-as-a-print-server-2hdb</link>
      <guid>https://dev.to/unixorn/use-a-raspberry-pi-as-a-print-server-2hdb</guid>
      <description>&lt;p&gt;I have an old HP 4050N. For a variety of reasons, I want to have it behind a print server instead of having our laptops print directly to it. Here's how I set that up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Raspberry Pi (or honestly any Linux box) running &lt;code&gt;docker&lt;/code&gt;. I like the Raspberry Pi and Odroid HC2 for this sort of thing because they have very low power consumption.&lt;/li&gt;
&lt;li&gt;A printer that is supported by &lt;a href="https://www.cups.org/" rel="noopener noreferrer"&gt;CUPS&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;There are several docker images out there you can use; I made one (the source is on github at &lt;a href="https://github.com/unixorn/docker-cupsd" rel="noopener noreferrer"&gt;unixorn/docker-cupsd&lt;/a&gt;) because I wanted one that was multi-architecture - mine has AMD64, ARMv7 and ARM64 all baked into the same image, so you don't have to change the image tag based on what system you're running it on. It works fine on Raspberry Pi, Odroids and Intel servers.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://hub.docker.com/repository/docker/unixorn/cupsd" rel="noopener noreferrer"&gt;unixorn/cupsd&lt;/a&gt; image is a bit on the large side because I crammed a lot of printer drivers into it, you may want to look for docker images that only support single printer families.&lt;/p&gt;

&lt;p&gt;We're going to store &lt;code&gt;printers.conf&lt;/code&gt; in a directory outside the container so that we don't lose our printer configuration every time we upgrade our container.&lt;/p&gt;

&lt;p&gt;I run the &lt;code&gt;cupsd&lt;/code&gt; server on an Odroid HC2 because I have &lt;code&gt;/var/lib/docker&lt;/code&gt; on the 2TB drive attached to it. I could have put it on one of the Raspberry Pis in my cluster, but didn't want it spooling print jobs and causing excessive wear on a rPi's microSD card.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make a directory to store your printer configuration. We'll use &lt;code&gt;/docker/cupsd/etc&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;export CUPSD_DIR='/docker/cupsd/etc'&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;touch $CUPSD_DIR/printers.conf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Run the &lt;code&gt;cupsd&lt;/code&gt; server with
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--restart&lt;/span&gt; unless-stopped &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 631:631 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--privileged&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /var/run/dbus:/var/run/dbus &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /dev/bus/usb:/dev/bus/usb &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CUPSD_DIR&lt;/span&gt;&lt;span class="s2"&gt;/printers.conf:/etc/cups/printers.conf"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  unixorn/cupsd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can now connect to &lt;code&gt;http://SERVER:631&lt;/code&gt; and add printers using the web UI.&lt;/p&gt;

&lt;p&gt;When adding the printers to your Mac, select Internet Printing Protocol and put in the IP or DNS name of your print server machine.&lt;/p&gt;

&lt;p&gt;The queues are &lt;code&gt;printers/printername&lt;/code&gt;, not &lt;code&gt;printername&lt;/code&gt;.&lt;/p&gt;
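
&lt;p&gt;If a client can't see the printer, a rough check is to ask the server which queues it's sharing - replace &lt;code&gt;SERVER&lt;/code&gt; with your print server's hostname or IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# List the queues the remote cupsd is advertising
lpstat -h SERVER:631 -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;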

</description>
      <category>linux</category>
      <category>raspberrypi</category>
    </item>
    <item>
      <title>Home Assistant Printer Power Management</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 06 Jun 2021 19:54:13 +0000</pubDate>
      <link>https://dev.to/unixorn/home-assistant-printer-power-management-40bg</link>
      <guid>https://dev.to/unixorn/home-assistant-printer-power-management-40bg</guid>
      <description>&lt;p&gt;I've got an old HP laser printer in my basement. We barely print 10 pages a month between the two of us, so we only turn it on when we're going to print. That's a hassle though, because inevitably we forget to shut it off sometimes and it stays on overnight or even for days, and while it has a powersave mode, the 4050N is so old that even that burns a good amount of power.&lt;/p&gt;

&lt;p&gt;Enter Home Assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
  You have HA configured to connect to a MQTT server
&lt;/h3&gt;

&lt;p&gt;The watcher script and associated tooling all presume that we can send messages to a MQTT topic that HA is watching.&lt;/p&gt;
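
&lt;p&gt;A quick way to confirm messages are reaching the broker, using the same containerized mosquitto approach as the helper script below - the hostname and topic here are examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Subscribe to everything under hass/ and print topic + payload as it arrives
docker run -it --rm eclipse-mosquitto mosquitto_sub -h mqtt.example.com -t 'hass/#' -v
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;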

&lt;h3&gt;
  
  
  Your printer is connected to a cupsd server running in a container
&lt;/h3&gt;

&lt;p&gt;Your computers should be configured to print to the cupsd server instead of directly to the printer.&lt;/p&gt;

&lt;p&gt;I run &lt;code&gt;cupsd&lt;/code&gt; in a container on one of my Odroids. I could run it on the same Odroid HC2 that I run Home Assistant (HA) on, but there's no compelling reason to do so and I'm reserving that node strictly for HA containers like Home Assistant itself and my MQTT server. I picked an Odroid because it has a SATA drive attached and my &lt;code&gt;/var/lib/docker&lt;/code&gt; is on the hard drive and not a microSD card - there's no reason you can't run it on a Raspberry Pi other than to prevent excessive wear on the microSD card.&lt;/p&gt;

&lt;p&gt;You could modify the watcher script if you're running &lt;code&gt;cupsd&lt;/code&gt; directly instead of in a container, but I run my &lt;code&gt;cupsd&lt;/code&gt; in a container, so that's what the script is designed for.&lt;/p&gt;

&lt;p&gt;There are plenty of articles about setting up &lt;code&gt;cupsd&lt;/code&gt;, but I wrote about setting up &lt;code&gt;cupsd&lt;/code&gt; &lt;a href="https://unixorn.github.io/post/cupsd-setup/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Your printer is plugged into an outlet controlled by HA
&lt;/h3&gt;

&lt;p&gt;We want to be able to toggle the power from Home Assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Printer Power Control
&lt;/h2&gt;

&lt;h3&gt;
  
  
  mosquitto helper script
&lt;/h3&gt;

&lt;p&gt;I don't like to install anything more on my docker hosts than I absolutely have to, so instead of installing mosquitto directly on the printserver machine, I run &lt;code&gt;mosquitto_pub&lt;/code&gt; inside a container with the following &lt;code&gt;c-mosquitto_pub&lt;/code&gt; helper script. You can download it from github &lt;a href="https://github.com/unixorn/blog-scripts/blob/master/cupsd-hass/c-mosquitto_pub" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Put this in &lt;code&gt;/usr/local/bin&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Use docker to run mosquitto_pub&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Copyright 2019, Joe Block &amp;lt;jpb@unixorn.net&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Licensed under the Apache License, Version 2.0 (the "License");&lt;/span&gt;
&lt;span class="c"&gt;# you may not use this file except in compliance with the License.&lt;/span&gt;
&lt;span class="c"&gt;# You may obtain a copy of the License at&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;#     http://www.apache.org/licenses/LICENSE-2.0&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Unless required by applicable law or agreed to in writing, software&lt;/span&gt;
&lt;span class="c"&gt;# distributed under the License is distributed on an "AS IS" BASIS,&lt;/span&gt;
&lt;span class="c"&gt;# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.&lt;/span&gt;
&lt;span class="c"&gt;# See the License for the specific language governing permissions and&lt;/span&gt;
&lt;span class="c"&gt;# limitations under the License.&lt;/span&gt;

&lt;span class="nb"&gt;exec &lt;/span&gt;docker run &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; eclipse-mosquitto mosquitto_pub &lt;span class="nv"&gt;$@&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  cupsd Watcher
&lt;/h3&gt;

&lt;p&gt;Once I had cupsd configured to share the printer (as Franklin), I wrote a quick script that checks the print queue to see if it is empty or not. If there are jobs in the queue, it writes &lt;strong&gt;ON&lt;/strong&gt; to an MQTT topic, &lt;code&gt;hass/printers/franklin&lt;/code&gt;. If the queue is empty, it writes &lt;strong&gt;OFF&lt;/strong&gt;. The examples here all assume your printer is named Franklin; replace Franklin with your printer's name.&lt;/p&gt;

&lt;p&gt;Actually, I lied. When there are jobs, it writes &lt;strong&gt;OFF&lt;/strong&gt; and &lt;em&gt;then&lt;/em&gt; &lt;strong&gt;ON&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Why? Because I don't want HA to switch the printer off immediately once the queue drains - the printer has enough RAM that there may still be several pages left to print when it has accepted all of the job from the server.&lt;/p&gt;

&lt;p&gt;Instead, I've configured HA to restart a timer every time it sees the MQTT topic &lt;code&gt;hass/printers/franklin&lt;/code&gt; switch from &lt;strong&gt;OFF&lt;/strong&gt; to &lt;strong&gt;ON&lt;/strong&gt;, and only turn the printer off after the queue has been empty for five continuous minutes.&lt;/p&gt;

&lt;p&gt;Here's the &lt;code&gt;ha-check-for-print-jobs&lt;/code&gt; script source - you can download it from github &lt;a href="https://github.com/unixorn/blog-scripts/blob/master/cupsd-hass/ha-check-for-print-jobs" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Put the script in &lt;code&gt;/usr/local/bin&lt;/code&gt; on the same server you're running the cupsd container on - it is designed to run a tool inside that container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# ha-check-for-print-jobs&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Check if there are print jobs on $PRINT_Q. If there are, write&lt;/span&gt;
&lt;span class="c"&gt;# MQTT messages to a watched topic so HA knows to turn on the power&lt;/span&gt;
&lt;span class="c"&gt;# to the printer.&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Copyright 2021, Joe Block &amp;lt;jpb@unixorn.net&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# License: Apache 2&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; pipefail

&lt;span class="c"&gt;# Make all these overridable easily in your cron setup&lt;/span&gt;
&lt;span class="nv"&gt;PRINT_Q&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PRINT_Q&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'Franklin'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;CONTAINER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONTAINER&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'cupsd-server'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;MQTT_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MQTT_HOST&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'mqtt.example.com'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;MQTT_TOPIC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MQTT_TOPIC&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'hass/printers/franklin'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# We are run out of cron every minute, but I don't want it to take an&lt;/span&gt;
&lt;span class="c"&gt;# entire minute to turn on the power because I'm impatient and the printer&lt;/span&gt;
&lt;span class="c"&gt;# takes a bit to start up. When we print and walk downstairs, I want it&lt;/span&gt;
&lt;span class="c"&gt;# to have already started printing by the time I get there. If I was&lt;/span&gt;
&lt;span class="c"&gt;# patient, I wouldn't have bothered to write this tool :-)&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# So, when we get run by cron, we check the queue CHECK_COUNT times, with&lt;/span&gt;
&lt;span class="c"&gt;# CHECK_DELAY seconds between each run.&lt;/span&gt;
&lt;span class="nv"&gt;CHECK_COUNT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHECK_COUNT&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'11'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;span class="nv"&gt;CHECK_DELAY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CHECK_DELAY&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'5'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PATH&lt;/span&gt;&lt;span class="s2"&gt;:/usr/local/bin:/usr/local/sbin"&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /tmp/printerdebug &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nv"&gt;DEBUG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'true'&lt;/span&gt;
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Only spam syslog when DEBUG is set&lt;/span&gt;
debugout&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEBUG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

validate-settings&lt;span class="o"&gt;(){&lt;/span&gt;
  debugout &lt;span class="s2"&gt;"CONTAINER: &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  debugout &lt;span class="s2"&gt;"PRINT_Q: &lt;/span&gt;&lt;span class="nv"&gt;$PRINT_Q&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  debugout &lt;span class="s2"&gt;"MQTT_HOST: &lt;/span&gt;&lt;span class="nv"&gt;$MQTT_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  debugout &lt;span class="s2"&gt;"MQTT_TOPIC: &lt;/span&gt;&lt;span class="nv"&gt;$MQTT_TOPIC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

  &lt;span class="nv"&gt;valid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'true'&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"CONTAINER is unset"&lt;/span&gt;
    &lt;span class="nv"&gt;valid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'false'&lt;/span&gt;
  &lt;span class="k"&gt;fi
  if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PRINT_Q&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"PRINT_Q is unset"&lt;/span&gt;
    &lt;span class="nv"&gt;valid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'false'&lt;/span&gt;
  &lt;span class="k"&gt;fi
  if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"MQTT_HOST is unset"&lt;/span&gt;
    &lt;span class="nv"&gt;valid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'false'&lt;/span&gt;
  &lt;span class="k"&gt;fi
  if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_TOPIC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"MQTT_TOPIC is unset"&lt;/span&gt;
    &lt;span class="nv"&gt;valid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'false'&lt;/span&gt;
  &lt;span class="k"&gt;fi
  if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$valid&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"false"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Configure your settings."&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

print-job-checker&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  &lt;span class="nv"&gt;printjobs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; lpq &lt;span class="nt"&gt;-P&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PRINT_Q&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'no entries'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$printjobs&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s1"&gt;'1'&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;debugout &lt;span class="s2"&gt;"No jobs in print queue, notifying HA"&lt;/span&gt;
    c-mosquitto_pub &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_TOPIC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; OFF
  &lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"jobs found in print queue, notifying HA"&lt;/span&gt;

    docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; lpq &lt;span class="nt"&gt;-P&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PRINT_Q&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# Set the status off, then back to on, so that the HA timer restarts&lt;/span&gt;
    &lt;span class="c"&gt;# and HA doesn't turn off the printer in the middle of a job&lt;/span&gt;
    c-mosquitto_pub &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_TOPIC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; OFF
    c-mosquitto_pub &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_HOST&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$MQTT_TOPIC&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; ON
    debugout &lt;span class="s2"&gt;"re-enabling printer &lt;/span&gt;&lt;span class="nv"&gt;$PRINT_Q&lt;/span&gt;&lt;span class="s2"&gt;..."&lt;/span&gt;
    docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; lpadmin &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PRINT_Q&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; printer-error-policy&lt;span class="o"&gt;=&lt;/span&gt;retry-current-job
  &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

validate-settings

&lt;span class="c"&gt;# We run the print-job-checker every 5 seconds to minimize the UI delay on the&lt;/span&gt;
&lt;span class="c"&gt;# macOs end&lt;/span&gt;

&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq&lt;/span&gt; &lt;span class="nv"&gt;$CHECK_COUNT&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;do
  &lt;/span&gt;print-job-checker
  debugout &lt;span class="s2"&gt;"waiting..."&lt;/span&gt;
  &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="nv"&gt;$CHECK_DELAY&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
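
&lt;p&gt;One way to install it with sane permissions (the path is the same &lt;code&gt;/usr/local/bin&lt;/code&gt; used above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sudo install -m 0755 ha-check-for-print-jobs /usr/local/bin/ha-check-for-print-jobs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;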



&lt;h3&gt;
  
  
  Home Assistant Setup
&lt;/h3&gt;

&lt;p&gt;I configured my HA to watch a MQTT topic as a binary sensor. You can download this snippet &lt;a href="https://github.com/unixorn/blog-scripts/blob/master/cupsd-hass/printer-binary-sensor.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;binary_sensor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mqtt&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Franklin&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Print&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Queue"&lt;/span&gt;
    &lt;span class="na"&gt;payload_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ON"&lt;/span&gt;
    &lt;span class="na"&gt;state_topic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hass/printers/franklin"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when the watcher writes &lt;strong&gt;ON&lt;/strong&gt; and &lt;strong&gt;OFF&lt;/strong&gt; to the &lt;code&gt;hass/printers/franklin&lt;/code&gt; queue, that binary sensor will change status and we can trigger an automation for it.&lt;/p&gt;

&lt;p&gt;This automation will turn the printer power on every time the binary sensor is turned on, and turn it off five minutes after the last time the binary sensor switched from &lt;strong&gt;ON&lt;/strong&gt; to &lt;strong&gt;OFF&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The outlet my printer is plugged into is controlled by HA and rather unimaginatively named &lt;code&gt;switch.printerpower&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Add this stanza to your automations.yaml file. Download it &lt;a href="https://github.com/unixorn/blog-scripts/blob/master/cupsd-hass/printer-automations.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Franklin power is controlled by MQTT&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alias&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Turn&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;on&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Franklin&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;when&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;there&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;are&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;jobs&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;queue'&lt;/span&gt;
    &lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;state&lt;/span&gt;
      &lt;span class="na"&gt;entity_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;binary_sensor.franklin_print_queue&lt;/span&gt;
      &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;on'&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;homeassistant.turn_on&lt;/span&gt;
      &lt;span class="na"&gt;entity_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;switch.printerpower&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;alias&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Turn&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;off&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;printer&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;5&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;minutes&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;after&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;print&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;queue&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;drains'&lt;/span&gt;
    &lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;state&lt;/span&gt;
      &lt;span class="na"&gt;entity_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;binary_sensor.franklin_print_queue&lt;/span&gt;
      &lt;span class="na"&gt;to&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;off'&lt;/span&gt;
      &lt;span class="na"&gt;for&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;minutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
    &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;homeassistant.turn_off&lt;/span&gt;
      &lt;span class="na"&gt;entity_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;switch.printerpower&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test the pieces
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Print server check
&lt;/h4&gt;

&lt;p&gt;Confirm that you've got the print queue configured correctly by running &lt;code&gt;docker exec -it cupsd-server lpq -P Franklin&lt;/code&gt;. If there are no jobs, it should print something like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Franklin is ready
no entries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Automation test
&lt;/h4&gt;

&lt;p&gt;Reload your automations, and you should now be able to test that the automations are correct by running &lt;code&gt;c-mosquitto_pub -h mqtt.yourdomain.com -t hass/printers/franklin -m OFF&lt;/code&gt; or &lt;code&gt;-m ON&lt;/code&gt; and watch HA turn the power to your printer off and on.&lt;/p&gt;

&lt;p&gt;Once that is working, print a job, and if you run &lt;code&gt;ha-check-for-print-jobs&lt;/code&gt; the printer power should get turned on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Run it all automatically
&lt;/h3&gt;

&lt;p&gt;Now that you've confirmed that the power is being cycled properly when the MQTT queue receives messages and that the print job checker is seeing the printer queue, we can add the checker job to cron.&lt;/p&gt;

&lt;p&gt;Add&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="nv"&gt;PRINT_Q&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Franklin &lt;span class="nv"&gt;MQTT_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mqtt.example.com &lt;span class="nv"&gt;MQTT_TOPIC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;hass/printers/franklin &lt;span class="nv"&gt;CONTAINER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cupsd_server /usr/local/bin/ha-check-for-print-jobs | logger &lt;span class="nt"&gt;-t&lt;/span&gt; printserver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to your &lt;code&gt;/etc/crontab&lt;/code&gt;, and you're good to go. Now every minute, the checker script will get run by &lt;code&gt;cron&lt;/code&gt;, and it will check every five seconds for print jobs and exit before the next invocation by &lt;code&gt;cron&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>homeassistant</category>
      <category>linux</category>
      <category>mqtt</category>
      <category>printer</category>
    </item>
    <item>
      <title>Setting up Shinobi and a Wyze G2 Camera</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Sun, 14 Mar 2021 08:59:26 +0000</pubDate>
      <link>https://dev.to/unixorn/setting-up-shinobi-and-a-wyze-g2-camera-5bh8</link>
      <guid>https://dev.to/unixorn/setting-up-shinobi-and-a-wyze-g2-camera-5bh8</guid>
      <description>&lt;p&gt;I wanted to set up a security camera outside, but I didn't want to be dependent on an outside cloud service - if my internet goes out, I don't want to lose my ability to record footage.&lt;/p&gt;

&lt;p&gt;Wyze cameras are nice and cheap, and you can reflash them to support RTSP in addition to streaming to the Wyze cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;A camera that supports the RTSP protocol (I'm using a Wyze G2)&lt;/li&gt;
&lt;li&gt;A spare microSD card for reflashing the Wyze G2 camera&lt;/li&gt;
&lt;li&gt;An x86 machine running docker. As of 2021-03-14, Shinobi only publishes an amd64 version of the &lt;a href="https://hub.docker.com/r/shinobisystems/shinobi" rel="noopener noreferrer"&gt;shinobisystems/shinobi&lt;/a&gt; docker image.&lt;/li&gt;
&lt;li&gt;A reasonable amount of disk space - the Wyze G2 I'm using generates around 330 megs per hour of stored 1080p video.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Camera Setup
&lt;/h3&gt;

&lt;p&gt;I wanted a camera that supported the Real Time Streaming Protocol (RTSP) because that is an open standard which works with a wide variety of tooling, both Open Source and commercial.&lt;/p&gt;

&lt;p&gt;I looked at a variety of camera options, and Wirecutter's &lt;a href="https://www.nytimes.com/wirecutter/reviews/best-wi-fi-home-security-camera/" rel="noopener noreferrer"&gt;Best Wifi Home Security Camera&lt;/a&gt; listed the Wyze G2 as runner-up. It and the first choice (Eufy 2K Indoor cam) both support RTSP, but the Wyze was in stock (and half the price at $26) so I went with it.&lt;/p&gt;

&lt;p&gt;I did have to reflash the Wyze G2 to enable a beta firmware that supports both Wyze's cloud and RTSP. Conveniently, it can stream to both simultaneously, so I can watch the streams with the Wyze app when away from home and still record everything to my homelab cluster.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reflash the Wyze Camera
&lt;/h4&gt;

&lt;p&gt;Wyze now has a beta firmware that simultaneously supports both their cloud offering and RTSP. Note that they've explicitly stated that the RTSP branch will get features later than the mainline firmware. I personally don't care, but it is something to consider if you're going to want to use bleeding edge features.&lt;/p&gt;

&lt;p&gt;The official instructions for reflashing the G2 camera are on the &lt;a href="https://wyzelabs.zendesk.com/hc/en-us/articles/360026245231-Wyze-Cam-RTSP" rel="noopener noreferrer"&gt;Wyze Cam RTSP page&lt;/a&gt; and are clear, so I'm not going to rehash them here. You'll need a FAT32-formatted microSD card to do the firmware reflash.&lt;/p&gt;

&lt;p&gt;After you reflash the camera, you'll need to configure a username/password combination for the camera stream using the Wyze phone app.&lt;/p&gt;

&lt;p&gt;Before you configure the camera, I recommend that you go into your router's configuration and assign the camera a static IP so that your DVR doesn't lose the stream connection when the camera or router are rebooted. You can also hardcode an IP address into the G2 camera's configuration, but I prefer to keep all the static IP assignments for my network in one place, the DHCP configuration on my router.&lt;/p&gt;

&lt;p&gt;You'll end up with a rtsp url that looks like &lt;code&gt;rtsp://username:password@192.168.1.CAMERAIP/live&lt;/code&gt;. &lt;/p&gt;
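
&lt;p&gt;Before wiring it into a DVR, it's worth confirming the stream actually works. A rough check from any machine with ffmpeg installed (the URL is the one from above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# If ffprobe can read the stream, your DVR should be able to as well
ffprobe -v error -show_streams "rtsp://username:password@192.168.1.CAMERAIP/live"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;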

&lt;h3&gt;
  
  
  DVR Setup
&lt;/h3&gt;

&lt;p&gt;I don't use any IOT devices that require a cloud service to function. In this case, I especially do not want to be unable to record security footage just because the internet is down, so I set up &lt;a href="https://shinobi.video/" rel="noopener noreferrer"&gt;shinobi&lt;/a&gt; as a local DVR to record my security footage.&lt;/p&gt;

&lt;h4&gt;
  
  
  Start shinobi
&lt;/h4&gt;

&lt;p&gt;I'm running shinobi in a docker container. As of 2021-03-14, there is only an AMD64 build of this docker image so I'm running it on an Intel machine in my homelab cluster.&lt;/p&gt;

&lt;p&gt;Here's a &lt;a href="https://github.com/unixorn/blog-scripts/blob/master/shinobi/shinobi-start" rel="noopener noreferrer"&gt;shinobi-start&lt;/a&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Start shinobi&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# Copyright 2021, Joe Block &amp;lt;jpb@unixorn.net&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;#&lt;/span&gt;
&lt;span class="c"&gt;# License: Apache 2.0&lt;/span&gt;

&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="s1"&gt;'/data/shinobi'&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; pipefail
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$DEBUG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-x&lt;/span&gt;
&lt;span class="k"&gt;fi

for &lt;/span&gt;dvr_d &lt;span class="k"&gt;in &lt;/span&gt;config customAutoLoad database plugins videos
&lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SHINOBI_D&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$dvr_d&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done

&lt;/span&gt;&lt;span class="nb"&gt;exec &lt;/span&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 8080:8080 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'shinobi'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/config:/config:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/customAutoLoad:/home/Shinobi/libs/customAutoLoad:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/database:/var/lib/mysql:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/plugins:/plugins:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SHINOBI_D&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/videos:/home/Shinobi/videos:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /dev/shm/Shinobi/streams:/dev/shm/streams:rw &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /etc/localtime:/etc/localtime:ro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /etc/timezone:/etc/timezone:ro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--restart&lt;/span&gt; always &lt;span class="se"&gt;\&lt;/span&gt;
  shinobisystems/shinobi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run this with &lt;code&gt;SHINOBI_D=/path/to/local/dvr/files shinobi-start&lt;/code&gt; and it will create any missing required directories for you and start shinobi.&lt;/p&gt;

&lt;h4&gt;
  
  
  Configure shinobi
&lt;/h4&gt;

&lt;h5&gt;
  
  
  First, set up a new admin account
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Login at &lt;a href="http://your.shinobi.server:8080/super" rel="noopener noreferrer"&gt;http://your.shinobi.server:8080/super&lt;/a&gt; with username &lt;a href="mailto:admin@shinobi.video"&gt;admin@shinobi.video&lt;/a&gt; and password admin&lt;/li&gt;
&lt;li&gt;Create a new admin account&lt;/li&gt;
&lt;li&gt;Don't forget to reset the password for the &lt;a href="mailto:admin@shinobi.video"&gt;admin@shinobi.video&lt;/a&gt; account!&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;
  
  
  Add the camera
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Log in at &lt;a href="http://your.shinobi.server:8080" rel="noopener noreferrer"&gt;http://your.shinobi.server:8080&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click on the &lt;strong&gt;+&lt;/strong&gt; icon in the toolbar at the top of the page&lt;/li&gt;
&lt;li&gt;Set mode to record&lt;/li&gt;
&lt;li&gt;Change the name to something human-friendly like "Mailbox Camera"&lt;/li&gt;
&lt;li&gt;Set input type (in the connection section) to &lt;code&gt;H.264 / H.265 / H.265+&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Set the full URL path to the RTSP stream URL you got from the camera&lt;/li&gt;
&lt;li&gt;Optionally set Skip Ping to Yes&lt;/li&gt;
&lt;li&gt;Set Stream Type to &lt;code&gt;HLS (includes audio)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Set Record File Type to &lt;code&gt;MP4&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Set Video codec to &lt;code&gt;copy&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Set Audio Codec to &lt;code&gt;Auto&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Save&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can optionally set retention times for the camera data.&lt;/p&gt;

&lt;p&gt;It took about 30-45 seconds before my camera stream was visible in shinobi.&lt;/p&gt;
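
&lt;p&gt;If the stream never shows up, verify the RTSP URL outside of shinobi first. Here's a minimal check with &lt;code&gt;ffprobe&lt;/code&gt; (ships with ffmpeg); the URL below is a placeholder, so substitute your camera's credentials and stream path:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# A working URL prints stream details; a bad one errors out quickly
ffprobe -rtsp_transport tcp -v error -show_streams \
  "rtsp://username:password@camera.ip.address:554/stream1"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;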

</description>
      <category>shinobi</category>
      <category>dvr</category>
      <category>docker</category>
      <category>iot</category>
    </item>
    <item>
      <title>Growing EBS Volumes in Place</title>
      <dc:creator>Joe Block</dc:creator>
      <pubDate>Tue, 20 Aug 2019 13:10:27 +0000</pubDate>
      <link>https://dev.to/unixorn/growing-ebs-volumes-in-place-3kd5</link>
      <guid>https://dev.to/unixorn/growing-ebs-volumes-in-place-3kd5</guid>
      <description>&lt;p&gt;Yesterday I had to grow a live filesystem on a server in EC2, without downtime. I do this just infrequently enough to not &lt;em&gt;quite&lt;/em&gt; remember all the details without poking around the internet, so I’m documenting it all in one place.&lt;/p&gt;

&lt;h2&gt;
  
  
  Grow the volume
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log into the EC2 console and find your instance. In the Description tab, look at the block devices (bottom right as of August 2019), find the volume you need to grow, and note its volume ID.&lt;/li&gt;
&lt;li&gt;Find that volume in the EBS volumes list. Now is a good time to name it something useful like "InstanceName /data01" if you haven't already named it.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Modify Volume&lt;/strong&gt;, then give it a new size. It may take a minute or two for the volume to finish growing; a percentage progress indicator is displayed. (If you'd rather use the CLI, see the sketch after this list.)&lt;/li&gt;
&lt;/ol&gt;
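
&lt;p&gt;If you'd rather not click through the console, the same resize can be done with the AWS CLI. This is only a sketch: the volume ID and size below are placeholders, and it assumes your credentials are already configured.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Grow the volume to 200 GiB (volume ID and size are placeholders)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200

# Watch progress until the modification reports optimizing or completed
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;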

&lt;h2&gt;
  
  
  Resize the filesystem
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log into the instance and start a &lt;code&gt;tmux&lt;/code&gt; or &lt;code&gt;screen&lt;/code&gt; session to do all the work in. Getting disconnected in the middle of resizing the filesystem would be bad.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;lsblk&lt;/code&gt; to confirm that the EBS block device has increased to the size you expect.&lt;/li&gt;
&lt;li&gt;If you have partitioned your drive, do &lt;code&gt;sudo growpart /dev/xyz 1&lt;/code&gt; (the disk device followed by the partition number) to grow the partition.&lt;/li&gt;
&lt;li&gt;Check &lt;code&gt;/etc/fstab&lt;/code&gt; to see what format the filesystem is.&lt;/li&gt;
&lt;li&gt;If you're using &lt;strong&gt;xfs&lt;/strong&gt;, do &lt;code&gt;sudo xfs_growfs /path/to/mountpoint&lt;/code&gt; (the mount point is the documented argument). If you're using &lt;strong&gt;ext2&lt;/strong&gt;, &lt;strong&gt;ext3&lt;/strong&gt; or &lt;strong&gt;ext4&lt;/strong&gt;, do &lt;code&gt;sudo resize2fs /dev/DEVICE&lt;/code&gt;. If you're using &lt;strong&gt;ext2&lt;/strong&gt; or &lt;strong&gt;ext3&lt;/strong&gt;, seriously consider replacing that filesystem with &lt;strong&gt;ext4&lt;/strong&gt; during your next downtime window. (See the sketch after this list for a combined example.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wait&lt;/strong&gt;. Depending on how much larger the EBS volume has become and the instance type, it can take several minutes for the filesystem to finish growing.&lt;/li&gt;
&lt;li&gt;Confirm the new size with &lt;code&gt;df&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
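
&lt;p&gt;Putting the instance-side steps together, here's a rough sketch. The device names, partition number, and mount point are placeholders; run only the branch that matches your filesystem.&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Confirm the block device grew to the size you expect
lsblk

# Only if the drive is partitioned: grow partition 1 on the disk
sudo growpart /dev/xvda 1

# xfs filesystems: grow using the mount point
sudo xfs_growfs /data01

# ext2/ext3/ext4 filesystems: grow using the device
sudo resize2fs /dev/xvda1

# Confirm the new size
df -h /data01
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;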

</description>
      <category>aws</category>
      <category>ebs</category>
      <category>linux</category>
    </item>
  </channel>
</rss>
