<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 3deep5me</title>
    <description>The latest articles on DEV Community by 3deep5me (@3deep5me).</description>
    <link>https://dev.to/3deep5me</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1832602%2Fb9fa1622-2ba3-4492-ac74-770f7ee44b6b.jpeg</url>
      <title>DEV Community: 3deep5me</title>
      <link>https://dev.to/3deep5me</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/3deep5me"/>
    <language>en</language>
    <item>
      <title>From Zero to Scale: Proxmox Kubernetes Engine</title>
      <dc:creator>3deep5me</dc:creator>
      <pubDate>Fri, 02 May 2025 13:53:31 +0000</pubDate>
      <link>https://dev.to/3deep5me/from-zero-to-scale-kubernetes-on-proxmox-the-scaling-autopilot-method-1l64</link>
      <guid>https://dev.to/3deep5me/from-zero-to-scale-kubernetes-on-proxmox-the-scaling-autopilot-method-1l64</guid>
      <description>&lt;p&gt;Today, we'll take a beginner-friendly look at housing Kubernetes in Proxmox. But instead of the traditional SSH/Ansible approach, we'll explore a method akin to what you'd find with AWS, Azure, or GCP. This means we're talking about scaling from tens to hundreds of Kubernetes clusters in minutes, with automated, reproducible cluster creation and upgrades.&lt;/p&gt;

&lt;p&gt;Does that sound like it requires heavy modifications to your Proxmox hosts or datacenter? I can reassure you: I dislike straying far from default settings, so &lt;strong&gt;you won't need to modify your Proxmox installation in any way&lt;/strong&gt;. It's simply a virtual machine, allowing you to add and remove it like a plugin. (More on the architecture later.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do You Need This?
&lt;/h2&gt;

&lt;p&gt;I don't need this personally — it's just a bit of fun to replicate big cloud provider functionality in my tiny homelab!&lt;/p&gt;

&lt;p&gt;However, if your company lacks a scalable Kubernetes platform, you might find it tough to keep up in today's service-oriented world. With major cloud providers dominating, efficiently managing a private cloud is more crucial than ever, and Kubernetes is one of the most popular cloud tools. So, how can you compete? The answer lies in two of my favorite open-source projects: Cluster-API and Proxmox.&lt;/p&gt;

&lt;h3&gt;
  
  
  Proxmox
&lt;/h3&gt;

&lt;p&gt;Proxmox, developed in Vienna, is widely known in the open-source and home-lab communities. It gained a huge boost in small to medium-scale private clouds after VMware's new pricing model alienated its customers. It's a simple yet powerful open-source hypervisor based on KVM, and it's been the core of my home lab for nearly a decade (I've been using it since PVE 4).&lt;/p&gt;

&lt;h3&gt;
  
  
  Cluster-API
&lt;/h3&gt;

&lt;p&gt;While Cluster API might be less known in the home-lab community, it's highly valued in enterprises and by Kubernetes administrators and enthusiasts. So, what exactly is Cluster API?&lt;/p&gt;

&lt;p&gt;Cluster API is a project under the Cloud Native Computing Foundation (CNCF), strongly supported by the Kubernetes community and various vendors, including VMware, Apple, and NVIDIA. This project offers a unified way to create and manage Kubernetes clusters across different "providers," such as Proxmox or VMware. For instance, VMware heavily leverages Cluster API in its commercial product, Tanzu.&lt;/p&gt;

&lt;p&gt;Cluster API currently boasts over 30 infrastructure providers, with Proxmox being just one of them. In short, Cluster API provides a unified API and method for creating production-ready Kubernetes clusters across numerous providers. It has become, at least for me, the de facto standard for multi-cloud and on-premises Kubernetes deployments.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtlnxjzkz6m11w5rlrsv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtlnxjzkz6m11w5rlrsv.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me introduce you to the architecture we will use for our setup. Of course, we need at least one Proxmox host to rely on; I'll be using the newest version. On top of Proxmox, we will have our main component: the Management VM. The Management VM will house Cluster API.&lt;/p&gt;

&lt;p&gt;The Management VM is then responsible for the Kubernetes clusters we want to create. For example, the Management VM automatically coordinates the creation and updates of Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;The Management VM achieves this by instructing Proxmox to automatically create and remove specially configured VMs on your Proxmox infrastructure. The process in detail would look like this:&lt;/p&gt;

&lt;p&gt;We will give our Management VM the order to create, for instance, a 12-node Kubernetes cluster. The Management VM, or more precisely, Cluster API inside the Management VM, will communicate with Proxmox. Cluster API will then instruct Proxmox to provision 12 VMs from a special Kubernetes VM template. After the successful creation of the VMs, Cluster API configures the cluster. Additionally, Cluster API will monitor the VMs and can replace them automatically upon failure.&lt;/p&gt;
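&lt;p&gt;As a rough sketch, "giving the order" later boils down to applying a single manifest to the Management VM. The field layout below follows the Cluster API topology (ClusterClass) format, but the class name and version values are illustrative placeholders, not taken from this guide:&lt;/p&gt;

```yaml
# cluster-order.yaml - illustrative sketch only; the class name and
# version are hypothetical placeholders for your own setup.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster
spec:
  topology:
    class: my-proxmox-clusterclass   # a ClusterClass installed on the Management VM
    version: v1.33.1
    controlPlane:
      replicas: 3
    workers:
      machineDeployments:
      - class: default-worker
        name: workers
        replicas: 9                  # 3 control-plane + 9 workers = 12 nodes
```

&lt;p&gt;Applying this manifest with &lt;code&gt;kubectl apply&lt;/code&gt; is the entire "order"; Cluster API takes care of provisioning and configuring the VMs from there.&lt;/p&gt;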

&lt;p&gt;You can see that this setup is far more capable than statically creating clusters with tools like Terraform and Ansible. This is why setups like this are often referred to as a Kubernetes Platform - because you can order clusters just like you order a pizza from Lieferando.&lt;/p&gt;

&lt;p&gt;In my opinion, it's also a less complicated way to set up multi-node Kubernetes clusters on Proxmox. We simply need to install Cluster API on our Management VM and have a Kubernetes VM template ready. No complex Ansible playbooks or Terraform states are required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our Path to a Scalable Kubernetes Platform on Proxmox
&lt;/h3&gt;

&lt;p&gt;In this blog post, we'll follow these steps to achieve our goal of creating a scalable Kubernetes platform on Proxmox:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create the Management VM as our pivotal point.&lt;/li&gt;
&lt;li&gt;Create a Kubernetes VM Template for our Kubernetes Clusters.&lt;/li&gt;
&lt;li&gt;Initialize Cluster API &amp;amp; Configure the "Caprox-Kubernetes-Engine."&lt;/li&gt;
&lt;li&gt;Create our first Workload Cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In my example, the Management VM and Kubernetes clusters will all reside on a single Proxmox host within one network that utilizes DHCP. DHCP is a requirement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This guide still works, but it is a bit outdated. I recommend checking out the GitHub project that evolved out of this blog post. Here is the updated tutorial: &lt;a href="https://github.com/Caprox-eu/Proxmox-Kubernetes-Engine/blob/main/docs/quick-start.md" rel="noopener noreferrer"&gt;Proxmox-Kubernetes-Engine/Quick-Start&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Management VM with k3sup
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;: All the steps shown here are for home-lab purposes only and &lt;strong&gt;should not be used for a production or even development environment&lt;/strong&gt;. This guide is merely a starting point. If you're looking for a more production-ready setup, feel free to reach out to me.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a New VM for Our Management VM
&lt;/h3&gt;

&lt;p&gt;We need a simple Linux VM for our &lt;strong&gt;Management VM&lt;/strong&gt; with &lt;strong&gt;30 GiB of disk space&lt;/strong&gt; and at least &lt;strong&gt;4 GiB of RAM&lt;/strong&gt;.&lt;/p&gt;
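&lt;p&gt;If you prefer the command line over the GUI, a roughly equivalent VM can be created in a Proxmox node shell with &lt;code&gt;qm&lt;/code&gt;. The VM ID, storage pool, and ISO filename below are assumptions; adjust them to your environment:&lt;/p&gt;

```shell
# Sketch: create the Management VM via the Proxmox CLI instead of the GUI.
# VM ID 900, the "local-lvm" storage pool, and the ISO filename are assumptions.
qm create 900 --name capi-mgmt --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:30 \
  --ide2 local:iso/ubuntu-24.04.2-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2' --ostype l26
qm start 900
```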

&lt;p&gt;For Proxmox beginners, I recommend &lt;a href="https://support.us.ovhcloud.com/hc/en-us/articles/360010916620-How-to-Create-a-VM-in-Proxmox-VE" rel="noopener noreferrer"&gt;this great guide&lt;/a&gt;. You'll also need an ISO image, which you can download &lt;a href="https://ubuntu.com/download/server/thank-you?version=24.04.2&amp;amp;architecture=amd64&amp;amp;lts=true" rel="noopener noreferrer"&gt;here&lt;/a&gt; from the official Ubuntu website.&lt;/p&gt;

&lt;p&gt;Of course, you can also use other methods such as Terraform, cloud-init or cloning an existing Linux VM or template.&lt;/p&gt;

&lt;p&gt;Once you've logged into your new VM, we can proceed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Install Kubernetes (K3s) on Our New VM
&lt;/h3&gt;

&lt;p&gt;Cluster API is Kubernetes-exclusive, meaning it itself relies on Kubernetes to operate. Because of this, we need Kubernetes on our Management VM. In this guide, we'll use &lt;strong&gt;K3s&lt;/strong&gt; for that. K3s is a Kubernetes distribution from Rancher that is now fully community-driven and under the CNCF. It's a popular Kubernetes distro for small to mid-sized clusters.&lt;/p&gt;

&lt;p&gt;To install K3s, we'll use &lt;strong&gt;&lt;code&gt;k3sup&lt;/code&gt;&lt;/strong&gt; - a really simple CLI tool that allows you to create a K3s Kubernetes cluster within seconds on any Linux VM. The following commands will download &lt;code&gt;k3sup&lt;/code&gt; and then create a simple cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install k3s-Kubernetes with k3sup&lt;/span&gt;
curl &lt;span class="nt"&gt;-sLS&lt;/span&gt; https://get.k3sup.dev | sh
&lt;span class="nb"&gt;sudo cp &lt;/span&gt;k3sup /usr/local/bin/k3sup
k3sup &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nt"&gt;--k3s-version&lt;/span&gt; v1.33.1+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can test if the cluster is working with &lt;code&gt;sudo k3s kubectl get nodes&lt;/code&gt;. It should show something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl get nodes
NAME        STATUS   ROLES                  AGE     VERSION
localhost   Ready    control-plane,master   6m30s   v1.33.1+k3s1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations! You've successfully set up a simple single-node Kubernetes cluster. This cluster will serve as our Cluster API Management VM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Your Kubernetes VM Template
&lt;/h2&gt;

&lt;p&gt;Every Kubernetes cluster typically starts with a machine template, most often a &lt;strong&gt;VM template&lt;/strong&gt;. This template is where all Kubernetes components are configured and downloaded. For Cluster API, we need VM templates that already include essential Kubernetes packages like the API server and the container runtime.&lt;/p&gt;

&lt;p&gt;We'll achieve this with the help of the excellent &lt;a href="https://github.com/kubernetes-sigs/image-builder/tree/main" rel="noopener noreferrer"&gt;Kubernetes Image Builder&lt;/a&gt; project. While it sounds fancy, it's essentially &lt;strong&gt;Packer and Ansible&lt;/strong&gt; cleverly glued together to automate the image creation process. But as promised, you don't need to touch Ansible or Packer at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an API Token and a Secret with the Values
&lt;/h3&gt;

&lt;p&gt;To get started, we need to create API access to our Proxmox Datacenter. You might wonder why. The image builder constructs the image directly on your Proxmox node, so it requires access to start a VM and later convert it into a template.&lt;/p&gt;

&lt;p&gt;To begin, open a shell on your Proxmox node. You can do this by clicking on your Proxmox node in the interface and selecting "Shell"; a command-line interface will then open directly in your browser. Once the shell is open, execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pveum user add caprox@pve
pveum aclmod / &lt;span class="nt"&gt;-user&lt;/span&gt; caprox@pve &lt;span class="nt"&gt;-role&lt;/span&gt; PVEAdmin
pveum user token add caprox@pve capi &lt;span class="nt"&gt;-privsep&lt;/span&gt; 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the last command, you should see all the relevant token information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@node01:~# pveum user token add caprox@pve capi &lt;span class="nt"&gt;-privsep&lt;/span&gt; 0
┌──────────────┬──────────────────────────────────────┐
│ key          │ value                                │
╞══════════════╪══════════════════════════════════════╡
│ full-tokenid │ caprox@pve!capi                      │
├──────────────┼──────────────────────────────────────┤
│ info         │ &lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"privsep"&lt;/span&gt;:&lt;span class="s2"&gt;"0"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;                      │
├──────────────┼──────────────────────────────────────┤
│ value        │ 6e59df15-a2c9-4dc5-b293-367772950c68 │
└──────────────┴──────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will also use our Management VM to coordinate the VM template builds. For that, we will use a Kubernetes Job resource and a Kubernetes Secret to pass the API token and some configuration.&lt;/p&gt;

&lt;p&gt;First, let's create the Secret with the values we got above.&lt;br&gt;
You only need to change the first four Proxmox variables.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# secret.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-image-build-config&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-build-infrastructure-system&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://your-proxmox-adress:8006/api2/json"&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_USERNAME&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;full-tokenid"&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;value"&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_NODE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;proxmox&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;node&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;name"&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_ISO_POOL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local"&lt;/span&gt; &lt;span class="c1"&gt;#this should be fine for the most users&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_BRIDGE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vmbr0"&lt;/span&gt; &lt;span class="c1"&gt;#this should be fine for the most users&lt;/span&gt;
  &lt;span class="na"&gt;PROXMOX_STORAGE_POOL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local"&lt;/span&gt; &lt;span class="c1"&gt;#this should be fine for the most users&lt;/span&gt;
  &lt;span class="na"&gt;PACKER_FLAGS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--var&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;memory=4096&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--var&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'kubernetes_rpm_version=1.33.1'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--var&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'kubernetes_semver=v1.33.1'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--var&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'kubernetes_series=v1.33'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--var&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'kubernetes_deb_version=1.33.1-1.1'"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure the needed values and save the file.&lt;/p&gt;

&lt;p&gt;We also need a Job that creates the image. A Job is a Kubernetes resource that runs a Pod once, with a finite life, until it completes successfully. It uses the Image-Builder Docker image with the required configuration to build Kubernetes Proxmox VM templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# job.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CronJob&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-template-builder&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-build-infrastructure-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*"&lt;/span&gt;
  &lt;span class="na"&gt;suspend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; 
  &lt;span class="na"&gt;jobTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;hostNetwork&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OnFailure&lt;/span&gt;
          &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-builder&lt;/span&gt;
            &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.k8s.io/scl-image-builder/cluster-node-image-builder-amd64:v0.1.45&lt;/span&gt;
            &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-image-build-config&lt;/span&gt;
            &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;build-proxmox-ubuntu-2404&lt;/span&gt;
            &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# help for slow nodes&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ANSIBLE_TIMEOUT&lt;/span&gt;
                &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;60"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just copy and save the file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Start the Build-Job and create a Kubernetes Image
&lt;/h3&gt;

&lt;p&gt;Now you should have two files: one Secret and one (Cron)Job. The CronJob has the benefit of being easy to re-run whenever we need a new image, and it even supports scheduled automatic builds. Now it's build time!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create a namespace for the build&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl create namespace proxmox-build-infrastructure-system
&lt;span class="c"&gt;# apply secret &amp;amp; job&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; secret.yaml
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; job.yaml
&lt;span class="c"&gt;# start the build&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl create job build-image &lt;span class="nt"&gt;--from&lt;/span&gt; cj/proxmox-template-builder &lt;span class="nt"&gt;-n&lt;/span&gt; proxmox-build-infrastructure-system 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To monitor the current state, we can look at the logs of the builder Pod. For that, we need the Pod's name and the &lt;code&gt;kubectl logs&lt;/code&gt; command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; proxmox-build-infrastructure-system
NAME                READY   STATUS    RESTARTS   AGE
build-image-tdv78   1/1     Running   0          97s
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; proxmox-build-infrastructure-system build-image-tdv78
proxmox-iso.ubuntu-2204: output will be &lt;span class="k"&gt;in &lt;/span&gt;this color.
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; proxmox-iso.ubuntu-2204: Retrieving ISO
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a new VM pop up in your Proxmox UI. This is often the trickiest part of the guide. Sometimes it can take a long time, or the job might even require multiple attempts. Kubernetes will automatically restart the job if the build fails, so you can just relax and let it do its thing. In my experience, it occasionally took over 30 minutes to create the VM template.&lt;/p&gt;

&lt;p&gt;Please ensure that you have DHCP configured in your network. If you encounter any issues, feel free to leave a comment below; I'm happy to help! Sometimes a Proxmox reboot also helps.&lt;/p&gt;

&lt;p&gt;If you're lucky, you should see a new template in the UI after a few minutes.&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febymhlsjwkqa2o49bhek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febymhlsjwkqa2o49bhek.png" alt=" " width="480" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it - you've completed the "most complicated" part!&lt;/p&gt;

&lt;h2&gt;
  
  
  Initialize Cluster-API
&lt;/h2&gt;

&lt;p&gt;In the next step, we'll configure Cluster-API to our needs. I prefer to avoid keeping files, especially complex ones, directly on my PC. For this reason, we'll use &lt;strong&gt;ArgoCD&lt;/strong&gt; with a &lt;strong&gt;GitOps workflow&lt;/strong&gt; to install Cluster-API. New to GitOps and ArgoCD? Don't worry, it's straightforward!&lt;/p&gt;

&lt;h3&gt;
  
  
  Install ArgoCD and Get Cluster-API
&lt;/h3&gt;

&lt;p&gt;ArgoCD will retrieve the YAML configurations for the various Cluster-API components. You can install ArgoCD using these two commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl create namespace argocd
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the status of ArgoCD with &lt;code&gt;sudo k3s kubectl get pods -n argocd&lt;/code&gt;. Once all pods are running, you are ready for the next step.&lt;/p&gt;

&lt;p&gt;We will install these Cluster-API components as ArgoCD Applications. An ArgoCD Application is similar to something you might know from your Google Play Store or Apple App Store – it's a packaged application or set of applications. Just as you download and install an app on your phone, ArgoCD manages the deployment and lifecycle of these Cluster-API components onto your Kubernetes clusters.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes-sigs/cluster-api-operator" rel="noopener noreferrer"&gt;Cluster-API Operator&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;The operator is responsible for installing Cluster-API and configuring it for Proxmox.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://cert-manager.io/" rel="noopener noreferrer"&gt;Cert-Manager&lt;/a&gt;.

&lt;ul&gt;
&lt;li&gt;Cert-Manager is utilized by Cluster-API to automatically generate and manage certificates required for the Kubernetes clusters.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;a href="https://github.com/Caprox-eu/Proxmox-Kubernetes-Engine/tree/main/manifests/clusterclass-cilium-with-shared-ippool/base" rel="noopener noreferrer"&gt;Caprox-Engine&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;This application includes my opinionated ClusterClass, which has been designed with the needs of home labs and Small and Medium-sized Businesses (SMBs) in mind. A ClusterClass acts as a customizable template, significantly simplifying the process of creating and managing Kubernetes clusters.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app-cert-manager.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-api-operator-cert-manager&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt; 
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;capi-operator-system&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://charts.jetstack.io&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.17.2&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;installCRDs: true&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ServerSideApply=true&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app-cluster-api.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-api-operator-main&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;capi-operator-system&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes-sigs.github.io/cluster-api-operator&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.19.0&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-api-operator&lt;/span&gt;
    &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;manager:&lt;/span&gt;
          &lt;span class="s"&gt;featureGates:&lt;/span&gt;
            &lt;span class="s"&gt;proxmox:&lt;/span&gt;
              &lt;span class="s"&gt;ClusterTopology: true&lt;/span&gt;
            &lt;span class="s"&gt;core:&lt;/span&gt;
              &lt;span class="s"&gt;ClusterTopology: true&lt;/span&gt;
            &lt;span class="s"&gt;kubeadm:&lt;/span&gt;
              &lt;span class="s"&gt;ClusterTopology: true&lt;/span&gt;
        &lt;span class="s"&gt;core:&lt;/span&gt;
          &lt;span class="s"&gt;cluster-api:&lt;/span&gt;
            &lt;span class="s"&gt;enabled: true&lt;/span&gt;
            &lt;span class="s"&gt;version: v1.10.2&lt;/span&gt;
        &lt;span class="s"&gt;bootstrap:&lt;/span&gt;
          &lt;span class="s"&gt;kubeadm: &lt;/span&gt;
            &lt;span class="s"&gt;enabled: true&lt;/span&gt;
            &lt;span class="s"&gt;version: v1.10.2&lt;/span&gt;
        &lt;span class="s"&gt;controlPlane: &lt;/span&gt;
          &lt;span class="s"&gt;kubeadm: &lt;/span&gt;
            &lt;span class="s"&gt;enabled: true&lt;/span&gt;
            &lt;span class="s"&gt;version: v1.10.2&lt;/span&gt;
        &lt;span class="s"&gt;infrastructure: &lt;/span&gt;
          &lt;span class="s"&gt;proxmox:&lt;/span&gt;
            &lt;span class="s"&gt;enabled: true&lt;/span&gt;
            &lt;span class="s"&gt;version: v0.7.1&lt;/span&gt;
        &lt;span class="s"&gt;ipam:&lt;/span&gt;
          &lt;span class="s"&gt;in-cluster:&lt;/span&gt;
            &lt;span class="s"&gt;enabled: true&lt;/span&gt;
            &lt;span class="s"&gt;version: v1.0.1&lt;/span&gt;
        &lt;span class="s"&gt;addon:&lt;/span&gt;
          &lt;span class="s"&gt;helm: &lt;/span&gt;
            &lt;span class="s"&gt;enabled: true&lt;/span&gt;
            &lt;span class="s"&gt;version: v0.3.1&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ServerSideApply=true&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# app-caprox-engine.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-api-operator-caprox-engine&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt; 
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;resources-finalizer.argocd.argoproj.io&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;capi-operator-system&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/Caprox-eu/Proxmox-Kubernetes-Engine.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;99c87250802d886cfce28fe20a313637eae8a80a&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manifests/clusterclass-cilium-with-shared-ippool/base&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CreateNamespace=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ServerSideApply=true&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;ignoreDifferences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster.x-k8s.io&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterClass&lt;/span&gt;
      &lt;span class="na"&gt;jsonPointers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/spec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save these files and apply them one by one. This sequential application is necessary because the applications depend on each other, and we need to ensure each one is ready before proceeding to the next.&lt;/p&gt;

&lt;p&gt;Let's start with cert-manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; app-cert-manager.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until the app is Healthy and Synced.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl get apps &lt;span class="nt"&gt;-A&lt;/span&gt;
NAME                                 SYNC STATUS   HEALTH STATUS
cluster-api-operator-cert-manager    Synced        Healthy
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then continue with Cluster-API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; app-cluster-api.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until the app is Healthy and Synced.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl get apps &lt;span class="nt"&gt;-A&lt;/span&gt;
NAME                                 SYNC STATUS   HEALTH STATUS
cluster-api-operator-cert-manager    Synced        Healthy
cluster-api-operator-main            Synced        Healthy
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, apply the caprox-engine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; app-caprox-engine.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait until the app is Healthy and Synced.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl get apps &lt;span class="nt"&gt;-A&lt;/span&gt;
NAME                                 SYNC STATUS   HEALTH STATUS
cluster-api-operator-caprox-engine   Synced        Healthy
cluster-api-operator-cert-manager    Synced        Healthy
cluster-api-operator-main            Synced        Healthy
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nice! Now we need to configure Cluster-API for our specific environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connect Cluster-API to Proxmox
&lt;/h3&gt;

&lt;p&gt;Cluster-API needs access to your Proxmox cluster to create VMs for Kubernetes clusters. For a homelab setup, we can simply reuse the Proxmox credentials you created in the "Create an API Token and Create a Secret with the Values" step.&lt;/p&gt;

&lt;p&gt;Create a secret with the exact same structure, configure it for your environment, then save and apply it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# secret-capmox.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;6e59df15-a2c9-4dc5-b293-367772950c68"&lt;/span&gt;
  &lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;caprox@pve!capi"&lt;/span&gt;
  &lt;span class="c1"&gt;# note: only the host - not the api-path&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://192.168.2.142:8006/"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;capmox-manager-credentials&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-infrastructure-system&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; secret-capmox.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because we are overwriting the default secret, we need to restart the Proxmox provider so it picks up the new values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl rollout restart deploy capmox-controller-manager &lt;span class="nt"&gt;-n&lt;/span&gt; proxmox-infrastructure-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure IP Range for the Cluster-API Caprox Kubernetes Engine
&lt;/h3&gt;

&lt;p&gt;The Virtual Machines (VMs) provisioned by Proxmox/Cluster-API will require IP addresses. Currently, DHCP support is &lt;a href="https://github.com/ionos-cloud/cluster-api-provider-proxmox/issues/29" rel="noopener noreferrer"&gt;not available&lt;/a&gt;. However, we can specify an IP pool for our Kubernetes VMs. In my setup, I've disabled DHCP for a range of IPs within my FritzBox. (Note: It's recommended to disable DHCP for more IPs than you initially anticipate needing – more on this later.)&lt;/p&gt;

&lt;p&gt;Copy this file, configure it for your network, then save and apply it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ip-pool.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ipam.cluster.x-k8s.io/v1alpha2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;InClusterIPPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;clusterclass-ipv4&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caprox-kubernetes-engine&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Change the IP range to match your needs&lt;/span&gt;
  &lt;span class="c1"&gt;# These IPs will be used for the Kubernetes nodes&lt;/span&gt;
  &lt;span class="c1"&gt;# Also configure your network prefix and gateway accordingly&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;192.168.2.150-192.168.2.199&lt;/span&gt;
  &lt;span class="na"&gt;gateway&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.2.1&lt;/span&gt;
  &lt;span class="na"&gt;prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;24&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; ip-pool.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
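&lt;p&gt;The pool above spans 50 addresses. If you adjust the range, a quick sanity check of the pool size (a throwaway calculation, not part of the setup) looks like this:&lt;br&gt;
&lt;/p&gt;

```shell
# Count the usable addresses in an InClusterIPPool range such as
# 192.168.2.150-192.168.2.199 (comparing last octets, same /24 assumed)
start=150
end=199
echo "pool size: $(( end - start + 1 ))"
```

&lt;p&gt;If you shrink the pool, keep in mind the earlier note about reserving more IPs than you initially expect to need.&lt;/p&gt;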



&lt;p&gt;That's it! We've successfully installed ArgoCD on our Management VM, used ArgoCD to install Cluster-API, and configured Cluster-API for our environment. Now we're finally ready to create our first multi-node Kubernetes cluster!&lt;/p&gt;

&lt;h2&gt;
  
  
  Create our first Workload Cluster
&lt;/h2&gt;

&lt;p&gt;As always in Kubernetes, everything is a file, and the same is true for a cluster in Cluster-API. So yes, we will create a Kubernetes cluster from a Cluster YAML manifest. The file is later processed by Cluster-API inside our Management VM.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Cluster Resource
&lt;/h3&gt;

&lt;p&gt;A cluster configuration compatible with our setup could look like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# cluster.yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster.x-k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;caprox.eu/cni&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium-v1.17.4&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;manuels-k8s-cluster&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;caprox-kubernetes-engine&lt;/span&gt; 
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;topology&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-clusterclass-cilium-v0.1.0&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.33.1&lt;/span&gt;
    &lt;span class="na"&gt;controlPlane&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;workers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;machineDeployments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-worker&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;proxmox-worker-pool&lt;/span&gt;
        &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
    &lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cloneSpec&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;vmTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;sourceNode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;node01&lt;/span&gt;
          &lt;span class="na"&gt;templateID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;114&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controlPlaneEndpoint&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;192.168.2.201&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fields you &lt;strong&gt;must&lt;/strong&gt; change are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;vmTemplate&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;sourceNode&lt;/code&gt;&lt;/strong&gt;: Set this to the name of the Proxmox node where your VM template is located.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;templateID&lt;/code&gt;&lt;/strong&gt;: This is the ID of your Kubernetes template. You can find it in the Proxmox UI, listed as the number next to the template name.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;controlPlaneEndpoint&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This IP address will be used to access your Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Ensure it's outside your defined IP pool range.&lt;/li&gt;
&lt;li&gt;This is a floating IP, shared among different nodes. This ensures your cluster remains reachable even during node failures or maintenance, as long as at least one node is available.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;The fields you &lt;strong&gt;can&lt;/strong&gt; change are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;&lt;code&gt;controlplane.replicas&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This specifies the number of VMs for your control plane (master) nodes. These nodes host Kubernetes control plane components like the Kubernetes API and etcd (the Kubernetes database).&lt;/li&gt;
&lt;li&gt;Set this to &lt;code&gt;1&lt;/code&gt; for a non-HA (High Availability) setup, or &lt;code&gt;3&lt;/code&gt; or &lt;code&gt;5&lt;/code&gt; for an HA setup.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;workers.replicas&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This determines the number of VMs that will run your workloads.&lt;/li&gt;
&lt;li&gt;Set this to at least &lt;code&gt;1&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;metadata.name&lt;/code&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is the name of your Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
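&lt;p&gt;The odd control-plane counts (1, 3, 5) come from etcd's quorum rule: a cluster of n members stays writable only while floor(n/2) + 1 members are healthy. A quick illustration of why even counts buy you nothing extra (plain arithmetic, independent of the cluster):&lt;br&gt;
&lt;/p&gt;

```shell
# etcd quorum: a cluster of n members needs floor(n/2) + 1 healthy members,
# so it tolerates n - quorum member failures
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "$n members: quorum $quorum, tolerates $(( n - quorum )) failure(s)"
done
```

&lt;p&gt;A 4-member cluster has the same failure tolerance as a 3-member one (quorum 3, one failure), which is why 3 and 5 are the usual HA choices.&lt;/p&gt;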

&lt;p&gt;For more configuration options, including memory and CPU settings for your VMs, refer to the available &lt;strong&gt;variables&lt;/strong&gt; &lt;a href="https://github.com/3deep5me/kubernetes-gitops/blob/cluster-api-action/k3s-mgmt-proxmox/manifest/capmox-setup/cilium-clusterclass-with-shared-ippool/base/variables/clonespec.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After you've set the required fields, remember to save the file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling and New Clusters
&lt;/h3&gt;

&lt;p&gt;If you want to &lt;strong&gt;scale out&lt;/strong&gt; your current cluster, simply adjust the &lt;strong&gt;node replica counts&lt;/strong&gt; in your configuration file. Then, reapply the changes using &lt;code&gt;kubectl apply -f cluster.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To create an &lt;strong&gt;additional cluster&lt;/strong&gt;, create a new file with a different &lt;strong&gt;&lt;code&gt;metadata.name&lt;/code&gt;&lt;/strong&gt; and set your desired node replica counts.&lt;/p&gt;
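&lt;p&gt;For example, a second cluster manifest might differ from the first only in these fields (the name and replica counts below are illustrative, not prescribed values):&lt;br&gt;
&lt;/p&gt;

```yaml
# cluster-two.yaml - sketch of the fields that change for a second cluster;
# everything else stays identical to cluster.yaml
metadata:
  name: second-k8s-cluster        # must be unique per cluster
spec:
  topology:
    controlPlane:
      replicas: 3                 # e.g. an HA control plane this time
    workers:
      machineDeployments:
      - class: proxmox-worker
        name: proxmox-worker-pool
        replicas: 2
```

&lt;p&gt;Each additional cluster also needs its own &lt;code&gt;controlPlaneEndpoint&lt;/code&gt; host, since that floating IP cannot be shared between clusters.&lt;/p&gt;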

&lt;h3&gt;
  
  
  Create the Workload Cluster and Retrieve the Kubeconfig
&lt;/h3&gt;

&lt;p&gt;To create the cluster, we need to "send" our cluster configuration to our Management VM. This action will kick off the cluster creation process, which is often referred to as &lt;strong&gt;reconciliation&lt;/strong&gt;. After the first control plane VM is created and started, we can retrieve the &lt;strong&gt;kubeconfig&lt;/strong&gt; for our new cluster. The kubeconfig file contains all the necessary login information to access the created cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create the Cluster&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; cluster.yaml

&lt;span class="c"&gt;# Retrieve and save the kubeconfig&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;k3s kubectl get secrets &lt;span class="nt"&gt;-n&lt;/span&gt; caprox-kubernetes-engine manuels-k8s-cluster-kubeconfig &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.data.value}'&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; kubeconfig.yaml

&lt;span class="c"&gt;# Connect to the new cluster&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;./kubeconfig.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
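&lt;p&gt;The retrieval command works because Cluster-API stores the kubeconfig base64-encoded in the secret's &lt;code&gt;value&lt;/code&gt; field. The decode step in isolation, with sample data instead of a real secret:&lt;br&gt;
&lt;/p&gt;

```shell
# Round-trip a sample kubeconfig header through base64, mirroring what the
# jsonpath + base64 --decode pipeline above does with the real secret
encoded=$(printf 'apiVersion: v1\nkind: Config' | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```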



&lt;p&gt;You can verify the connection by listing the nodes in your new cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# This command should now show your multiple nodes, depending on your configuration.&lt;/span&gt;
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes
NAME                                            STATUS   ROLES           AGE     VERSION
manuels-k8s-cluster-control-plane-4z458-ghx2z   Ready    control-plane   11m     v1.33.1
manuels-k8s-cluster-worker-6vhjx-c2nv2-v4dbw    Ready    node            5m23s   v1.33.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to interact with the Management VM/Cluster API again, you'll need to unset the environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;unset &lt;/span&gt;KUBECONFIG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
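&lt;p&gt;This works because &lt;code&gt;kubectl&lt;/code&gt; falls back to its default config (for &lt;code&gt;k3s kubectl&lt;/code&gt;, the k3s-managed kubeconfig) whenever the variable is unset. The shell mechanics in isolation, with no cluster involved:&lt;br&gt;
&lt;/p&gt;

```shell
# Toggle the variable kubectl reacts to; the file itself is never opened here
export KUBECONFIG=./kubeconfig.yaml
echo "KUBECONFIG is ${KUBECONFIG:-unset, default config applies}"
unset KUBECONFIG
echo "KUBECONFIG is ${KUBECONFIG:-unset, default config applies}"
```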



&lt;p&gt;Congratulations! You've successfully completed the whole process. You can now &lt;strong&gt;dynamically create Kubernetes clusters in your Proxmox datacenter in an API-driven way.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you'd like a guide on how to update your existing cluster, please let me know in the comments!&lt;/p&gt;

&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;There are many reasons why cluster creation might fail. I've tried to address all common errors within the template and reduce the setup steps to make the process as lean as possible. All versions and dependencies are fixed. I've tested this guide on multiple Proxmox datacenters and also asked friends to test it, and it has been successful every time.&lt;/p&gt;

&lt;p&gt;However, if you encounter problems, it's likely that you either missed a step or didn't follow the guide exactly as described. Feel free to leave a comment if you get stuck or any step is unclear.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Next?
&lt;/h3&gt;

&lt;p&gt;Now that your Kubernetes engine is configured, what can you do with it? Here are some common next steps I like to take on newly created Kubernetes clusters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://dev.to/3deep5me/using-1password-with-external-secrets-operator-in-a-gitops-way-4lo4"&gt;Set up 1Password with a external secret manager&lt;/a&gt; for secure secret management.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install an ingress controller&lt;/strong&gt; to manage external access to cluster services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure load balancing in Cilium&lt;/strong&gt; for multi-node load balancing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install applications&lt;/strong&gt; like databases or Pi-hole using Helm.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Trivia
&lt;/h3&gt;

&lt;p&gt;For this blog post, the following PRs/comments/issues were contributed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/Caprox-eu/Proxmox-Kubernetes-Engine" rel="noopener noreferrer"&gt;https://github.com/Caprox-eu/Proxmox-Kubernetes-Engine&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/kubernetes-sigs/image-builder/pull/1778" rel="noopener noreferrer"&gt;https://github.com/kubernetes-sigs/image-builder/pull/1778&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ionos-cloud/cluster-api-provider-proxmox/pull/499" rel="noopener noreferrer"&gt;https://github.com/ionos-cloud/cluster-api-provider-proxmox/pull/499&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ionos-cloud/cluster-api-provider-proxmox/issues/492" rel="noopener noreferrer"&gt;https://github.com/ionos-cloud/cluster-api-provider-proxmox/issues/492&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/kubernetes-sigs/image-builder/issues/1762" rel="noopener noreferrer"&gt;https://github.com/kubernetes-sigs/image-builder/issues/1762&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>proxmox</category>
      <category>homelab</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using 1Password with External Secrets Operator in a GitOps way</title>
      <dc:creator>3deep5me</dc:creator>
      <pubDate>Tue, 30 Jul 2024 15:35:57 +0000</pubDate>
      <link>https://dev.to/3deep5me/using-1password-with-external-secrets-operator-in-a-gitops-way-4lo4</link>
      <guid>https://dev.to/3deep5me/using-1password-with-external-secrets-operator-in-a-gitops-way-4lo4</guid>
      <description>&lt;p&gt;I recently created a &lt;a href="https://gist.github.com/3deep5me/86ce9a0a2691d21d69684b01432bc1f6" rel="noopener noreferrer"&gt;free Kubernetes cluster&lt;/a&gt; on Oracles Always Free Tier. But how to handle secrets in a secure way, especially in a GitOps scenario?&lt;/p&gt;

&lt;p&gt;This post describes the full journey of integrating the External Secrets Operator with 1Password to automatically inject secrets from 1Password into your cluster. For managing the different components I will use Argo CD, but you can also use a plain &lt;code&gt;helm install&lt;/code&gt; or FluxCD.&lt;/p&gt;

&lt;p&gt;My goal was an enterprise-proven solution that integrates with different secret-management tools; in my case, that's 1Password.&lt;/p&gt;

&lt;h1&gt;
  
  
  External Secrets Operator
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;External Secrets Operator is a Kubernetes operator that integrates external secret management systems like AWS Secrets Manager, HashiCorp Vault, Google Secrets Manager, Azure Key Vault, IBM Cloud Secrets Manager, CyberArk Conjur and many more. The operator reads information from external APIs and automatically injects the values into a Kubernetes Secret.&lt;/em&gt;&lt;br&gt;
-external-secrets.io&lt;/p&gt;

&lt;p&gt;I like that I can, in theory, change my secret-management tool without touching my secrets. It's also possible to use the same secret file in all my deployments across different stages, because the secret values are filled in according to the current stage. These were all things that were often a pain for me with sealed-secrets.&lt;/p&gt;
&lt;h1&gt;
  
  
  1Password
&lt;/h1&gt;

&lt;p&gt;A really comfortable password manager.&lt;br&gt;
I already have it, so why not use it for K8s too?&lt;/p&gt;
&lt;h1&gt;
  
  
  Architecture
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxvfwirc35mnc91kdutz.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxvfwirc35mnc91kdutz.webp" alt="Image description" width="591" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's have a look at the architecture, especially the mechanics between the External Secrets Operator and Kubernetes, and between 1Password and the External Secrets Operator.&lt;/p&gt;
&lt;h2&gt;
  
  
  External Secret Operator
&lt;/h2&gt;

&lt;p&gt;The External Secrets Operator generally uses two &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="noopener noreferrer"&gt;Custom Resources&lt;/a&gt;. The first is the &lt;strong&gt;SecretStore&lt;/strong&gt;, which holds the information needed to connect to the "secret backend". The secret backend can be something like AWS Secrets Manager, Scaleway, or, as in our case, 1Password; the SecretStore holds, for example, its URL and authentication details. You can have multiple SecretStores if you use more than one provider. The other resource is the &lt;strong&gt;ExternalSecret&lt;/strong&gt;, which is a blueprint for the secret that should be created: in practice, when you create an ExternalSecret, a normal K8s Secret follows. The ExternalSecret specifies which entry to fetch from 1Password and how to populate the values in the resulting Secret. (More on this later.)&lt;/p&gt;
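&lt;p&gt;A minimal sketch of the two resources with a 1Password Connect backend. The names and vault entry are illustrative, and while the field layout follows the external-secrets.io documentation, treat the details as an assumption to verify against your operator version:&lt;br&gt;
&lt;/p&gt;

```yaml
# SecretStore: how to reach the backend (illustrative names)
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: onepassword
spec:
  provider:
    onepassword:
      connectHost: http://onepassword-connect:8080   # the Connect proxy
      vaults:
        my-vault: 1
      auth:
        secretRef:
          connectTokenSecretRef:
            name: onepassword-connect-token
            key: token
---
# ExternalSecret: blueprint for the K8s Secret that will be created
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example
spec:
  secretStoreRef:
    name: onepassword
    kind: SecretStore
  target:
    name: example              # name of the resulting K8s Secret
  data:
  - secretKey: password        # key inside the resulting Secret
    remoteRef:
      key: my-1password-item   # entry in 1Password
      property: password       # field of that entry
```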
&lt;h2&gt;
  
  
  1Password Connect
&lt;/h2&gt;

&lt;p&gt;1Password Connect is a kind of proxy that caches secrets from the 1Password cloud service, so you can retrieve secrets even if 1Password is down; it also reduces the number of requests to 1Password.&lt;br&gt;
It is required for the External Secrets Operator to work.&lt;br&gt;
In my approach, it runs right next to the External Secret Operator in the same namespace.&lt;/p&gt;

&lt;p&gt;Altogether, the workflow looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An ExternalSecret is created&lt;/li&gt;
&lt;li&gt;The External Secret Operator uses the information in the SecretStore to fetch the needed password/login from 1Password Connect&lt;/li&gt;
&lt;li&gt;1Password Connect itself fetches the needed password/login directly from 1Password&lt;/li&gt;
&lt;li&gt;The External Secret Operator receives the password/login &amp;amp; creates the normal Kubernetes Secret with the requested values&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't worry if you don't understand the whole process yet; we'll do a full hands-on walk-through at the end.&lt;/p&gt;
&lt;h1&gt;
  
  
  Install needed Helm-Charts
&lt;/h1&gt;

&lt;p&gt;For the installation of the Helm charts I will use Argo CD apps, because I find this approach really portable: if I have a new cluster, I just install Argo CD, run a &lt;code&gt;kubectl apply&lt;/code&gt;, and all my needed apps are installed. Beyond that, I can use Kustomize or Helm, and I get an auto-update feature for the apps if I want it: just set the &lt;code&gt;targetRevision&lt;/code&gt; field to something like &lt;code&gt;1.x.x&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;But you can also use plain Helm or Flux CD.&lt;/p&gt;
&lt;h2&gt;
  
  
  Install 1Password-Connect
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Install the helm-chart
&lt;/h3&gt;

&lt;p&gt;I modified the default values of the official &lt;a href="https://github.com/1Password/connect-helm-charts/tree/main/charts/connect" rel="noopener noreferrer"&gt;Chart&lt;/a&gt; to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# values.yaml
connect:
  serviceType: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The default value is NodePort, which is IMO dangerous because it exposes your secret-management API to the local network or even the internet. ClusterIP is sufficient because the External Secrets Operator runs inside the cluster, not outside.&lt;/p&gt;

&lt;p&gt;To install the Chart with the predefined values you can just apply this argocd-app. (&lt;code&gt;kubectl apply -f app-1password-connect.yaml&lt;/code&gt;)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app-1password-connect.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: 1password-connect
  namespace: argocd 
spec:
  destination:
    namespace: external-secrets
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: https://1password.github.io/connect-helm-charts
    targetRevision: 1.x.x
    chart: connect
    helm:
      values: |
        connect:
          serviceType: ClusterIP
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that you should have a failed Pod in the external-secrets namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n external-secrets
onepassword-connect-5fcbd4c68b-fgx68                         0/2     CreateContainerConfigError   0             4d20h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The reason is &lt;code&gt;Error: secret "op-credentials" not found&lt;/code&gt; (see &lt;code&gt;kubectl get event -n external-secrets&lt;/code&gt;). This Secret contains the credentials the Connect server uses to connect to 1Password.&lt;/p&gt;

&lt;p&gt;To create the Secret, install the &lt;a href="https://1password.com/downloads/command-line/" rel="noopener noreferrer"&gt;1Password CLI&lt;/a&gt; and execute the following commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a new Vault for the Secrets
$ op vault create "K8s"
# Create a connect server in 1Password
$ op connect server create "Kubernetes" --vaults "K8s"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The location of the credentials file can be seen in the output.&lt;/p&gt;

&lt;p&gt;With the 1password-credentials.json in place we can create the needed secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The connect service expects the 1password-credentials.json in base64
$ kubectl create secret generic op-credentials -n external-secrets --from-literal=1password-credentials.json="$(cat /path/to/1password-credentials.json | base64)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a while, the pod should be running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pod -n external-secrets
onepassword-connect-5fcbd4c68b-chmfg                         2/2     Running   0
  21m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations, you completed the first step: the 1Password Connect server is installed in Kubernetes!&lt;/p&gt;

&lt;h2&gt;
  
  
  Install the External Secret Operator
&lt;/h2&gt;

&lt;p&gt;For the installation you can again use an Argo CD app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets-operator
  namespace: argocd 
spec:
  destination:
    namespace: external-secrets
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: https://charts.external-secrets.io
    targetRevision: 0.x.x
    chart: external-secrets
  syncPolicy:
    automated: {}
    syncOptions:
    - CreateNamespace=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that your namespace should look like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n external-secrets
NAME                                                         READY   STATUS    RESTARTS      AGE
external-secrets-operator-5db95c68b8-j6pzj                   1/1     Running   1 (18d ago)   18d
external-secrets-operator-cert-controller-65f74dc7bf-784hg   1/1     Running   1 (18d ago)   18d
external-secrets-operator-webhook-5475bddd67-p9vq9           1/1     Running   0             18d
onepassword-connect-5fcbd4c68b-chmfg                         2/2     Running   0             31m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can establish the connection to the connect server (höhö).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the token and save it in the OP_CONNECT_TOKEN environment variable 
export OP_CONNECT_TOKEN=$(op connect token create "external-secret-operator" --server "Kubernetes" --vault "K8s")
# Create secret with the token which is used by the External-Secret-Operator ClusterSecretStore
kubectl create secret -n external-secrets generic onepassword-connect-token --from-literal=token=$OP_CONNECT_TOKEN
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create the ClusterSecretStore with &lt;code&gt;kubectl apply -f&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
  name: k8s
spec:
  provider:
    onepassword:
      connectHost: http://onepassword-connect:8080
      vaults:
        K8s: 1  # look in this vault first
      auth:
        secretRef:
          connectTokenSecretRef:
            name: onepassword-connect-token
            key: token
            namespace: external-secrets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To see if the connection was successful, you can check the status of the ClusterSecretStore.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get -n external-secrets clustersecretstores.external-secrets.io k8s
NAME   AGE    STATUS   CAPABILITIES   READY
k8s    103m   Valid    ReadOnly       True
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Congratulations, you have also installed the External Secrets Operator and established a connection to 1Password Connect.&lt;/p&gt;

&lt;h2&gt;
  
  
  End-to-End-Test the Solution
&lt;/h2&gt;

&lt;p&gt;Theoretically, you should now be able to retrieve secrets from 1Password, but let's test it with a practical example.&lt;/p&gt;

&lt;p&gt;First, let's create a secret in 1Password with the 1Password CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This creates a new entry named "Scaleway Credentails" with two properties/fields. 
op item create --vault="K8s" --title="Scaleway Credentials" --category="login" accessKeyId="token-xyz" secretKey="xyz"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the next step, we create our ExternalSecret, which in turn creates the normal Kubernetes Secret.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  # name of the ExternalSecret &amp;amp; Secret which gets created
  name: scaleway-credentials
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: k8s
  target:
    creationPolicy: Owner
  data:
  - secretKey: accessKeyId
    remoteRef:
      # 1password-entry-name
      key: "Scaleway Credentials"
      # 1password-field
      property: accessKeyId
  - secretKey: secretKey
    remoteRef:
      # 1password-entry-name
      key: "Scaleway Credentials"
      # 1password-field
      property: secretKey
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tried my best to explain the references with the comments.&lt;br&gt;
To summarize: the name of the ExternalSecret does not matter for the lookup; it only determines the name of the created Secret. The &lt;code&gt;key&lt;/code&gt; field references the entry in 1Password, e.g. dev.to. The &lt;code&gt;property&lt;/code&gt; field references the field of that entry, e.g. password or username, while &lt;code&gt;secretKey&lt;/code&gt; is the key under which the value is stored in the Kubernetes Secret; here I kept both the same.&lt;/p&gt;
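<p>Applied to the dev.to example, a hypothetical ExternalSecret (the entry and field names are made up for illustration) would map like this:</p>

```yaml
# Hypothetical mapping: a 1Password entry "dev.to" with a "password" field
# becomes a K8s Secret "devto-login" containing the key "password".
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: devto-login          # name of the ExternalSecret & created Secret
spec:
  secretStoreRef:
    kind: ClusterSecretStore
    name: k8s
  target:
    creationPolicy: Owner
  data:
  - secretKey: password      # key in the created K8s Secret
    remoteRef:
      key: "dev.to"          # entry in 1Password
      property: password     # field of that entry
```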

&lt;p&gt;You can check the status of the ExternalSecret and verify that the normal K8s Secret was created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get externalsecret -n default
NAME                   STORE   REFRESH INTERVAL   STATUS         READY
scaleway-credentials   k8s     1h                 SecretSynced   True
$ kubectl get secret -n default
NAME                   TYPE     DATA   AGE
scaleway-credentials   Opaque   2      7m13s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that the Secret was created. You can also take a deeper look at the values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get secret -n default -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    accessKeyId: dG9rZW4teHl6
    secretKey: eHl6
  immutable: false
  kind: Secret
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"external-secrets.io/v1beta1","kind":"ExternalSecret","metadata":{"annotations":{},"name":"scaleway-credentials","namespace":"default"},"spec":{"data":[{"remoteRef":{"key":"Scaleway Credentials","property":"accessKeyId"},"secretKey":"accessKeyId"},{"remoteRef":{"key":"Scaleway Credentials","property":"secretKey"},"secretKey":"secretKey"}],"secretStoreRef":{"kind":"ClusterSecretStore","name":"k8s"},"target":{"creationPolicy":"Owner"}}}
      reconcile.external-secrets.io/data-hash: f3d79dd2ad3055723d0bebe3481f2372
    creationTimestamp: "2024-07-29T15:36:04Z"
    labels:
      reconcile.external-secrets.io/created-by: c66895ad2c04350c73c512c0175cee24
    name: scaleway-credentials
    namespace: default
    ownerReferences:
    - apiVersion: external-secrets.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: ExternalSecret
      name: scaleway-credentials
      uid: 1df5f0f5-ec44-4cb7-859a-3d3ef60b1d45
    resourceVersion: "689603"
    uid: 9543092a-7bdc-4695-9c83-bc35c36188c0
  type: Opaque
kind: List
metadata:
  resourceVersion: ""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Under &lt;code&gt;data&lt;/code&gt; you can see both keys and their corresponding values, encoded in base64.&lt;/p&gt;
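<p>If you want to see the plain-text values, you can decode them with <code>base64 -d</code>. A small sketch (the <code>kubectl</code> line assumes the scaleway-credentials Secret from above):</p>

```shell
# Decode the base64-encoded values from the Secret output above.
echo 'dG9rZW4teHl6' | base64 -d   # prints "token-xyz"
echo 'eHl6' | base64 -d           # prints "xyz"

# Against the live cluster you can fetch and decode a single key in one go:
# kubectl get secret scaleway-credentials -n default \
#   -o jsonpath='{.data.accessKeyId}' | base64 -d
```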

&lt;p&gt;Congratulations, you successfully created a secret in 1Password and pulled it into a normal Kubernetes Secret, which can now be used as usual in Deployments and other resources. 🥳🥳🥳&lt;/p&gt;

&lt;p&gt;Thanks for reading my first post!&lt;br&gt;
I hope you had fun and that it helped you. I'd love to hear your thoughts or feedback, so feel free to leave a comment below.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>security</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
