<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Boriss V</title>
    <description>The latest articles on DEV Community by Boriss V (@bnovickovs).</description>
    <link>https://dev.to/bnovickovs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2942085%2F5126127b-e420-4a89-a146-23c6f7527674.jpg</url>
      <title>DEV Community: Boriss V</title>
      <link>https://dev.to/bnovickovs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bnovickovs"/>
    <language>en</language>
    <item>
      <title>Running Talos Linux with GPU Passthrough on QEMU</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Fri, 12 Sep 2025 06:01:10 +0000</pubDate>
      <link>https://dev.to/bnovickovs/running-talos-linux-with-gpu-passthrough-on-qemu-1ec6</link>
      <guid>https://dev.to/bnovickovs/running-talos-linux-with-gpu-passthrough-on-qemu-1ec6</guid>
      <description>&lt;p&gt;I’ve been spending a lot of time with Talos Linux&lt;br&gt;
 lately. It’s awesome for running Kubernetes in a minimal, immutable way. But there’s always that moment when you go:&lt;/p&gt;

&lt;p&gt;Talos is lightweight, QEMU is great for homelabs, but GPU passthrough is where things get a bit hacky. So I decided to make it work — and I put the results in a repo:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/kubebn/talos-qemu-gpu-passthrough" rel="noopener noreferrer"&gt;https://github.com/kubebn/talos-qemu-gpu-passthrough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running Talos on QEMU is easy enough: spin up a VM, boot the ISO, apply a machine config, done. But once I needed NVIDIA GPU support for workloads (think AI/ML testing or just experimenting with device plugins), the easy path stopped being enough.&lt;/p&gt;
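&lt;p&gt;For reference, the happy path looks roughly like this. This is a sketch only: the disk size, memory, forwarded port, and file names are my own placeholder choices, not taken from the repo.&lt;/p&gt;

```shell
# Create a disk and boot the Talos ISO (values are illustrative)
qemu-img create -f qcow2 talos.qcow2 20G
qemu-system-x86_64 \
  -m 4096 -smp 2 -enable-kvm \
  -drive file=talos.qcow2,format=qcow2 \
  -cdrom metal-amd64.iso \
  -nic user,hostfwd=tcp::50000-:50000   # forward the Talos API port

# Generate a machine config and apply it while the node is in maintenance mode
talosctl gen config my-cluster https://127.0.0.1:6443
talosctl apply-config --insecure --nodes 127.0.0.1 --file controlplane.yaml
```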

&lt;p&gt;Here's the thing about GPU passthrough - you can't just point QEMU at your GPU and hope for the best. The host OS needs to completely release control of the GPU to the VFIO subsystem. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your GPU disappears from the host (no more nvidia-smi on that card)&lt;/li&gt;
&lt;li&gt;VFIO takes ownership via kernel parameters&lt;/li&gt;
&lt;li&gt;The VM gets exclusive access to the hardware&lt;/li&gt;
&lt;/ul&gt;
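&lt;p&gt;Concretely, handing a card to VFIO usually means passing its PCI vendor:device IDs on the kernel command line and checking the driver swap after a reboot. The IDs and PCI address below are illustrative examples, not a specific card from the repo.&lt;/p&gt;

```shell
# 1. Find the GPU's vendor:device IDs (its audio function must be passed along too)
lspci -nn | grep -i nvidia

# 2. Hand those IDs to vfio-pci via kernel parameters, e.g. in GRUB:
#    GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt vfio-pci.ids=10de:2204,10de:1aef"
#    (IDs here are illustrative; substitute your card's, then update-grub and reboot)

# 3. After the reboot, the card should be bound to vfio-pci instead of nvidia
lspci -nnk -s 01:00.0 | grep 'Kernel driver in use'
```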

&lt;p&gt;It's like lending your car to a friend - they get all the control, you get none.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OVMF and UEFI Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Modern GPUs are picky about firmware. The scripts detect if your system supports OVMF (UEFI for VMs) and configure things appropriately. This is especially important for newer cards that expect UEFI environments.&lt;/p&gt;
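&lt;p&gt;On most distros, OVMF ships as a pair of pflash images: read-only firmware code plus a writable NVRAM vars file. A minimal sketch of wiring them into QEMU; the firmware paths vary by distro, and the PCI address is an example:&lt;/p&gt;

```shell
# Copy the writable VARS image per-VM so NVRAM changes persist
cp /usr/share/OVMF/OVMF_VARS.fd talos_VARS.fd

qemu-system-x86_64 \
  -enable-kvm -m 8192 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=talos_VARS.fd \
  -device vfio-pci,host=01:00.0 \
  -cdrom metal-amd64.iso
```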

&lt;p&gt;&lt;strong&gt;Closing thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is homelab territory, so expect some rough edges. If you’ve ever wrestled with PCI passthrough, you know it’s always a bit finicky.&lt;/p&gt;

&lt;p&gt;But hey, I got it working, and now my Talos VMs can see GPUs just fine. Hopefully it helps someone else avoid a weekend of trial-and-error.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comparing Cilium Networking Setups on a Talos Hybrid Kubernetes Cluster</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Thu, 11 Sep 2025 10:59:58 +0000</pubDate>
      <link>https://dev.to/bnovickovs/comparing-cilium-networking-setups-on-a-talos-hybrid-kubernetes-cluster-3gdk</link>
      <guid>https://dev.to/bnovickovs/comparing-cilium-networking-setups-on-a-talos-hybrid-kubernetes-cluster-3gdk</guid>
      <description>&lt;p&gt;Recently, I’ve been experimenting with different networking configurations for a Talos Linux Kubernetes cluster deployed in hybrid mode - with control plane nodes running in AWS and a worker node hosted on-premises in QEMU. My goal was to evaluate how Cilium CNI behaves in such a setup, especially when combined with KubeSpan, Talos’ native WireGuard-based mesh networking layer.&lt;/p&gt;

&lt;p&gt;In this post, I’ll share my findings from three different setups, highlighting the challenges, performance results, and takeaways for hybrid environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Cilium Native WireGuard with KubeSpan Disabled&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My first experiment was running Cilium’s native WireGuard encryption while disabling KubeSpan.&lt;/p&gt;

&lt;p&gt;On paper, this should provide secure pod-to-pod communication. In practice, it failed. The reason lies in how Cilium implements WireGuard - it assumes direct IP connectivity between nodes.&lt;/p&gt;

&lt;p&gt;In my hybrid setup, the on-prem worker lives behind NAT, which makes it unreachable for AWS nodes. Since Cilium does not support NAT traversal techniques (e.g., hole punching or STUN-like mechanisms), the WireGuard handshake could not be established.&lt;/p&gt;

&lt;p&gt;This is exactly where KubeSpan shines. Unlike Cilium’s implementation, KubeSpan was designed for hybrid, cloud, and NAT-constrained topologies. It automatically builds WireGuard tunnels across boundaries, enabling connectivity even when nodes are hidden behind NAT.&lt;/p&gt;
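&lt;p&gt;Turning KubeSpan on is a small machine-config change per node. A sketch using an inline JSON patch; the node IP is a placeholder (and note that Talos discovery, on by default, is what lets peers find each other behind NAT):&lt;/p&gt;

```shell
# Enable KubeSpan on a node (repeat for each node; IP is illustrative)
talosctl patch machineconfig --nodes 10.0.0.2 \
  --patch '[{"op": "add", "path": "/machine/network/kubespan", "value": {"enabled": true}}]'
```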

&lt;p&gt;Takeaway: Without KubeSpan, Cilium WireGuard isn’t viable in hybrid deployments with NAT.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Native Routing to Reduce VXLAN Encapsulation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The second setup explored Cilium’s native routing as a way to avoid VXLAN encapsulation. VXLAN is fine in co-located clusters, but the extra encapsulation adds per-packet overhead, which matters most for cross-node traffic in a hybrid cluster.&lt;/p&gt;

&lt;p&gt;At first, I assumed native routing wouldn’t work outside of tightly connected environments. However, with a few tweaks it became possible:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Deploy a DaemonSet that extracts each node’s cilium_host IP.&lt;/li&gt;
&lt;li&gt;Assign a secondary IP with a wider subnet mask (e.g., /24).&lt;/li&gt;
&lt;li&gt;Enable advertiseKubernetesNetworks so that pod CIDRs are shared across nodes.&lt;/li&gt;
&lt;li&gt;Ensure KubeSpan peers include these CIDRs in their AllowedIPs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This workaround allowed Cilium to operate in native routing mode, bypassing VXLAN encapsulation even in the hybrid cluster.&lt;/p&gt;
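&lt;p&gt;The per-node plumbing can be sketched roughly as follows. Everything here is illustrative: the /24 mask, the node IP, and running the first part from a privileged DaemonSet are assumptions on my side, not a drop-in recipe.&lt;/p&gt;

```shell
# Widen cilium_host's mask with a secondary address so node routes
# cover the pod CIDR (addresses/masks are examples)
CILIUM_IP=$(ip -4 -o addr show dev cilium_host | awk '{print $4}' | cut -d/ -f1)
ip addr add "${CILIUM_IP}/24" dev cilium_host

# Talos side: have KubeSpan advertise pod CIDRs into peers' AllowedIPs
talosctl patch machineconfig --nodes 10.0.0.2 \
  --patch '[{"op": "add", "path": "/machine/network/kubespan/advertiseKubernetesNetworks", "value": true}]'
```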

&lt;p&gt;Takeaway: With some custom plumbing, native routing works across NAT boundaries when combined with KubeSpan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Test Results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I ran a series of TCP/UDP performance benchmarks. The full dataset is available here, but here’s the summary:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Stream Tests: These tests were instrumental &lt;span class="k"&gt;in &lt;/span&gt;evaluating the throughput performance of pods and nodes.
RR tests &lt;span class="o"&gt;(&lt;/span&gt;Request-Response&lt;span class="o"&gt;)&lt;/span&gt;: These tests allowed us to assess the packet per second and latency performance of pods and nodes.
CRR tests &lt;span class="o"&gt;(&lt;/span&gt;Connect-Request-Response&lt;span class="o"&gt;)&lt;/span&gt;: By utilizing this scenario, we could evaluate the New Connection Per Second performance of pods and nodes.


&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
KubeSpan Enabled. Cilium settings:

k8sServiceHost: localhost
k8sServicePort: 7445

kubeProxyReplacement: &lt;span class="nb"&gt;true
&lt;/span&gt;enableK8sEndpointSlice: &lt;span class="nb"&gt;true
&lt;/span&gt;localRedirectPolicy: &lt;span class="nb"&gt;true
&lt;/span&gt;healthChecking: &lt;span class="nb"&gt;true

&lt;/span&gt;bpf:
    masquerade: &lt;span class="nb"&gt;true
&lt;/span&gt;ipv4:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hostServices:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hostPort:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;nodePort:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;externalIPs:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hostFirewall:
    enabled: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;

cilium connectivity perf &lt;span class="nt"&gt;--tolerations&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nt"&gt;--namespace-labels&lt;/span&gt; pod-security.kubernetes.io/enforce&lt;span class="o"&gt;=&lt;/span&gt;privileged &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--helm-release-name&lt;/span&gt; cilium &lt;span class="nt"&gt;--udp&lt;/span&gt; &lt;span class="nt"&gt;--crr&lt;/span&gt; &lt;span class="nt"&gt;--samples&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-selector-client&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes.io/hostname=io-apps-bootstrap-1"&lt;/span&gt; &lt;span class="nt"&gt;--node-selector-server&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes.io/hostname=io-gpu-pruuzglzan18m9y8"&lt;/span&gt;


🔥 Network Performance Test Summary - NON COLOCATED NODES &lt;span class="o"&gt;(&lt;/span&gt;AWS-&amp;gt;ONPREM&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test            | Duration        | Min             | Mean            | Max             | P50             | P90             | P99             | Transaction rate OP/s
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 57µs            | 67.26µs         | 271µs           | 65µs            | 73µs            | 101µs           | 14822.78
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 23µs            | 33.84µs         | 201µs           | 33µs            | 36µs            | 49µs            | 29445.81
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 21µs            | 33.42µs         | 19.48ms         | 33µs            | 37µs            | 50µs            | 29795.95
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 67µs            | 87.43µs         | 349µs           | 86µs            | 99µs            | 128µs           | 11405.75
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 25µs            | 36.54µs         | 227µs           | 36µs            | 39µs            | 52µs            | 27270.72
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 21µs            | 30.35µs         | 215µs           | 30µs            | 31µs            | 46µs            | 32813.49
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 287.284ms       | 288.18447ms     | 294.926ms       | 285.312ms       | 289.375ms       | 295ms           | 3.37
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 143.513ms       | 146.13599ms     | 287.71ms        | 145.074ms       | 149.104ms       | 150ms           | 6.75
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 143.883ms       | 144.18403ms     | 144.613ms       | 144.927ms       | 148.985ms       | 149.855ms       | 6.90
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 287.289ms       | 287.86662ms     | 289.423ms       | 285ms           | 288.823ms       | 289.705ms       | 3.38
📋 host-to-host    | other-node | TCP_RR          | 10s             | 143.572ms       | 146.06574ms     | 288.202ms       | 145.074ms       | 149.104ms       | 150ms           | 6.75
📋 host-to-host    | other-node | UDP_RR          | 10s             | 143.486ms       | 144.03764ms     | 146.835ms       | 144.927ms       | 148.985ms       | 149.855ms       | 6.90
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 54µs            | 66.73µs         | 306µs           | 65µs            | 73µs            | 97µs            | 14935.31
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 23µs            | 33.89µs         | 177µs           | 33µs            | 36µs            | 50µs            | 29391.89
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 21µs            | 33.05µs         | 195µs           | 32µs            | 35µs            | 49µs            | 30139.94
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 68µs            | 85.73µs         | 366µs           | 86µs            | 92µs            | 118µs           | 11630.99
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 24µs            | 36.82µs         | 5.879ms         | 36µs            | 39µs            | 53µs            | 27060.31
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 22µs            | 31.34µs         | 21.676ms        | 30µs            | 38µs            | 48µs            | 31787.03
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 287.141ms       | 287.72271ms     | 288.986ms       | 285ms           | 288.823ms       | 289.705ms       | 3.38
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 143.716ms       | 146.31566ms     | 287.914ms       | 145.074ms       | 149.104ms       | 150ms           | 6.74
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 143.48ms        | 144.11599ms     | 149.036ms       | 144.927ms       | 148.985ms       | 149.855ms       | 6.90
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 287.004ms       | 287.91197ms     | 292.78ms        | 285.312ms       | 289.375ms       | 295ms           | 3.37
📋 host-to-host    | other-node | TCP_RR          | 10s             | 145.089ms       | 147.76345ms     | 290.773ms       | 145ms           | 149.09ms        | 150ms           | 6.67
📋 host-to-host    | other-node | UDP_RR          | 10s             | 145.184ms       | 145.58032ms     | 151.39ms        | 145.074ms       | 149.104ms       | 150ms           | 6.80
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 56µs            | 68.28µs         | 284µs           | 65µs            | 82µs            | 106µs           | 14600.08
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 22µs            | 34.21µs         | 236µs           | 33µs            | 37µs            | 50µs            | 29128.28
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 20µs            | 32.74µs         | 209µs           | 32µs            | 35µs            | 48µs            | 30413.30
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 67µs            | 85.69µs         | 367µs           | 85µs            | 92µs            | 119µs           | 11638.88
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 23µs            | 36.68µs         | 208µs           | 36µs            | 39µs            | 53µs            | 27172.15
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 20µs            | 30.66µs         | 3.662ms         | 30µs            | 32µs            | 46µs            | 32483.28
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 290.464ms       | 291.16426ms     | 292.808ms       | 295ms           | 298.823ms       | 299.705ms       | 3.40
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 145.063ms       | 148.08606ms     | 295.874ms       | 145ms           | 149.09ms        | 150ms           | 6.66
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 144.919ms       | 145.82088ms     | 149.155ms       | 145ms           | 148.97ms        | 149.852ms       | 6.80
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 287.323ms       | 287.86053ms     | 289.85ms        | 285ms           | 288.823ms       | 289.705ms       | 3.37
📋 host-to-host    | other-node | TCP_RR          | 10s             | 143.508ms       | 146.03766ms     | 288.42ms        | 145.074ms       | 149.104ms       | 150ms           | 6.75
📋 host-to-host    | other-node | UDP_RR          | 10s             | 143.523ms       | 143.91538ms     | 145.713ms       | 144.927ms       | 148.985ms       | 149.855ms       | 6.90
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test               | Duration        | Throughput Mb/s
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 12950.39
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 2092.67
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 48431.66
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 7942.53
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 20550.47
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 1442.80
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 79811.91
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 5397.83
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 105.28
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 351.63
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 380.95
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 496.31
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 104.16
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 399.63
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 430.81
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 536.22
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 12556.70
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 2071.79
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 48310.76
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 7941.07
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 20379.97
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 1410.00
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 79800.63
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 5430.72
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 103.53
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 325.66
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 413.59
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 480.75
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 119.24
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 413.51
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 383.96
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 437.77
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 12784.47
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 2079.99
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 49080.75
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 7867.70
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 20745.23
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 1432.03
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 79912.94
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 5489.68
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 101.82
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 329.69
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 407.10
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 470.26
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 107.29
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 404.08
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 405.85
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 567.32
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;

cilium connectivity perf &lt;span class="nt"&gt;--tolerations&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nt"&gt;--namespace-labels&lt;/span&gt; pod-security.kubernetes.io/enforce&lt;span class="o"&gt;=&lt;/span&gt;privileged &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--helm-release-name&lt;/span&gt; cilium &lt;span class="nt"&gt;--udp&lt;/span&gt; &lt;span class="nt"&gt;--crr&lt;/span&gt; &lt;span class="nt"&gt;--samples&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-selector-client&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes.io/hostname=io-apps-bootstrap-1"&lt;/span&gt; &lt;span class="nt"&gt;--node-selector-server&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes.io/hostname=io-controlplane-1"&lt;/span&gt;


🔥 Network Performance Test Summary - COLOCATED NODES &lt;span class="o"&gt;(&lt;/span&gt;AWS-&amp;gt;AWS&lt;span class="o"&gt;)&lt;/span&gt;:

&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test            | Duration        | Min             | Mean            | Max             | P50             | P90             | P99             | Transaction rate OP/s
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 93µs            | 153.92µs        | 11.063ms        | 136µs           | 203µs           | 442µs           | 6473.15
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 31µs            | 50.49µs         | 29.252ms        | 46µs            | 62µs            | 121µs           | 19644.46
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 30µs            | 49.49µs         | 6.326ms         | 48µs            | 63µs            | 123µs           | 20042.63
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 99µs            | 157.13µs        | 11.857ms        | 139µs           | 203µs           | 437µs           | 6340.76
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 34µs            | 51.88µs         | 6.772ms         | 49µs            | 65µs            | 124µs           | 19121.01
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 33µs            | 51.49µs         | 6.29ms          | 47µs            | 64µs            | 128µs           | 19250.97
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 805µs           | 1.42404ms       | 1.031978s       | 1.064ms         | 1.333ms         | 2.533ms         | 701.76
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 305µs           | 509.3µs         | 17.769ms        | 449µs           | 608µs           | 1.67ms          | 1961.27
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 296µs           | 486.33µs        | 25.148ms        | 433µs           | 565µs           | 1.495ms         | 2053.80
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 717µs           | 1.11209ms       | 19.207ms        | 997µs           | 1.348ms         | 3.325ms         | 898.50
📋 host-to-host    | other-node | TCP_RR          | 10s             | 273µs           | 441.12µs        | 10.452ms        | 406µs           | 514µs           | 1.147ms         | 2264.47
📋 host-to-host    | other-node | UDP_RR          | 10s             | 270µs           | 427.96µs        | 16.015ms        | 394µs           | 504µs           | 1.031ms         | 2333.99
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 94µs            | 150.7µs         | 11.579ms        | 134µs           | 198µs           | 417µs           | 6610.99
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 31µs            | 48.94µs         | 13.847ms        | 46µs            | 61µs            | 123µs           | 20268.63
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 30µs            | 49.32µs         | 16.98ms         | 47µs            | 63µs            | 139µs           | 20120.57
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 101µs           | 158.66µs        | 20.761ms        | 143µs           | 205µs           | 438µs           | 6280.68
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 31µs            | 52.56µs         | 32.051ms        | 49µs            | 65µs            | 124µs           | 18856.14
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 30µs            | 52.06µs         | 18.874ms        | 47µs            | 63µs            | 132µs           | 19055.07
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 778µs           | 1.14343ms       | 14.77ms         | 1.065ms         | 1.385ms         | 2.584ms         | 873.62
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 300µs           | 473.68µs        | 18.28ms         | 439µs           | 559µs           | 1.154ms         | 2108.58
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 305µs           | 468.16µs        | 12.304ms        | 431µs           | 550µs           | 1.155ms         | 2133.48
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 679µs           | 1.01739ms       | 20.003ms        | 948µs           | 1.203ms         | 2.3ms           | 982.19
📋 host-to-host    | other-node | TCP_RR          | 10s             | 290µs           | 451.32µs        | 8.471ms         | 400µs           | 529µs           | 1.5ms           | 2213.05
📋 host-to-host    | other-node | UDP_RR          | 10s             | 275µs           | 438.89µs        | 48.143ms        | 392µs           | 512µs           | 1.178ms         | 2275.78
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 91µs            | 154.91µs        | 9.155ms         | 136µs           | 206µs           | 463µs           | 6433.15
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 31µs            | 49.53µs         | 11.302ms        | 46µs            | 62µs            | 119µs           | 20018.91
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 30µs            | 50.66µs         | 6.199ms         | 48µs            | 64µs            | 127µs           | 19585.08
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 97µs            | 161.53µs        | 19.182ms        | 140µs           | 199µs           | 432µs           | 6169.06
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 33µs            | 51.89µs         | 9.37ms          | 49µs            | 64µs            | 120µs           | 19116.27
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 33µs            | 51.96µs         | 22.041ms        | 47µs            | 65µs            | 133µs           | 19078.59
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 785µs           | 1.17809ms       | 14.512ms        | 1.088ms         | 1.45ms          | 2.93ms          | 848.22
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 290µs           | 491.34µs        | 23.311ms        | 445µs           | 574µs           | 1.358ms         | 2032.95
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 316µs           | 530.1µs         | 41.495ms        | 455µs           | 606µs           | 1.642ms         | 1884.38
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 714µs           | 1.06433ms       | 31.965ms        | 963µs           | 1.266ms         | 2.766ms         | 938.79
📋 host-to-host    | other-node | TCP_RR          | 10s             | 279µs           | 443.92µs        | 18.193ms        | 402µs           | 516µs           | 1.246ms         | 2249.97
📋 host-to-host    | other-node | UDP_RR          | 10s             | 268µs           | 420.99µs        | 24.753ms        | 389µs           | 495µs           | 1.05ms          | 2372.48
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test               | Duration        | Throughput Mb/s
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 7450.22
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 845.16
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 11222.14
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 849.59
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 18252.45
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 775.24
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 23872.71
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 871.62
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1474.15
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 322.32
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 1547.00
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 472.29
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 1844.26
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 394.33
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 1816.71
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 546.08
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 8057.11
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 717.48
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 11106.95
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 754.54
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 17977.78
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 756.63
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 24229.15
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 843.12
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1641.48
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 349.99
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 1595.12
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 463.93
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 1769.24
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 410.02
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 1833.89
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 574.86
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 7852.78
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 739.25
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 11195.04
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 870.62
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 18268.06
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 783.99
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 23901.10
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 751.55
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1507.56
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 328.49
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 1498.47
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 442.88
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 1746.56
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 431.39
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 1821.09
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 574.39
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;



&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
KubeSpan enabled. Native routing enabled.


k8sServiceHost: localhost
k8sServicePort: 7445

kubeProxyReplacement: &lt;span class="nb"&gt;true
&lt;/span&gt;enableK8sEndpointSlice: &lt;span class="nb"&gt;true
&lt;/span&gt;localRedirectPolicy: &lt;span class="nb"&gt;true
&lt;/span&gt;healthChecking: &lt;span class="nb"&gt;true
&lt;/span&gt;routingMode: native
ipv4NativeRoutingCIDR: &lt;span class="s2"&gt;"10.244.0.0/16"&lt;/span&gt;

bpf:
    masquerade: &lt;span class="nb"&gt;true
    &lt;/span&gt;hostLegacyRouting: &lt;span class="nb"&gt;true
&lt;/span&gt;ipv4:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hostServices:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hostPort:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;nodePort:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;externalIPs:
    enabled: &lt;span class="nb"&gt;true
&lt;/span&gt;hostFirewall:
    enabled: &lt;span class="nb"&gt;true

&lt;/span&gt;A hack to advertise the pod CIDR over KubeSpan with advertiseKubernetesNetworks: &lt;span class="nb"&gt;true&lt;/span&gt;:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cilium-host-node-cidr
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cilium-host-node-cidr
  template:
    metadata:
      name: cilium-host-node-cidr
      labels:
        app: cilium-host-node-cidr
    spec:
      hostNetwork: &lt;span class="nb"&gt;true
      &lt;/span&gt;tolerations:
      - key: &lt;span class="s2"&gt;"node-role.kubernetes.io/master"&lt;/span&gt;
        operator: Exists
      - key: &lt;span class="s2"&gt;"node-role.kubernetes.io/control-plane"&lt;/span&gt;
        operator: Exists
      containers:
      - name: cilium-host-node-cidr
        image: alpine
        imagePullPolicy: Always
        &lt;span class="nb"&gt;command&lt;/span&gt;:
        - /bin/sh
        - &lt;span class="nt"&gt;-c&lt;/span&gt;
        - |
          apk update
          apk add iproute2

          handle_error&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SLEEP_TIME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="o"&gt;}&lt;/span&gt;

          &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Watching cilium_host IP addresses..."&lt;/span&gt;

          &lt;span class="k"&gt;while&lt;/span&gt; :&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
            &lt;span class="c"&gt;# Extract all IPv4 addresses from cilium_host&lt;/span&gt;
            &lt;span class="nv"&gt;ip_addresses&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;ip &lt;span class="nt"&gt;-4&lt;/span&gt; addr show dev cilium_host |grep inet | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $2}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

            &lt;span class="c"&gt;# Check if any of the IP addresses match the NODE_CIDR_MASK_SIZE&lt;/span&gt;
            &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_addresses&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s2"&gt;"/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_CIDR_MASK_SIZE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

              &lt;span class="c"&gt;# Extract the /32 IP address if NODE_CIDR_MASK_SIZE was not found&lt;/span&gt;
              &lt;span class="nv"&gt;pod_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$ip_addresses&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"/32"&lt;/span&gt; | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;/ &lt;span class="nt"&gt;-f1&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

              &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$pod_ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
                &lt;/span&gt;handle_error &lt;span class="s2"&gt;"Couldn't extract cilium pod IP address from cilium_host interface"&lt;/span&gt;
                &lt;span class="k"&gt;continue
              fi&lt;/span&gt;

              &lt;span class="c"&gt;# Add secondary IP address with the proper NODE_CIDR_MASK_SIZE&lt;/span&gt;
              &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"cilium_host IP is &lt;/span&gt;&lt;span class="nv"&gt;$pod_ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
              ip addr add &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;pod_ip&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_CIDR_MASK_SIZE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; dev cilium_host

              &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Added new cilium_host IP address with mask /&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NODE_CIDR_MASK_SIZE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
              ip addr show dev cilium_host
            &lt;span class="o"&gt;}&lt;/span&gt;

            &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SLEEP_TIME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
          &lt;span class="k"&gt;done
        &lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt;:
        &lt;span class="c"&gt;# The node cidr mask size (IPv4) to allocate pod IPs&lt;/span&gt;
        - name: NODE_CIDR_MASK_SIZE
          value: &lt;span class="s2"&gt;"24"&lt;/span&gt;
        - name: SLEEP_TIME
          value: &lt;span class="s2"&gt;"30"&lt;/span&gt;
        securityContext:
          capabilities:
            add: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"NET_ADMIN"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;

cilium connectivity perf &lt;span class="nt"&gt;--tolerations&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="nt"&gt;--namespace-labels&lt;/span&gt; pod-security.kubernetes.io/enforce&lt;span class="o"&gt;=&lt;/span&gt;privileged &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--helm-release-name&lt;/span&gt; cilium &lt;span class="nt"&gt;--udp&lt;/span&gt; &lt;span class="nt"&gt;--crr&lt;/span&gt; &lt;span class="nt"&gt;--samples&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-selector-client&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes.io/hostname=io-apps-bootstrap-1"&lt;/span&gt; &lt;span class="nt"&gt;--node-selector-server&lt;/span&gt; &lt;span class="s2"&gt;"kubernetes.io/hostname=io-gpu-8tv12b3n8mss73a5"&lt;/span&gt;

🔥 Network Performance Test Summary - NON COLOCATED NODES &lt;span class="o"&gt;(&lt;/span&gt;AWS-&amp;gt;ONPREM&lt;span class="o"&gt;)&lt;/span&gt;:

&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test            | Duration        | Min             | Mean            | Max             | P50             | P90             | P99             | Transaction rate OP/s
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 58µs            | 67.04µs         | 306µs           | 65µs            | 73µs            | 98µs            | 14866.02
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 17µs            | 33.9µs          | 544µs           | 33µs            | 37µs            | 50µs            | 29380.88
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 18µs            | 32.77µs         | 181µs           | 32µs            | 35µs            | 48µs            | 30398.39
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 67µs            | 85.55µs         | 381µs           | 85µs            | 93µs            | 121µs           | 11661.15
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 23µs            | 36.35µs         | 618µs           | 36µs            | 40µs            | 53µs            | 27401.58
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 21µs            | 30.9µs          | 210µs           | 30µs            | 32µs            | 46µs            | 32233.90
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 264.6ms         | 265.32654ms     | 266.027ms       | 264.864ms       | 268.918ms       | 269.729ms       | 3.67
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 132.279ms       | 134.5613ms      | 264.645ms       | 135.068ms       | 139.041ms       | 140ms           | 7.33
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 132.226ms       | 132.69796ms     | 133.179ms       | 134.933ms       | 138.933ms       | 139.866ms       | 7.50
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 264.521ms       | 265.27635ms     | 269.059ms       | 264.864ms       | 268.918ms       | 269.729ms       | 3.67
📋 host-to-host    | other-node | TCP_RR          | 10s             | 132.466ms       | 134.62891ms     | 264.991ms       | 135.068ms       | 139.041ms       | 140ms           | 7.33
📋 host-to-host    | other-node | UDP_RR          | 10s             | 132.281ms       | 132.93875ms     | 141.145ms       | 135ms           | 139.054ms       | 140ms           | 7.50
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 58µs            | 67.35µs         | 3.463ms         | 65µs            | 73µs            | 101µs           | 14799.03
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 19µs            | 33.57µs         | 257µs           | 33µs            | 36µs            | 49µs            | 29675.29
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 18µs            | 32.75µs         | 39.239ms        | 32µs            | 35µs            | 48µs            | 30418.80
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 67µs            | 87.48µs         | 29.116ms        | 86µs            | 97µs            | 128µs           | 11402.00
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 22µs            | 36.17µs         | 215µs           | 36µs            | 39µs            | 52µs            | 27552.41
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 22µs            | 31.26µs         | 192µs           | 30µs            | 35µs            | 47µs            | 31862.74
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 264.337ms       | 265.24054ms     | 265.849ms       | 264.864ms       | 268.918ms       | 269.729ms       | 3.67
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 132.212ms       | 134.49454ms     | 265.236ms       | 135.068ms       | 139.041ms       | 140ms           | 7.34
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 132.263ms       | 132.72049ms     | 134.5ms         | 134.933ms       | 138.933ms       | 139.866ms       | 7.50
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 264.701ms       | 265.3847ms      | 266.411ms       | 264.864ms       | 268.918ms       | 269.729ms       | 3.67
📋 host-to-host    | other-node | TCP_RR          | 10s             | 132.197ms       | 134.60853ms     | 265.78ms        | 135.068ms       | 139.041ms       | 140ms           | 7.33
📋 host-to-host    | other-node | UDP_RR          | 10s             | 132.154ms       | 132.91133ms     | 148.356ms       | 135ms           | 139.054ms       | 140ms           | 7.50
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 59µs            | 69.47µs         | 311µs           | 66µs            | 80µs            | 118µs           | 14344.16
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 19µs            | 33.46µs         | 229µs           | 33µs            | 36µs            | 48µs            | 29773.55
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 20µs            | 32.65µs         | 214µs           | 32µs            | 35µs            | 48µs            | 30515.23
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 68µs            | 85.89µs         | 537µs           | 85µs            | 93µs            | 123µs           | 11611.06
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 25µs            | 36.42µs         | 192µs           | 36µs            | 39µs            | 52µs            | 27371.92
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 22µs            | 30.92µs         | 297µs           | 30µs            | 33µs            | 46µs            | 32213.09
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 265.235ms       | 266.09854ms     | 269.077ms       | 264.864ms       | 268.918ms       | 269.729ms       | 3.66
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 132.446ms       | 134.7497ms      | 265.69ms        | 135.068ms       | 139.041ms       | 140ms           | 7.32
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 132.496ms       | 133.04328ms     | 134.042ms       | 134.933ms       | 138.933ms       | 139.866ms       | 7.50
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 265.256ms       | 265.82441ms     | 268.172ms       | 264.864ms       | 268.918ms       | 269.729ms       | 3.66
📋 host-to-host    | other-node | TCP_RR          | 10s             | 132.648ms       | 134.87827ms     | 265.483ms       | 135.068ms       | 139.041ms       | 140ms           | 7.32
📋 host-to-host    | other-node | UDP_RR          | 10s             | 132.486ms       | 133.12277ms     | 139.521ms       | 134.933ms       | 138.933ms       | 139.866ms       | 7.50
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test               | Duration        | Throughput Mb/s
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 12374.78
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 1922.25
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 46342.63
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 7505.86
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 20529.12
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 1393.09
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 79555.81
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 5320.73
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 130.48
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 403.64
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 462.46
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 522.04
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 117.54
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 423.49
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 491.17
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 548.12
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 12309.88
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 1926.65
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 46042.69
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 7527.96
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 20760.76
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 1373.03
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 79836.82
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 5281.56
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 127.08
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 376.28
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 297.87
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 525.94
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 119.80
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 431.03
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 458.48
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 560.72
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 12186.17
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 1931.67
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 45567.46
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 7430.05
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 20255.60
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 1385.20
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 79924.62
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 5241.39
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 117.64
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 376.95
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 488.13
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 542.54
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 117.60
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 408.21
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 471.71
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 583.24
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;



🔥 Network Performance Test Summary - COLOCATED NODES &lt;span class="o"&gt;(&lt;/span&gt;AWS-&amp;gt;AWS&lt;span class="o"&gt;)&lt;/span&gt;:

&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test            | Duration        | Min             | Mean            | Max             | P50             | P90             | P99             | Transaction rate OP/s
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 94µs            | 151.37µs        | 16.201ms        | 134µs           | 195µs           | 406µs           | 6581.04
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 31µs            | 50.48µs         | 6.289ms         | 48µs            | 63µs            | 130µs           | 19649.36
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 30µs            | 49.74µs         | 5.088ms         | 47µs            | 63µs            | 127µs           | 19931.64
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 101µs           | 160.46µs        | 5.724ms         | 145µs           | 206µs           | 455µs           | 6210.43
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 32µs            | 49.27µs         | 21.005ms        | 44µs            | 62µs            | 121µs           | 20122.15
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 31µs            | 49.06µs         | 4.848ms         | 45µs            | 62µs            | 122µs           | 20200.67
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 762µs           | 1.15455ms       | 18.837ms        | 1.057ms         | 1.413ms         | 2.842ms         | 865.36
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 305µs           | 466.07µs        | 7.793ms         | 435µs           | 536µs           | 1.211ms         | 2142.90
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 299µs           | 451.41µs        | 16.472ms        | 418µs           | 526µs           | 1.162ms         | 2212.38
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 714µs           | 1.11194ms       | 8.506ms         | 996µs           | 1.361ms         | 3.5ms           | 898.60
📋 host-to-host    | other-node | TCP_RR          | 10s             | 295µs           | 461.86µs        | 7.879ms         | 416µs           | 538µs           | 1.471ms         | 2162.46
📋 host-to-host    | other-node | UDP_RR          | 10s             | 292µs           | 430.47µs        | 12.183ms        | 400µs           | 501µs           | 1.091ms         | 2320.17
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 95µs            | 158.23µs        | 12.025ms        | 135µs           | 202µs           | 470µs           | 6298.03
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 31µs            | 50.67µs         | 4.959ms         | 49µs            | 64µs            | 118µs           | 19571.78
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 30µs            | 48.64µs         | 5.806ms         | 47µs            | 62µs            | 116µs           | 20379.56
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 107µs           | 162.46µs        | 13.512ms        | 143µs           | 203µs           | 543µs           | 6134.84
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 31µs            | 49.95µs         | 9.565ms         | 46µs            | 64µs            | 116µs           | 19844.92
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 32µs            | 53.8µs          | 9.315ms         | 50µs            | 68µs            | 140µs           | 18435.33
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 779µs           | 1.10845ms       | 7.736ms         | 1.032ms         | 1.328ms         | 2.68ms          | 901.38
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 308µs           | 453.37µs        | 6.071ms         | 419µs           | 526µs           | 1.196ms         | 2203.14
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 303µs           | 473.46µs        | 19.026ms        | 419µs           | 543µs           | 1.704ms         | 2109.48
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 739µs           | 1.08138ms       | 14.969ms        | 993µs           | 1.326ms         | 2.566ms         | 923.96
📋 host-to-host    | other-node | TCP_RR          | 10s             | 293µs           | 429.17µs        | 11.309ms        | 396µs           | 491µs           | 1.155ms         | 2327.37
📋 host-to-host    | other-node | UDP_RR          | 10s             | 288µs           | 429.47µs        | 8.627ms         | 398µs           | 493µs           | 1.088ms         | 2325.36
📋 pod-to-pod      | same-node  | TCP_CRR         | 10s             | 95µs            | 159.08µs        | 19.388ms        | 135µs           | 198µs           | 479µs           | 6263.89
📋 pod-to-pod      | same-node  | TCP_RR          | 10s             | 31µs            | 50.82µs         | 14.357ms        | 48µs            | 63µs            | 126µs           | 19507.35
📋 pod-to-pod      | same-node  | UDP_RR          | 10s             | 30µs            | 48.42µs         | 6.859ms         | 46µs            | 61µs            | 115µs           | 20470.00
📋 host-to-host    | same-node  | TCP_CRR         | 10s             | 98µs            | 166.47µs        | 16.335ms        | 144µs           | 204µs           | 506µs           | 5986.98
📋 host-to-host    | same-node  | TCP_RR          | 10s             | 32µs            | 48.7µs          | 5.069ms         | 45µs            | 62µs            | 110µs           | 20346.95
📋 host-to-host    | same-node  | UDP_RR          | 10s             | 32µs            | 49.25µs         | 4.462ms         | 45µs            | 62µs            | 123µs           | 20121.02
📋 pod-to-pod      | other-node | TCP_CRR         | 10s             | 756µs           | 1.16556ms       | 12.169ms        | 1.052ms         | 1.434ms         | 3.622ms         | 857.29
📋 pod-to-pod      | other-node | TCP_RR          | 10s             | 308µs           | 474.68µs        | 13.619ms        | 421µs           | 535µs           | 1.804ms         | 2104.15
📋 pod-to-pod      | other-node | UDP_RR          | 10s             | 305µs           | 452.77µs        | 12.858ms        | 420µs           | 526µs           | 1.15ms          | 2205.79
📋 host-to-host    | other-node | TCP_CRR         | 10s             | 731µs           | 1.06169ms       | 9.363ms         | 980µs           | 1.257ms         | 2.792ms         | 940.73
📋 host-to-host    | other-node | TCP_RR          | 10s             | 289µs           | 440.75µs        | 10.215ms        | 403µs           | 506µs           | 1.168ms         | 2265.83
📋 host-to-host    | other-node | UDP_RR          | 10s             | 298µs           | 446.79µs        | 26.481ms        | 412µs           | 515µs           | 1.081ms         | 2235.16
&lt;span class="nt"&gt;--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------&lt;/span&gt;
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 Scenario        | Node       | Test               | Duration        | Throughput Mb/s
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 8197.07
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 739.78
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 11374.82
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 1033.50
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 16969.26
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 732.87
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 23484.88
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 810.38
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1526.48
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 364.91
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 1680.45
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 518.74
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 1801.35
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 420.75
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 1809.80
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 428.54
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 8220.26
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 888.74
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 11466.74
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 845.84
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 17271.49
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 646.52
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 23601.75
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 881.83
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1563.01
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 346.93
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 1693.54
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 512.22
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 1877.99
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 393.55
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 1851.96
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 545.90
📋 pod-to-pod      | same-node  | TCP_STREAM         | 10s             | 8175.70
📋 pod-to-pod      | same-node  | UDP_STREAM         | 10s             | 874.48
📋 pod-to-pod      | same-node  | TCP_STREAM_MULTI   | 10s             | 11698.93
📋 pod-to-pod      | same-node  | UDP_STREAM_MULTI   | 10s             | 855.51
📋 host-to-host    | same-node  | TCP_STREAM         | 10s             | 17208.02
📋 host-to-host    | same-node  | UDP_STREAM         | 10s             | 709.62
📋 host-to-host    | same-node  | TCP_STREAM_MULTI   | 10s             | 23487.27
📋 host-to-host    | same-node  | UDP_STREAM_MULTI   | 10s             | 679.89
📋 pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1553.33
📋 pod-to-pod      | other-node | UDP_STREAM         | 10s             | 360.14
📋 pod-to-pod      | other-node | TCP_STREAM_MULTI   | 10s             | 1712.79
📋 pod-to-pod      | other-node | UDP_STREAM_MULTI   | 10s             | 524.76
📋 host-to-host    | other-node | TCP_STREAM         | 10s             | 1813.94
📋 host-to-host    | other-node | UDP_STREAM         | 10s             | 436.07
📋 host-to-host    | other-node | TCP_STREAM_MULTI   | 10s             | 1803.77
📋 host-to-host    | other-node | UDP_STREAM_MULTI   | 10s             | 539.33
&lt;span class="nt"&gt;----------------------------------------------------------------------------------------&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Configuration 1 – Standard Cilium (VXLAN)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-node latency: ~287ms (TCP_CRR), ~144ms (TCP_RR)&lt;/li&gt;
&lt;li&gt;Cross-node throughput: 105–430 Mb/s&lt;/li&gt;
&lt;li&gt;Same-node performance: Excellent (14–29k ops/s, 12–79 Gb/s throughput)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configuration 2 – Native Routing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-node latency: ~265ms (TCP_CRR), ~134ms (TCP_RR)&lt;/li&gt;
&lt;li&gt;Cross-node throughput: 117–583 Mb/s (modest but noticeable improvements)&lt;/li&gt;
&lt;li&gt;Same-node performance: Comparable to VXLAN setup&lt;/li&gt;
&lt;/ul&gt;
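&lt;p&gt;As a sanity check, the cross-node summaries above can be recomputed from the raw rows with a small shell filter. The row format is an assumption based on the table printed earlier, and &lt;code&gt;avg_cross_node_tcp&lt;/code&gt; is a hypothetical helper, not part of the benchmark tooling:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical helper: average the single-stream cross-node TCP throughput
# (Mb/s) from rows shaped like the table above:
#   scenario | locality | test | duration | Mb/s
avg_cross_node_tcp() {
  grep 'other-node' | grep 'TCP_STREAM ' |
    awk -F'|' '{gsub(/ /, "", $5); sum += $5; n++}
               END { if (n) printf "%.2f\n", sum / n }'
}

# Feed it a couple of sample rows copied from the first run:
printf '%s\n' \
  'pod-to-pod      | other-node | TCP_STREAM         | 10s             | 1526.48' \
  'host-to-host    | other-node | TCP_STREAM         | 10s             | 1801.35' \
  | avg_cross_node_tcp
```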

&lt;p&gt;&lt;strong&gt;4. Key Observations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Latency: Native routing consistently shaved off 7–20ms across nodes.&lt;/p&gt;

&lt;p&gt;Throughput: Gains were modest, but improvements were more visible in UDP scenarios.&lt;/p&gt;

&lt;p&gt;Simplicity: Removing VXLAN reduces encapsulation overhead and makes the datapath more transparent.&lt;/p&gt;
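&lt;p&gt;For context, switching Cilium from VXLAN to native routing is a Helm values change. A minimal sketch, assuming a recent Cilium release (1.14+) and with &lt;code&gt;10.0.0.0/8&lt;/code&gt; as a placeholder for your pod-reachable CIDR — verify the exact keys against the Cilium docs for your version:&lt;/p&gt;

```yaml
# Sketch of Cilium Helm values for native routing; the CIDR is a placeholder.
routingMode: native            # instead of the default tunnel (VXLAN) mode
autoDirectNodeRoutes: true     # install node routes when nodes share an L2 segment
ipv4NativeRoutingCIDR: 10.0.0.0/8
```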

&lt;p&gt;&lt;strong&gt;5. Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Native routing in Cilium does provide measurable improvements in hybrid setups: lower latency, slightly better throughput, and a cleaner datapath.&lt;/p&gt;

&lt;p&gt;That said, the improvements are incremental rather than game-changing. Given the complexity of the workaround required, I don’t consider it production-ready for now.&lt;/p&gt;

&lt;p&gt;The good news is that the Sidero community is actively enhancing KubeSpan, and future releases may support native routing out of the box. If that happens, we’ll be able to combine the security and NAT traversal of KubeSpan with the performance benefits of native routing, without custom hacks.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>networking</category>
      <category>aws</category>
      <category>linux</category>
    </item>
    <item>
      <title>Hybrid k8s cluster | Talos &amp; Kubespan | Kilo wireguard</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Tue, 25 Mar 2025 08:00:32 +0000</pubDate>
      <link>https://dev.to/bnovickovs/hybrid-k8s-cluster-talos-kubespan-kilo-wireguard-1f45</link>
      <guid>https://dev.to/bnovickovs/hybrid-k8s-cluster-talos-kubespan-kilo-wireguard-1f45</guid>
      <description>&lt;p&gt;I was interested in trying out a hybrid Kubernetes setup and exploring some use cases.&lt;/p&gt;

&lt;p&gt;This setup involves creating three control plane (master) nodes in the cloud (AWS) and booting Talos worker nodes on-premises, then connecting them to the master nodes in the cloud.&lt;/p&gt;

&lt;p&gt;When it comes to on-premises worker nodes, you have the flexibility to choose any solution that fits your needs. Whether you're using a hypervisor or even local QEMU virtual machines, the choice is entirely up to you. In this guide, I will be using Proxmox because it makes things easier for me.&lt;/p&gt;

&lt;p&gt;Repository: &lt;a href="https://github.com/kubebn/aws-talos-terraform-hybrid" rel="noopener noreferrer"&gt;https://github.com/kubebn/aws-talos-terraform-hybrid&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS master nodes provisioning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Set up variables in &lt;code&gt;vars/dev.tfvars&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run Terraform
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vars/dev.tfvars &lt;span class="nt"&gt;-auto-approve&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;local_file&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubeconfig&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creating&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;
&lt;span class="nx"&gt;local_file&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kubeconfig&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Creation&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt; &lt;span class="nx"&gt;after&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;d3443f0dfed1dbbf0e71f99dfbf0684dc1ca8b95&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nx"&gt;Apply&lt;/span&gt; &lt;span class="nx"&gt;complete&lt;/span&gt;&lt;span class="err"&gt;!&lt;/span&gt; &lt;span class="nx"&gt;Resources&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;25&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;changed&lt;/span&gt;&lt;span class="err"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="nx"&gt;destroyed&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;

&lt;span class="nx"&gt;Outputs&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;

&lt;span class="nx"&gt;control_plane_private_ips&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tolist&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"192.168.1.135"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"192.168.2.122"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"192.168.0.157"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
&lt;span class="nx"&gt;control_plane_public_ips&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;tolist&lt;/span&gt;&lt;span class="err"&gt;(&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"18.184.164.166"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"3.122.238.249"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"3.77.57.73"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install &lt;code&gt;talosctl&lt;/code&gt; and &lt;code&gt;kubectl&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://talos.dev/install | sh
curl &lt;span class="nt"&gt;-LO&lt;/span&gt; &lt;span class="s2"&gt;"https://dl.k8s.io/release/&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-L&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; https://dl.k8s.io/release/stable.txt&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;/bin/linux/amd64/kubectl"&lt;/span&gt;
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; root &lt;span class="nt"&gt;-g&lt;/span&gt; root &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 kubectl /usr/local/bin/kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Export the &lt;code&gt;kubeconfig&lt;/code&gt; and &lt;code&gt;talosconfig&lt;/code&gt; paths. Both files are generated in the folder where the &lt;code&gt;terraform apply&lt;/code&gt; command was executed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TALOSCONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/talosconfig"&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBECONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PWD&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/kubeconfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, you will see that the master nodes are ready, Kubespan is up, and Cilium is fully installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get node &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP        OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
aws-controlplane-1   Ready    control-plane   2m25s   v1.32.3   192.168.1.135    18.184.164.166     Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
aws-controlplane-2   Ready    control-plane   2m36s   v1.32.3   192.168.2.122    3.122.238.249      Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
aws-controlplane-3   Ready    control-plane   2m24s   v1.32.3   192.168.0.157    3.77.57.73         Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3

kubectl get po &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE     NAME                                              READY   STATUS      RESTARTS        AGE
kube-system   cilium-599jz                                      1/1     Running     0               80s
kube-system   cilium-5j6wl                                      1/1     Running     0               80s
kube-system   cilium-fkkwv                                      1/1     Running     0               80s
kube-system   cilium-install-tkfrf                              0/1     Completed   0               111s
kube-system   cilium-operator-657bdd678b-lxblc                  1/1     Running     0               80s
kube-system   coredns-578d4f8ffc-5lqfm                          1/1     Running     0               111s
kube-system   coredns-578d4f8ffc-n4hwz                          1/1     Running     0               111s
kube-system   kube-apiserver-aws-controlplane-1                 1/1     Running     0               79s
kube-system   kube-apiserver-aws-controlplane-2                 1/1     Running     0               107s
kube-system   kube-apiserver-aws-controlplane-3                 1/1     Running     0               80s
kube-system   kube-controller-manager-aws-controlplane-1        1/1     Running     2 &lt;span class="o"&gt;(&lt;/span&gt;2m11s ago&lt;span class="o"&gt;)&lt;/span&gt;   79s
kube-system   kube-controller-manager-aws-controlplane-2        1/1     Running     0               107s
kube-system   kube-controller-manager-aws-controlplane-3        1/1     Running     0               80s
kube-system   kube-scheduler-aws-controlplane-1                 1/1     Running     2 &lt;span class="o"&gt;(&lt;/span&gt;2m11s ago&lt;span class="o"&gt;)&lt;/span&gt;   79s
kube-system   kube-scheduler-aws-controlplane-2                 1/1     Running     0               107s
kube-system   kube-scheduler-aws-controlplane-3                 1/1     Running     0               80s
kube-system   talos-cloud-controller-manager-599fddb46d-9mmdk   1/1     Running     0               111s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Important Notes for Talos Machine Configuration on Master Nodes
&lt;/h2&gt;

&lt;p&gt;We want to filter the AWS private VPC subnet out of KubeSpan, as the on-premises workers won't be aware of it anyway.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;kubespan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
            &lt;span class="na"&gt;filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0/0&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;!${vpc_subnet}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have also configured both kubelet and etcd to use the internal subnet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;kubelet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;nodeIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;validSubnets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${vpc_subnet}&lt;/span&gt;
    &lt;span class="na"&gt;etcd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;extraArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;election-timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5000"&lt;/span&gt;
          &lt;span class="na"&gt;heartbeat-interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1000"&lt;/span&gt;
        &lt;span class="na"&gt;advertisedSubnets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;${vpc_subnet}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have added the kubelet &lt;code&gt;extraArgs&lt;/code&gt; for certificate rotation; Talos CCM will handle the certificate approvals. You can find &lt;a href="https://github.com/siderolabs/talos-cloud-controller-manager?tab=readme-ov-file#controllers" rel="noopener noreferrer"&gt;more details here.&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;kubelet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;defaultRuntimeSeccompProfileEnabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;registerWithFQDN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;extraArgs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cloud-provider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external&lt;/span&gt;
          &lt;span class="na"&gt;rotate-server-certificates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
    &lt;span class="na"&gt;externalCloudProvider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;manifests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://raw.githubusercontent.com/siderolabs/talos-cloud-controller-manager/main/docs/deploy/cloud-controller-manager.yml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  DHCP/TFTP configuration in Proxmox
&lt;/h2&gt;

&lt;p&gt;We have set up networking for virtual machines and LXC containers as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LAN - 10.1.1.0/24&lt;/li&gt;
&lt;li&gt;Proxmox node - 10.1.1.1&lt;/li&gt;
&lt;li&gt;DHCP/TFTP LXC container - 10.1.1.2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inside the container, we will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install curl and Docker&lt;/li&gt;
&lt;li&gt;Download vmlinuz and initramfs.xz from the Talos release repository&lt;/li&gt;
&lt;li&gt;Copy the matchbox contents into the LXC container
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pct create 1000 &lt;span class="nb"&gt;local&lt;/span&gt;:vztmpl/ubuntu-24.10-standard_24.10-1_amd64.tar.zst &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--hostname&lt;/span&gt; net-lxc &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--net0&lt;/span&gt; &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;eth0,bridge&lt;span class="o"&gt;=&lt;/span&gt;vmbr0,ip&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.2/24,gw&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nameserver&lt;/span&gt; 1.1.1.1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--features&lt;/span&gt; &lt;span class="nv"&gt;keyctl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1,nesting&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--storage&lt;/span&gt; local-lvm &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--rootfs&lt;/span&gt; &lt;span class="nb"&gt;local&lt;/span&gt;:8 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ssh-public-keys&lt;/span&gt; .ssh/id_rsa.pub &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--unprivileged&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true

&lt;/span&gt;pct start 1000

ssh root@10.1.1.2

apt update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt &lt;span class="nb"&gt;install &lt;/span&gt;curl &lt;span class="nt"&gt;-y&lt;/span&gt;
curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://get.docker.com | &lt;span class="nb"&gt;sudo &lt;/span&gt;bash


curl &lt;span class="nt"&gt;-L&lt;/span&gt;  https://github.com/siderolabs/talos/releases/download/v1.9.5/initramfs-amd64.xz &lt;span class="nt"&gt;-o&lt;/span&gt; initramfs-amd64.xz
curl &lt;span class="nt"&gt;-L&lt;/span&gt;  https://github.com/siderolabs/talos/releases/download/v1.9.5/vmlinuz-amd64 &lt;span class="nt"&gt;-o&lt;/span&gt; vmlinuz-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure the DHCP range, gateway, and matchbox endpoint IP address in the &lt;code&gt;matchbox/docker-compose.yaml&lt;/code&gt; file.&lt;/p&gt;
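&lt;p&gt;As a reference point, the dnsmasq container ends up running with flags along these lines. This is a sketch with assumed values for the 10.1.1.0/24 LAN above, not the exact command from the repo:&lt;/p&gt;

```shell
# Sketch: dnsmasq serving DHCP and TFTP for PXE-booting the Talos workers.
# All addresses and paths here are assumptions -- adjust to your LAN and
# matchbox setup before use.
dnsmasq -d -q \
  --dhcp-range=10.1.1.100,10.1.1.200 \
  --dhcp-option=option:router,10.1.1.1 \
  --enable-tftp \
  --tftp-root=/var/lib/tftpboot \
  --dhcp-boot=ipxe.efi
```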

&lt;p&gt;Start DHCP/TFTP server via docker compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;span class="nt"&gt;---&lt;/span&gt;

docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED       STATUS       PORTS     NAMES
9f04e46194bf   quay.io/poseidon/dnsmasq:v0.5.0-32-g4327d60-amd64   &lt;span class="s2"&gt;"/usr/sbin/dnsmasq -…"&lt;/span&gt;   2 hours ago   Up 2 hours             dnsmasq
5db45718aa0a   root-matchbox                                       &lt;span class="s2"&gt;"/matchbox -address=…"&lt;/span&gt;   2 hours ago   Up 2 hours             matchbox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connecting on-premise workers
&lt;/h2&gt;

&lt;p&gt;The Talos machine configuration for workers is generated by Terraform; look for &lt;code&gt;worker.yaml&lt;/code&gt;. We are going to apply it to each worker in Proxmox.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create VMs&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;id &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1001..1003&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;qm create &lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; vm&lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="nt"&gt;--memory&lt;/span&gt; 12088 &lt;span class="nt"&gt;--cores&lt;/span&gt; 3 &lt;span class="nt"&gt;--net0&lt;/span&gt; virtio,bridge&lt;span class="o"&gt;=&lt;/span&gt;vmbr0 &lt;span class="nt"&gt;--ostype&lt;/span&gt; l26 &lt;span class="nt"&gt;--scsihw&lt;/span&gt; virtio-scsi-pci &lt;span class="nt"&gt;--sata0&lt;/span&gt; lvm1:32 &lt;span class="nt"&gt;--cpu&lt;/span&gt; host &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; qm start &lt;span class="nv"&gt;$id&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Scan for Talos API open ports and extract IP addresses&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;WORKER_IPS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;nmap &lt;span class="nt"&gt;-Pn&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 50000 10.1.1.0/24 &lt;span class="nt"&gt;-vv&lt;/span&gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s1"&gt;'Discovered'&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $6}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
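&lt;p&gt;The pipeline above keys off nmap's &lt;code&gt;Discovered open port&lt;/code&gt; lines and takes the sixth whitespace-separated field, which is the target IP. On sample output (illustrative, not from a real scan) it behaves like this:&lt;/p&gt;

```shell
#!/bin/sh
# Mirror of the extraction step above: pull the host IP (field 6) out of
# nmap's "Discovered open port" lines. The sample lines are illustrative.
extract_talos_ips() {
  grep 'Discovered' | awk '{print $6}'
}

# Prints the two IPs, one per line.
printf '%s\n' \
  'Discovered open port 50000/tcp on 10.1.1.24' \
  'Discovered open port 50000/tcp on 10.1.1.10' |
  extract_talos_ips
```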



&lt;p&gt;&lt;strong&gt;Apply configuration to each discovered IP&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKER_IPS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; WORKER_IP&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;talosctl apply-config &lt;span class="nt"&gt;--insecure&lt;/span&gt; &lt;span class="nt"&gt;--nodes&lt;/span&gt; &lt;span class="nv"&gt;$WORKER_IP&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; worker.yaml
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Let’s take a look at the result
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get node &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP        OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
aws-controlplane-1   Ready    control-plane   2m25s   v1.32.3   192.168.1.135    18.184.164.166     Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
aws-controlplane-2   Ready    control-plane   2m36s   v1.32.3   192.168.2.122    3.122.238.249      Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
aws-controlplane-3   Ready    control-plane   2m24s   v1.32.3   192.168.0.157    3.77.57.73         Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
talos-9mf-ujc        Ready    &amp;lt;none&amp;gt;          29s     v1.32.3   10.1.1.24        &amp;lt;none&amp;gt;             Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
talos-lv8-bc7        Ready    &amp;lt;none&amp;gt;          29s     v1.32.3   10.1.1.10        &amp;lt;none&amp;gt;             Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
talos-ohr-1c3        Ready    &amp;lt;none&amp;gt;          29s     v1.32.3   10.1.1.23        &amp;lt;none&amp;gt;             Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3

talosctl get kubespanpeerstatuses &lt;span class="nt"&gt;-n&lt;/span&gt; 192.168.1.135,192.168.2.122,192.168.0.157,10.1.1.24,10.1.1.19,10.1.1.12
NODE            NAMESPACE   TYPE                 ID                                             VERSION   LABEL                ENDPOINT                STATE   RX         TX
192.168.1.135   kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;Hh2ldeBX7kuct6Bynehjdgo6xkcOlZ4yUWWQVBqHLWQ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-fgw-qpv        proxmo-publicip:53639   up      84312      69860
192.168.1.135   kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;If8ZqCQ1jp0mFV8igLGpfXpYycQSVBTSlT88YUr7eEM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-4re-kxo        proxmo-publicip:53923   up      82160      99412
192.168.1.135   kubespan    KubeSpanPeerStatus   iKXIODHF3Tx2b4JsA433j8ey9+CWTKrx5To+4lsofTg&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-7ye-8j0        proxmo-publicip:51820   up      43608      58280
192.168.1.135   kubespan    KubeSpanPeerStatus   oxU0e9yTFvN+lCGcIO4s13erkWjtIKrVzb8dX+GYLxE&lt;span class="o"&gt;=&lt;/span&gt;   26        aws-controlplane-3   3.77.57.73:51820        up      7484600    16097980
192.168.1.135   kubespan    KubeSpanPeerStatus   vAHCI1pwTbaP/LHwO0MnCGELXsEstQahS0o9WkdVK0g&lt;span class="o"&gt;=&lt;/span&gt;   26        aws-controlplane-2   3.122.238.249:51820     up      6742536    15472144
192.168.2.122   kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;Hh2ldeBX7kuct6Bynehjdgo6xkcOlZ4yUWWQVBqHLWQ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-fgw-qpv        proxmo-publicip:53639   up      178960     354776
192.168.2.122   kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;If8ZqCQ1jp0mFV8igLGpfXpYycQSVBTSlT88YUr7eEM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-4re-kxo        proxmo-publicip:53923   up      232064     897928
192.168.2.122   kubespan    KubeSpanPeerStatus   iKXIODHF3Tx2b4JsA433j8ey9+CWTKrx5To+4lsofTg&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-7ye-8j0        proxmo-publicip:51820   up      92856      254520
192.168.2.122   kubespan    KubeSpanPeerStatus   oxU0e9yTFvN+lCGcIO4s13erkWjtIKrVzb8dX+GYLxE&lt;span class="o"&gt;=&lt;/span&gt;   23        aws-controlplane-3   3.77.57.73:51820        up      1557896    1954328
192.168.2.122   kubespan    KubeSpanPeerStatus   uzi7NCL64o+ILeyqa6/Pq0UWdcVyfjWulZQB+a2Av30&lt;span class="o"&gt;=&lt;/span&gt;   27        aws-controlplane-1   18.184.164.166:51820    up      15506236   6773440
192.168.0.157   kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;Hh2ldeBX7kuct6Bynehjdgo6xkcOlZ4yUWWQVBqHLWQ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-fgw-qpv        proxmo-publicip:53639   up      261464     916524
192.168.0.157   kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;If8ZqCQ1jp0mFV8igLGpfXpYycQSVBTSlT88YUr7eEM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-4re-kxo        proxmo-publicip:53923   up      141868     277180
192.168.0.157   kubespan    KubeSpanPeerStatus   iKXIODHF3Tx2b4JsA433j8ey9+CWTKrx5To+4lsofTg&lt;span class="o"&gt;=&lt;/span&gt;   11        talos-7ye-8j0        proxmo-publicip:51820   up      172996     830504
192.168.0.157   kubespan    KubeSpanPeerStatus   uzi7NCL64o+ILeyqa6/Pq0UWdcVyfjWulZQB+a2Av30&lt;span class="o"&gt;=&lt;/span&gt;   25        aws-controlplane-1   18.184.164.166:51820    up      16126096   7507456
192.168.0.157   kubespan    KubeSpanPeerStatus   vAHCI1pwTbaP/LHwO0MnCGELXsEstQahS0o9WkdVK0g&lt;span class="o"&gt;=&lt;/span&gt;   24        aws-controlplane-2   3.122.238.249:51820     up      1954180    1557896
10.1.1.24       kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;Hh2ldeBX7kuct6Bynehjdgo6xkcOlZ4yUWWQVBqHLWQ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   10        talos-fgw-qpv        10.1.1.12:51820         up      12204      11960
10.1.1.24       kubespan    KubeSpanPeerStatus   iKXIODHF3Tx2b4JsA433j8ey9+CWTKrx5To+4lsofTg&lt;span class="o"&gt;=&lt;/span&gt;   10        talos-7ye-8j0        10.1.1.19:51820         up      16316      15624
10.1.1.24       kubespan    KubeSpanPeerStatus   oxU0e9yTFvN+lCGcIO4s13erkWjtIKrVzb8dX+GYLxE&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-3   3.77.57.73:51820        up      278988     143592
10.1.1.24       kubespan    KubeSpanPeerStatus   uzi7NCL64o+ILeyqa6/Pq0UWdcVyfjWulZQB+a2Av30&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-1   18.184.164.166:51820    up      100932     84220
10.1.1.24       kubespan    KubeSpanPeerStatus   vAHCI1pwTbaP/LHwO0MnCGELXsEstQahS0o9WkdVK0g&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-2   3.122.238.249:51820     up      897880     231472
10.1.1.19       kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;Hh2ldeBX7kuct6Bynehjdgo6xkcOlZ4yUWWQVBqHLWQ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   10        talos-fgw-qpv        10.1.1.12:51820         up      12692      12732
10.1.1.19       kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;If8ZqCQ1jp0mFV8igLGpfXpYycQSVBTSlT88YUr7eEM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   10        talos-4re-kxo        10.1.1.24:51820         up      15476      16316
10.1.1.19       kubespan    KubeSpanPeerStatus   oxU0e9yTFvN+lCGcIO4s13erkWjtIKrVzb8dX+GYLxE&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-3   3.77.57.73:51820        up      832688     176132
10.1.1.19       kubespan    KubeSpanPeerStatus   uzi7NCL64o+ILeyqa6/Pq0UWdcVyfjWulZQB+a2Av30&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-1   18.184.164.166:51820    up      59680      45224
10.1.1.19       kubespan    KubeSpanPeerStatus   vAHCI1pwTbaP/LHwO0MnCGELXsEstQahS0o9WkdVK0g&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-2   3.122.238.249:51820     up      255472     94296
10.1.1.12       kubespan    KubeSpanPeerStatus   &lt;span class="nv"&gt;If8ZqCQ1jp0mFV8igLGpfXpYycQSVBTSlT88YUr7eEM&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;   10        talos-4re-kxo        10.1.1.24:51820         up      11812      12204
10.1.1.12       kubespan    KubeSpanPeerStatus   iKXIODHF3Tx2b4JsA433j8ey9+CWTKrx5To+4lsofTg&lt;span class="o"&gt;=&lt;/span&gt;   10        talos-7ye-8j0        10.1.1.19:51820         up      12732      12840
10.1.1.12       kubespan    KubeSpanPeerStatus   oxU0e9yTFvN+lCGcIO4s13erkWjtIKrVzb8dX+GYLxE&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-3   3.77.57.73:51820        up      920660     265640
10.1.1.12       kubespan    KubeSpanPeerStatus   uzi7NCL64o+ILeyqa6/Pq0UWdcVyfjWulZQB+a2Av30&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-1   18.184.164.166:51820    up      67776      82092
10.1.1.12       kubespan    KubeSpanPeerStatus   vAHCI1pwTbaP/LHwO0MnCGELXsEstQahS0o9WkdVK0g&lt;span class="o"&gt;=&lt;/span&gt;   10        aws-controlplane-2   3.122.238.249:51820     up      355552     180240
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Let’s check if the Kubernetes networking is functioning correctly:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create ns network-test
kubectl label ns network-test pod-security.kubernetes.io/enforce&lt;span class="o"&gt;=&lt;/span&gt;privileged
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/cilium/cilium/refs/heads/main/examples/kubernetes/connectivity-check/connectivity-check.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; network-test

kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; network-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-54dcdd77c-6wjnw                                   1/1     Running   0          62s
echo-b-549fdb8f8c-j4sw7                                  1/1     Running   0          61s
echo-b-host-7cfdb688b7-ppz9f                             1/1     Running   0          61s
host-to-b-multi-node-clusterip-c54bf67bf-hhm5h           1/1     Running   0          60s
host-to-b-multi-node-headless-55f66fc4c7-f8fc4           1/1     Running   0          60s
pod-to-a-5f56dc8c9b-kk6c2                                1/1     Running   0          61s
pod-to-a-allowed-cnp-5dc859fd98-pvxzj                    1/1     Running   0          61s
pod-to-a-denied-cnp-68976d7584-wm52m                     1/1     Running   0          61s
pod-to-b-intra-node-nodeport-5884978697-c5rs2            1/1     Running   0          60s
pod-to-b-multi-node-clusterip-7d65578cf5-2jh97           1/1     Running   0          61s
pod-to-b-multi-node-headless-8557d86d6f-shvzx            1/1     Running   0          61s
pod-to-b-multi-node-nodeport-7847b5df8f-9kg89            1/1     Running   0          60s
pod-to-external-1111-797c647566-666l4                    1/1     Running   0          61s
pod-to-external-fqdn-allow-google-cnp-5688c867dd-dkvpk   0/1     Running   0          61s &lt;span class="c"&gt;# can be ignored&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Kubernetes Network Benchmark
&lt;/h2&gt;

&lt;p&gt;We will use &lt;a href="https://github.com/InfraBuilder/k8s-bench-suite" rel="noopener noreferrer"&gt;k8s-bench-suite&lt;/a&gt; to benchmark networking performance between the nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From the control plane to worker:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl taint nodes aws-controlplane-1 node-role.kubernetes.io/control-plane:NoSchedule-

knb &lt;span class="nt"&gt;--verbose&lt;/span&gt; &lt;span class="nt"&gt;--client-node&lt;/span&gt; aws-controlplane-1 &lt;span class="nt"&gt;--server-node&lt;/span&gt; talos-4re-kxo

&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Benchmark Results
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Name            : knb-3420914
 Date            : 2025-03-18 04:14:12 UTC
 Generator       : knb
 Version         : 1.5.0
 Server          : talos-4re-kxo
 Client          : aws-controlplane-1
 UDP Socket size : auto
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
  Discovered CPU         : Intel&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Xeon&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Gold 6240 CPU @ 2.60GHz
  Discovered Kernel      : 6.12.18-talos
  Discovered k8s version :
  Discovered MTU         : 1420
  Idle :
    bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 0 Mbit/s
    client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 13.45% &lt;span class="o"&gt;(&lt;/span&gt;user 3.81%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 2.92%, iowait 0.53%, steal 6.19%&lt;span class="o"&gt;)&lt;/span&gt;
    server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 3.28% &lt;span class="o"&gt;(&lt;/span&gt;user 1.75%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 1.53%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    client ram &lt;span class="o"&gt;=&lt;/span&gt; 985 MB
    server ram &lt;span class="o"&gt;=&lt;/span&gt; 673 MB
  Pod to pod :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 887 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 75.29% &lt;span class="o"&gt;(&lt;/span&gt;user 4.07%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 57.20%, iowait 0.29%, steal 13.73%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 36.16% &lt;span class="o"&gt;(&lt;/span&gt;user 2.06%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 34.10%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 995 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 697 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 350 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 79.95% &lt;span class="o"&gt;(&lt;/span&gt;user 6.68%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 54.20%, iowait 0.22%, steal 18.85%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 20.95% &lt;span class="o"&gt;(&lt;/span&gt;user 2.61%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 18.34%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 1000 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 653 MB
  Pod to Service :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 1063 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 80.58% &lt;span class="o"&gt;(&lt;/span&gt;user 3.89%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 68.36%, iowait 0.05%, steal 8.28%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 42.44% &lt;span class="o"&gt;(&lt;/span&gt;user 2.26%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 40.18%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 1008 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 696 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 322 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 78.57% &lt;span class="o"&gt;(&lt;/span&gt;user 6.24%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 57.02%, iowait 0.18%, steal 15.13%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 21.02% &lt;span class="o"&gt;(&lt;/span&gt;user 2.54%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 18.48%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 995 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 668 MB
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
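&lt;p&gt;A side note on the discovered MTU of 1420: that is the classic WireGuard value. KubeSpan's tunnel reserves 80 bytes of encapsulation overhead (outer IP header, UDP header, and the WireGuard message header plus auth tag) out of the standard 1500-byte Ethernet MTU. A quick sanity check:&lt;/p&gt;

```shell
# WireGuard's default MTU: 1500-byte Ethernet MTU minus 80 bytes of
# encapsulation overhead (outer IP + UDP + WireGuard data-message headers).
ETH_MTU=1500
WG_OVERHEAD=80
WG_MTU=$((ETH_MTU - WG_OVERHEAD))
echo "$WG_MTU"   # 1420, matching the benchmark's discovered MTU
```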



&lt;p&gt;&lt;strong&gt;From worker to worker locally:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;knb &lt;span class="nt"&gt;--verbose&lt;/span&gt; &lt;span class="nt"&gt;--client-node&lt;/span&gt; talos-7ye-8j0 &lt;span class="nt"&gt;--server-node&lt;/span&gt; talos-4re-kxo

&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Benchmark Results
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Name            : knb-3423407
 Date            : 2025-03-18 04:16:29 UTC
 Generator       : knb
 Version         : 1.5.0
 Server          : talos-4re-kxo
 Client          : talos-7ye-8j0
 UDP Socket size : auto
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
  Discovered CPU         : Intel&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Xeon&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Gold 6240 CPU @ 2.60GHz
  Discovered Kernel      : 6.12.18-talos
  Discovered k8s version :
  Discovered MTU         : 1420
  Idle :
    bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 0 Mbit/s
    client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 2.97% &lt;span class="o"&gt;(&lt;/span&gt;user 1.50%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 1.47%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 3.34% &lt;span class="o"&gt;(&lt;/span&gt;user 1.72%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 1.62%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    client ram &lt;span class="o"&gt;=&lt;/span&gt; 552 MB
    server ram &lt;span class="o"&gt;=&lt;/span&gt; 688 MB
  Pod to pod :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 1868 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 61.44% &lt;span class="o"&gt;(&lt;/span&gt;user 2.59%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 58.82%, iowait 0.03%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 70.40% &lt;span class="o"&gt;(&lt;/span&gt;user 2.82%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 67.58%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 546 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 801 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 912 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 64.14% &lt;span class="o"&gt;(&lt;/span&gt;user 3.75%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.06%, system 60.30%, iowait 0.03%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 52.89% &lt;span class="o"&gt;(&lt;/span&gt;user 4.29%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 48.60%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 556 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 679 MB
  Pod to Service :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 1907 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 58.43% &lt;span class="o"&gt;(&lt;/span&gt;user 2.15%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 56.25%, iowait 0.00%, steal 0.03%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 61.83% &lt;span class="o"&gt;(&lt;/span&gt;user 2.35%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.03%, system 59.45%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 547 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 813 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 887 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 57.50% &lt;span class="o"&gt;(&lt;/span&gt;user 3.88%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 53.62%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 52.33% &lt;span class="o"&gt;(&lt;/span&gt;user 4.18%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.03%, system 48.12%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 556 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 685 MB
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Interestingly, all of our traffic is routed through the KubeSpan/WireGuard tunnel, even between workers on the same local network. For comparison, I created a new local cluster without KubeSpan, and the results were quite different:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k get node &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME            STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
talos-pft-tax   Ready    control-plane   52s   v1.32.0   10.1.1.17     &amp;lt;none&amp;gt;        Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.1&lt;span class="o"&gt;)&lt;/span&gt;   6.12.6-talos     containerd://2.0.1
talos-tfy-nig   Ready    &amp;lt;none&amp;gt;          49s   v1.32.0   10.1.1.16     &amp;lt;none&amp;gt;        Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.1&lt;span class="o"&gt;)&lt;/span&gt;   6.12.6-talos     containerd://2.0.1

knb &lt;span class="nt"&gt;--verbose&lt;/span&gt; &lt;span class="nt"&gt;--client-node&lt;/span&gt; talos-pft-tax &lt;span class="nt"&gt;--server-node&lt;/span&gt; talos-tfy-nig

&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Benchmark Results
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Name            : knb-3432162
 Date            : 2025-03-18 04:32:21 UTC
 Generator       : knb
 Version         : 1.5.0
 Server          : talos-tfy-nig
 Client          : talos-pft-tax
 UDP Socket size : auto
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
  Discovered CPU         : QEMU Virtual CPU version 2.5+
  Discovered Kernel      : 6.12.6-talos
  Discovered k8s version :
  Discovered MTU         : 1450
  Idle :
    bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 0 Mbit/s
    client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 3.56% &lt;span class="o"&gt;(&lt;/span&gt;user 1.85%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 1.62%, iowait 0.09%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 1.41% &lt;span class="o"&gt;(&lt;/span&gt;user 0.59%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 0.82%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    client ram &lt;span class="o"&gt;=&lt;/span&gt; 775 MB
    server ram &lt;span class="o"&gt;=&lt;/span&gt; 401 MB
  Pod to pod :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 6276 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 23.32% &lt;span class="o"&gt;(&lt;/span&gt;user 2.97%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 20.24%, iowait 0.11%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 25.49% &lt;span class="o"&gt;(&lt;/span&gt;user 2.13%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 23.36%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 709 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 391 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 861 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 55.67% &lt;span class="o"&gt;(&lt;/span&gt;user 6.55%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 49.02%, iowait 0.05%, steal 0.05%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 46.81% &lt;span class="o"&gt;(&lt;/span&gt;user 7.85%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 38.96%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 711 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 382 MB
  Pod to Service :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 6326 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 24.26% &lt;span class="o"&gt;(&lt;/span&gt;user 4.05%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 20.11%, iowait 0.10%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 25.99% &lt;span class="o"&gt;(&lt;/span&gt;user 2.22%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 23.77%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 693 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 333 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 877 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 52.58% &lt;span class="o"&gt;(&lt;/span&gt;user 7.26%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 45.27%, iowait 0.05%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 46.81% &lt;span class="o"&gt;(&lt;/span&gt;user 7.76%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 39.05%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 705 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 334 MB
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
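&lt;p&gt;Putting the two worker-to-worker runs side by side, pod-to-pod TCP bandwidth dropped from 6276 Mbit/s without the tunnel to 1868 Mbit/s through it. The clusters are not identical (the benchmarks report different CPU models), so treat this as a ballpark figure rather than a controlled comparison, but the arithmetic on the numbers above is simple:&lt;/p&gt;

```shell
# Rough cost of the WireGuard tunnel, using the pod-to-pod TCP numbers
# from the two worker-to-worker benchmark runs above.
WITH_KUBESPAN=1868     # Mbit/s, through the KubeSpan tunnel
WITHOUT_KUBESPAN=6276  # Mbit/s, plain local networking
DROP_PCT=$(( (WITHOUT_KUBESPAN - WITH_KUBESPAN) * 100 / WITHOUT_KUBESPAN ))
echo "${DROP_PCT}% lower TCP bandwidth through the tunnel"   # prints 70%
```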



&lt;p&gt;Although KubeSpan works well out of the box, it does not yet support meshed topologies, in which you control how traffic is routed for specific node pools.&lt;/p&gt;

&lt;p&gt;There is already an open issue for this, &lt;a href="https://github.com/siderolabs/talos/issues/8364" rel="noopener noreferrer"&gt;which you can find here.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Kilo
&lt;/h2&gt;

&lt;p&gt;One solution that does support meshed logical topologies is &lt;a href="https://kilo.squat.ai/" rel="noopener noreferrer"&gt;Kilo&lt;/a&gt;. It lets you manage traffic between nodes across multiple datacenters while keeping native networking intact within each datacenter for &lt;a href="https://kilo.squat.ai/docs/topology#logical-groups" rel="noopener noreferrer"&gt;intra-datacenter communication&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The downside is that Kilo requires some customization, meaning additional logic and automation need to be applied.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apply Terraform using the Talos machine configuration for Kilo
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;kilo-controlplane.tpl&lt;/code&gt; file has Kubespan disabled and kube-proxy enabled. For the CNI, we deploy Kilo, and we also add a CustomResourceDefinition for &lt;code&gt;peers.kilo.squat.ai&lt;/code&gt; in the &lt;code&gt;inlineManifests&lt;/code&gt;.&lt;/p&gt;
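&lt;p&gt;For reference, a peer registered through that CRD looks roughly like the following. This is a sketch based on the Kilo docs; the name, key, and IP are placeholders:&lt;/p&gt;

```yaml
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
  name: example-peer            # placeholder name
spec:
  allowedIPs:
    - 10.5.0.1/32               # placeholder WireGuard IP for the peer
  publicKey: EXAMPLE_PUBLIC_KEY_BASE64=   # the peer's WireGuard public key
  persistentKeepalive: 10
```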

&lt;p&gt;Update the paths in the &lt;code&gt;talos.tf&lt;/code&gt; file by changing &lt;code&gt;controlplane.tpl&lt;/code&gt; and &lt;code&gt;worker.tpl&lt;/code&gt; to &lt;code&gt;kilo-controlplane.tpl&lt;/code&gt; and &lt;code&gt;kilo-worker.tpl&lt;/code&gt;, respectively.&lt;/p&gt;

&lt;p&gt;We follow the same process for spinning up the nodes in both AWS and Proxmox.&lt;/p&gt;

&lt;p&gt;Once the control planes are ready, the Kilo pods will fail because they can't find a kubeconfig. This happens because we are using Talos rather than plain kubeadm, which would normally provide the &lt;code&gt;kube-proxy&lt;/code&gt; ConfigMap that Kilo reads its kubeconfig from. To fix this, create the ConfigMap manually:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create configmap kube-proxy &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;kubeconfig.conf&lt;span class="o"&gt;=&lt;/span&gt;kubeconfig &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system

kubectl get node &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME                 STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP      OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
aws-controlplane-1   Ready    control-plane   15m     v1.32.3   192.168.1.55   3.73.119.119     Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
aws-controlplane-2   Ready    control-plane   16m     v1.32.3   192.168.0.75   18.195.244.87    Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
aws-controlplane-3   Ready    control-plane   16m     v1.32.3   192.168.2.68   18.185.241.183   Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
talos-8vs-lte        Ready    &amp;lt;none&amp;gt;          2m20s   v1.32.3   10.1.1.22      &amp;lt;none&amp;gt;           Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
talos-8wf-r6g        Ready    &amp;lt;none&amp;gt;          2m19s   v1.32.3   10.1.1.21      &amp;lt;none&amp;gt;           Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3
talos-ub5-bc2        Ready    &amp;lt;none&amp;gt;          2m14s   v1.32.3   10.1.1.23      &amp;lt;none&amp;gt;           Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.9.5&lt;span class="o"&gt;)&lt;/span&gt;   6.12.18-talos    containerd://2.0.3

...
kube-system   kilo-286hz  1/1     Running            0             6m46s   10.1.1.22      talos-8vs-lte              &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kilo-4h8kn  1/1     Running            0             12m     10.1.1.21      talos-8wf-r6g              &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kilo-gcclp  1/1     Running            0             13m     192.168.2.68   aws-controlplane-3         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
kube-system   kilo-rq2sl  1/1     Running            0             4m17s   192.168.1.55   aws-controlplane-1         &amp;lt;none&amp;gt;           &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to specify the topology, set logical locations, and ensure that at least one node in each location has an IP address that is routable from the other locations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For the AWS control planes (location):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;node &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get nodes | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; aws | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $1}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;kubectl annotate node &lt;span class="nv"&gt;$node&lt;/span&gt; kilo.squat.ai/location&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"aws"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For workers (location):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl annotate node talos-8vs-lte talos-8wf-r6g talos-ub5-bc2 kilo.squat.ai/location&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"on-prem"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Endpoint for each location:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl annotate node talos-8vs-lte kilo.squat.ai/force-endpoint&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"proxmox-public-ip:51820"&lt;/span&gt;

kubectl annotate node aws-controlplane-1 kilo.squat.ai/force-endpoint&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"3.73.119.119:51820"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Rolling out and checking the network again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout restart ds/kilo &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system

kubectl get po &lt;span class="nt"&gt;-n&lt;/span&gt; network-test
NAME                                                     READY   STATUS    RESTARTS   AGE
echo-a-54dcdd77c-psgqb                                   1/1     Running   0          34s
echo-b-549fdb8f8c-5pjbk                                  1/1     Running   0          34s
echo-b-host-7cfdb688b7-zff5b                             1/1     Running   0          34s
host-to-b-multi-node-clusterip-c54bf67bf-f7d6c           1/1     Running   0          33s
host-to-b-multi-node-headless-55f66fc4c7-kd2wv           1/1     Running   0          33s
pod-to-a-5f56dc8c9b-64k7v                                1/1     Running   0          34s
pod-to-a-allowed-cnp-5dc859fd98-684lh                    1/1     Running   0          34s
pod-to-a-denied-cnp-68976d7584-6w6fn                     1/1     Running   0          34s
pod-to-b-intra-node-nodeport-5884978697-ddtz4            1/1     Running   0          32s
pod-to-b-multi-node-clusterip-7d65578cf5-rx9ws           1/1     Running   0          33s
pod-to-b-multi-node-headless-8557d86d6f-8dqr8            1/1     Running   0          33s
pod-to-b-multi-node-nodeport-7847b5df8f-zjlzt            1/1     Running   0          33s
pod-to-external-1111-797c647566-qnc22                    1/1     Running   0          34s
pod-to-external-fqdn-allow-google-cnp-5688c867dd-9btvt   1/1     Running   0          33s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Kilo knb benchmark
&lt;/h2&gt;

&lt;p&gt;Let’s test network performance by running the same benchmark between the control-plane and worker nodes, as well as between workers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control plane to worker:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl taint nodes aws-controlplane-1 node-role.kubernetes.io/control-plane:NoSchedule-
knb &lt;span class="nt"&gt;--verbose&lt;/span&gt; &lt;span class="nt"&gt;--client-node&lt;/span&gt; aws-controlplane-1 &lt;span class="nt"&gt;--server-node&lt;/span&gt; talos-8vs-lte

&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Benchmark Results
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Name            : knb-3461466
 Date            : 2025-03-18 05:25:08 UTC
 Generator       : knb
 Version         : 1.5.0
 Server          : talos-8vs-lte
 Client          : aws-controlplane-1
 UDP Socket size : auto
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
  Discovered CPU         : Intel&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Xeon&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Gold 6240 CPU @ 2.60GHz
  Discovered Kernel      : 6.12.18-talos
  Discovered k8s version :
  Discovered MTU         : 1420
  Idle :
    bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 0 Mbit/s
    client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 15.67% &lt;span class="o"&gt;(&lt;/span&gt;user 4.70%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 2.74%, iowait 0.44%, steal 7.79%&lt;span class="o"&gt;)&lt;/span&gt;
    server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 2.82% &lt;span class="o"&gt;(&lt;/span&gt;user 1.41%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 1.41%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    client ram &lt;span class="o"&gt;=&lt;/span&gt; 799 MB
    server ram &lt;span class="o"&gt;=&lt;/span&gt; 417 MB
  Pod to pod :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 942 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 61.10% &lt;span class="o"&gt;(&lt;/span&gt;user 5.12%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 36.35%, iowait 0.39%, steal 19.24%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 22.78% &lt;span class="o"&gt;(&lt;/span&gt;user 1.35%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 21.43%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 787 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 479 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 448 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 75.97% &lt;span class="o"&gt;(&lt;/span&gt;user 6.76%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 49.81%, iowait 0.28%, steal 19.12%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 18.07% &lt;span class="o"&gt;(&lt;/span&gt;user 2.96%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 15.11%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 798 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 391 MB
  Pod to Service :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 1253 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 69.80% &lt;span class="o"&gt;(&lt;/span&gt;user 3.58%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 45.60%, iowait 0.36%, steal 20.26%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 30.59% &lt;span class="o"&gt;(&lt;/span&gt;user 1.88%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 28.71%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 787 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 592 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 478 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 80.83% &lt;span class="o"&gt;(&lt;/span&gt;user 6.97%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 53.98%, iowait 0.25%, steal 19.63%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 18.77% &lt;span class="o"&gt;(&lt;/span&gt;user 2.70%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.03%, system 16.04%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 795 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 391 MB
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Worker to worker:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;knb &lt;span class="nt"&gt;--verbose&lt;/span&gt; &lt;span class="nt"&gt;--client-node&lt;/span&gt; talos-8vs-lte &lt;span class="nt"&gt;--server-node&lt;/span&gt; talos-8wf-r6g

&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Benchmark Results
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
 Name            : knb-3467118
 Date            : 2025-03-18 05:27:23 UTC
 Generator       : knb
 Version         : 1.5.0
 Server          : talos-8wf-r6g
 Client          : talos-8vs-lte
 UDP Socket size : auto
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
  Discovered CPU         : Intel&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Xeon&lt;span class="o"&gt;(&lt;/span&gt;R&lt;span class="o"&gt;)&lt;/span&gt; Gold 6240 CPU @ 2.60GHz
  Discovered Kernel      : 6.12.18-talos
  Discovered k8s version :
  Discovered MTU         : 1420
  Idle :
    bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 0 Mbit/s
    client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 2.57% &lt;span class="o"&gt;(&lt;/span&gt;user 1.29%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 1.28%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 2.05% &lt;span class="o"&gt;(&lt;/span&gt;user 1.10%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 0.95%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
    client ram &lt;span class="o"&gt;=&lt;/span&gt; 416 MB
    server ram &lt;span class="o"&gt;=&lt;/span&gt; 521 MB
  Pod to pod :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 8409 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 17.34% &lt;span class="o"&gt;(&lt;/span&gt;user 1.85%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 15.49%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 22.70% &lt;span class="o"&gt;(&lt;/span&gt;user 1.96%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 20.74%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 398 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 508 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 1403 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 36.43% &lt;span class="o"&gt;(&lt;/span&gt;user 3.27%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 33.16%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 35.20% &lt;span class="o"&gt;(&lt;/span&gt;user 5.36%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 29.84%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 405 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 542 MB
  Pod to Service :
    TCP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 8366 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 21.10% &lt;span class="o"&gt;(&lt;/span&gt;user 1.64%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.04%, system 19.42%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 22.15% &lt;span class="o"&gt;(&lt;/span&gt;user 1.85%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 20.30%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 398 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 515 MB
    UDP :
      bandwidth &lt;span class="o"&gt;=&lt;/span&gt; 1349 Mbit/s
      client cpu &lt;span class="o"&gt;=&lt;/span&gt; total 36.52% &lt;span class="o"&gt;(&lt;/span&gt;user 3.21%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 33.31%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      server cpu &lt;span class="o"&gt;=&lt;/span&gt; total 34.03% &lt;span class="o"&gt;(&lt;/span&gt;user 5.38%, &lt;span class="nb"&gt;nice &lt;/span&gt;0.00%, system 28.65%, iowait 0.00%, steal 0.00%&lt;span class="o"&gt;)&lt;/span&gt;
      client ram &lt;span class="o"&gt;=&lt;/span&gt; 414 MB
      server ram &lt;span class="o"&gt;=&lt;/span&gt; 535 MB
&lt;span class="o"&gt;=========================================================&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;We can see that the nodes are aware of each other and use the internal connection based on their logical location, as demonstrated in the diagram below (&lt;em&gt;&lt;a href="https://kilo.squat.ai/docs/kgctl#graph" rel="noopener noreferrer"&gt;generated using kgctl graph&lt;/a&gt;&lt;/em&gt;):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd20tttwr86bfjtquc55b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd20tttwr86bfjtquc55b.png" alt=" " width="779" height="725"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude: since it's not possible to set the node annotations in advance, fully automating the process when Talos/kubeadm is bootstrapped could be problematic, and an additional layer is still required to set up the mesh. From this perspective, KubeSpan is much easier to implement, but it currently doesn't support logical separation.&lt;/p&gt;
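&lt;p&gt;For illustration, this is the kind of per-node metadata that extra layer would have to apply after bootstrap; the &lt;code&gt;kilo.squat.ai/location&lt;/code&gt; annotation is Kilo's documented grouping mechanism, and the location value here is purely an example:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: Node
metadata:
  name: talos-8vs-lte
  annotations:
    # Nodes sharing a location communicate directly; traffic between
    # locations goes through each location's Kilo leader.
    kilo.squat.ai/location: "proxmox-dc"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;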

&lt;h2&gt;
  
  
  Delete cluster
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Delete VMs&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;id &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1001..1003&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;qm stop &lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; qm destroy &lt;span class="nv"&gt;$id&lt;/span&gt;
&lt;span class="k"&gt;done

&lt;/span&gt;terraform destroy &lt;span class="nt"&gt;-auto-approve&lt;/span&gt; &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;vars/dev.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/isovalent/terraform-aws-talos" rel="noopener noreferrer"&gt;https://github.com/isovalent/terraform-aws-talos&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/pfenerty/talos-aws-terraform" rel="noopener noreferrer"&gt;https://github.com/pfenerty/talos-aws-terraform&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/sergelogvinov/kubernetes-on-hybrid-cloud-talos-network-51lo"&gt;https://dev.to/sergelogvinov/kubernetes-on-hybrid-cloud-talos-network-51lo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/sergelogvinov/kubernetes-on-hybrid-cloud-network-design-3m9f"&gt;https://dev.to/sergelogvinov/kubernetes-on-hybrid-cloud-network-design-3m9f&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.talos.dev/v1.9/talos-guides/network/kubespan/" rel="noopener noreferrer"&gt;https://www.talos.dev/v1.9/talos-guides/network/kubespan/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kilo.squat.ai/" rel="noopener noreferrer"&gt;https://kilo.squat.ai/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>networking</category>
      <category>cloud</category>
      <category>proxmox</category>
    </item>
    <item>
      <title>Kubernetes as Bare Metal Service utilizing Sidero Metal &amp; Talos</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Fri, 14 Mar 2025 09:23:15 +0000</pubDate>
      <link>https://dev.to/bnovickovs/kubernetes-as-bare-metal-service-utilizing-sidero-metal-2a1n</link>
      <guid>https://dev.to/bnovickovs/kubernetes-as-bare-metal-service-utilizing-sidero-metal-2a1n</guid>
      <description>&lt;p&gt;The following setup includes the process of creating 3 control planes (master) nodes and 4 worker machines created dynamically, on bare metal servers.&lt;/p&gt;

&lt;p&gt;We simulate a scenario in which a DC has provided us with the metal machines: we boot Talos over the network on bare metal with PXE &amp;amp; Sidero Cluster API and connect the nodes together.&lt;/p&gt;

&lt;p&gt;We are going to deploy a DHCP server, but there is no need for next-server boot or TFTP, because both are already implemented in the Sidero controller-manager, including a DHCP proxy.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Sidero v0.6 comes with DHCP proxy which augments the DHCP service provided by the network environment with PXE boot instructions automatically. There is no configuration required besides configuring the network environment DHCP server to assign IPs to the machines.&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The dnsmasq configuration (run here via Docker Compose) would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;services:
  dnsmasq:
    image: quay.io/poseidon/dnsmasq:v0.5.0-32-g4327d60-amd64
    container_name: dnsmasq
    cap_add:
      - NET_ADMIN
    network_mode: host
    &lt;span class="nb"&gt;command&lt;/span&gt;: &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="nt"&gt;-p0&lt;/span&gt;
      &lt;span class="nt"&gt;--dhcp-range&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.3,10.1.1.30
      &lt;span class="nt"&gt;--dhcp-option&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;option:router,10.1.1.1
      &lt;span class="nt"&gt;--log-queries&lt;/span&gt;
      &lt;span class="nt"&gt;--log-dhcp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Management Plane cluster
&lt;/h3&gt;

&lt;p&gt;In order to run Sidero, you first need a Kubernetes “Management cluster”.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes v1.26 or later&lt;/li&gt;
&lt;li&gt;Ability to expose TCP and UDP Services to the workload cluster machines&lt;/li&gt;
&lt;li&gt;Access to the cluster: we deploy a 1-node Talos cluster, so access can be achieved via &lt;code&gt;talosctl kubeconfig&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We create a one-node cluster with &lt;code&gt;allowSchedulingOnControlPlanes: true&lt;/code&gt;, which allows running workloads on control-plane nodes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k get node &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME            STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION   CONTAINER-RUNTIME
talos-74m-2r5   Ready    control-plane   11m   v1.31.2   10.1.1.18     &amp;lt;none&amp;gt;        Talos &lt;span class="o"&gt;(&lt;/span&gt;v1.8.3&lt;span class="o"&gt;)&lt;/span&gt;   6.6.60-talos     containerd://2.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
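&lt;p&gt;For reference, the relevant fragment of the Talos machine configuration is a single field in the &lt;code&gt;cluster&lt;/code&gt; section (shown here as a sketch, not a full config):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;cluster:
  # Allow regular workloads to be scheduled on control-plane nodes,
  # so a single node can serve as the whole management cluster.
  allowSchedulingOnControlPlanes: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;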



&lt;h3&gt;
  
  
  Sidero Cluster API
&lt;/h3&gt;

&lt;p&gt;Sidero is included as a default infrastructure provider in clusterctl, so the installation of both Sidero and the Cluster API (CAPI) components is as simple as using the clusterctl tool.&lt;/p&gt;

&lt;p&gt;First, we are telling Sidero to use &lt;code&gt;hostNetwork: true&lt;/code&gt; so that it binds its ports directly to the host, rather than being available only from inside the cluster. There are many ways of exposing the services, but this is the simplest path for the single-node management cluster. When you scale the management cluster, you will need to use an alternative method, such as an external load balancer or something like &lt;code&gt;MetalLB&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SIDERO_CONTROLLER_MANAGER_HOST_NETWORK&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true
export &lt;/span&gt;&lt;span class="nv"&gt;SIDERO_CONTROLLER_MANAGER_DEPLOYMENT_STRATEGY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Recreate
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SIDERO_CONTROLLER_MANAGER_API_ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.18
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;SIDERO_CONTROLLER_MANAGER_SIDEROLINK_ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.18

clusterctl init &lt;span class="nt"&gt;-b&lt;/span&gt; talos &lt;span class="nt"&gt;-c&lt;/span&gt; talos &lt;span class="nt"&gt;-i&lt;/span&gt; sidero

k get po
NAMESPACE       NAME                                         READY   STATUS    RESTARTS      AGE
cabpt-system    cabpt-controller-manager-6b8b989d68-lwxbw    1/1     Running   0             39h
cacppt-system   cacppt-controller-manager-858fccc654-xzfds   1/1     Running   0             39h
capi-system     capi-controller-manager-564745d4b-hbh7x      1/1     Running   0             39h
cert-manager    cert-manager-5c887c889d-dflnl                1/1     Running   0             39h
cert-manager    cert-manager-cainjector-58f6855565-5wf5z     1/1     Running   0             39h
cert-manager    cert-manager-webhook-6647d6545d-k7qhf        1/1     Running   0             39h
sidero-system   caps-controller-manager-67f75b9cb-9z2fq      1/1     Running   0             39h
sidero-system   sidero-controller-manager-97cb45f57-v7cv2    4/4     Running   0             39h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; http://10.1.1.18:8081/tftp/snp.efi
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 1020416
Content-Type: application/octet-stream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Environment
&lt;/h4&gt;

&lt;p&gt;Environments are a custom resource provided by the Metal Controller Manager. An environment is a codified description of what should be returned by the PXE server when a physical server attempts to PXE boot.&lt;/p&gt;

&lt;p&gt;Environments can be supplied to a given server either at the Server or the ServerClass level. The hierarchy from most to least respected is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;.spec.environmentRef provided at Server level&lt;/span&gt;
&lt;span class="s"&gt;.spec.environmentRef provided at ServerClass level&lt;/span&gt;
&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default"&lt;/span&gt; &lt;span class="s"&gt;Environment created automatically and modified by an administrator&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
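&lt;p&gt;For example, supplying an Environment at the ServerClass level is just a matter of setting &lt;code&gt;environmentRef&lt;/code&gt; (a sketch; the referenced Environment name is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: metal.sidero.dev/v1alpha1
kind: ServerClass
metadata:
  name: masters
spec:
  # Overrides the "default" Environment for every server
  # matched by this class.
  environmentRef:
    name: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;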





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl edit environment default

apiVersion: metal.sidero.dev/v1alpha2
kind: Environment
metadata:
  creationTimestamp: &lt;span class="s2"&gt;"2024-11-23T13:16:12Z"&lt;/span&gt;
  generation: 1
  name: default
  resourceVersion: &lt;span class="s2"&gt;"6527"&lt;/span&gt;
  uid: 9e069ed5-886c-4b3c-9875-fe8e7f453dda
spec:
  initrd:
    url: https://github.com/siderolabs/talos/releases/download/v1.8.3/initramfs-amd64.xz
  kernel:
    args:
    - &lt;span class="nv"&gt;console&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tty0
    - &lt;span class="nv"&gt;console&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ttyS0
    - &lt;span class="nv"&gt;consoleblank&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
    - &lt;span class="nv"&gt;earlyprintk&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ttyS0
    - &lt;span class="nv"&gt;ima_appraise&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;fix
    - &lt;span class="nv"&gt;ima_hash&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sha512
    - &lt;span class="nv"&gt;ima_template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ima-ng
    - &lt;span class="nv"&gt;init_on_alloc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1
    - &lt;span class="nv"&gt;initrd&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;initramfs.xz
    - nvme_core.io_timeout&lt;span class="o"&gt;=&lt;/span&gt;4294967295
    - printk.devkmsg&lt;span class="o"&gt;=&lt;/span&gt;on
    - &lt;span class="nv"&gt;pti&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;on
    - &lt;span class="nv"&gt;slab_nomerge&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
    - talos.platform&lt;span class="o"&gt;=&lt;/span&gt;metal
    url: https://github.com/siderolabs/talos/releases/download/v1.8.3/vmlinuz-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Servers and ServerClasses
&lt;/h4&gt;

&lt;p&gt;Servers are the basic resource of bare metal in the Metal Controller Manager. These are created by PXE booting the servers and allowing them to send a registration request to the management plane.&lt;/p&gt;

&lt;p&gt;Server classes are a way to group distinct server resources. The qualifiers and selector keys allow the administrator to specify criteria upon which to group these servers.&lt;/p&gt;

&lt;p&gt;So here we are creating 2 ServerClasses, one for masters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metal.sidero.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServerClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;masters&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;qualifiers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;hardware&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;system&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;manufacturer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;QEMU&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;totalSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;12&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;GB"&lt;/span&gt;
  &lt;span class="na"&gt;configPatches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/interfaces&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deviceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;busPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0*"&lt;/span&gt;
           &lt;span class="na"&gt;dhcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
           &lt;span class="na"&gt;vip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.1.1.50"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/nameservers&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.1.1.1&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.0.0.1&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/install&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
        &lt;span class="na"&gt;diskSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;100GB'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/cluster/network/cni&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
        &lt;span class="c1"&gt;# name: "custom"&lt;/span&gt;
        &lt;span class="c1"&gt;# urls:&lt;/span&gt;
        &lt;span class="c1"&gt;#   - "https://raw.githubusercontent.com/kubebn/talos-proxmox-kaas/main/manifests/talos/cilium.yaml"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/cluster/proxy&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/kubelet/extraArgs&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;rotate-server-certificates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/cluster/inlineManifests&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
          &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt; 
            &lt;span class="s"&gt;apiVersion: v1&lt;/span&gt;
            &lt;span class="s"&gt;kind: Namespace&lt;/span&gt;
            &lt;span class="s"&gt;metadata:&lt;/span&gt;
                &lt;span class="s"&gt;name: cilium&lt;/span&gt;
                &lt;span class="s"&gt;labels:&lt;/span&gt;
                  &lt;span class="s"&gt;pod-security.kubernetes.io/enforce: "privileged"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/cluster/extraManifests&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://raw.githubusercontent.com/alex1989hu/kubelet-serving-cert-approver/main/deploy/standalone-install.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and workers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metal.sidero.dev/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServerClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;workers&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;qualifiers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;hardware&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;system&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;manufacturer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;QEMU&lt;/span&gt;
        &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;totalSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;19&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;GB"&lt;/span&gt;
  &lt;span class="na"&gt;configPatches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/interfaces&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deviceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;busPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0*"&lt;/span&gt;
           &lt;span class="na"&gt;dhcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/nameservers&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.1.1.1&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;1.0.0.1&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/install&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;disk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
        &lt;span class="na"&gt;diskSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;100GB'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/cluster/proxy&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;disabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/kubelet/extraArgs&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;rotate-server-certificates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's spin up machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 3 nodes with 12GB memory&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;id &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;105..107&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;qm create &lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; vm&lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="nt"&gt;--memory&lt;/span&gt; 12288 &lt;span class="nt"&gt;--cores&lt;/span&gt; 3 &lt;span class="nt"&gt;--net0&lt;/span&gt; virtio,bridge&lt;span class="o"&gt;=&lt;/span&gt;vmbr0 &lt;span class="nt"&gt;--ostype&lt;/span&gt; l26 &lt;span class="nt"&gt;--scsihw&lt;/span&gt; virtio-scsi-pci &lt;span class="nt"&gt;--sata0&lt;/span&gt; lvm1:32 &lt;span class="nt"&gt;--cpu&lt;/span&gt; host &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; qm start &lt;span class="nv"&gt;$id&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;

&lt;span class="c"&gt;# 4 nodes with 19GB memory&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;id &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;108..111&lt;span class="o"&gt;}&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;qm create &lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; vm&lt;span class="nv"&gt;$id&lt;/span&gt; &lt;span class="nt"&gt;--memory&lt;/span&gt; 20288 &lt;span class="nt"&gt;--cores&lt;/span&gt; 3 &lt;span class="nt"&gt;--net0&lt;/span&gt; virtio,bridge&lt;span class="o"&gt;=&lt;/span&gt;vmbr0 &lt;span class="nt"&gt;--ostype&lt;/span&gt; l26 &lt;span class="nt"&gt;--scsihw&lt;/span&gt; virtio-scsi-pci &lt;span class="nt"&gt;--sata0&lt;/span&gt; lvm1:32 &lt;span class="nt"&gt;--cpu&lt;/span&gt; host &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; qm start &lt;span class="nv"&gt;$id&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
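&lt;p&gt;The two loops above differ only in the VM ID range and the memory size, so they can be folded into one small helper. This is just a sketch: &lt;code&gt;qm&lt;/code&gt; (Proxmox's VM CLI) is stubbed with &lt;code&gt;echo&lt;/code&gt; so the loop can be demonstrated outside a Proxmox host; remove the stub to create VMs for real.&lt;/p&gt;

```shell
# qm is stubbed with echo for demonstration; delete this line on a real host.
qm() { echo "qm $*"; }

# create_nodes FIRST_ID LAST_ID MEMORY_MB: create and start a range of VMs.
create_nodes() {
  local first=$1 last=$2 mem=$3 id
  for id in $(seq "$first" "$last"); do
    qm create "$id" --name "vm$id" --memory "$mem" --cores 3 \
      --net0 virtio,bridge=vmbr0 --ostype l26 \
      --scsihw virtio-scsi-pci --sata0 lvm1:32 --cpu host
    qm start "$id"
  done
}

create_nodes 105 107 12288   # masters
create_nodes 108 111 20288   # workers
```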



&lt;p&gt;So in the ServerClass definitions we use the difference in memory allocation to tell masters and workers apart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    hardware:
      - system:
          manufacturer: QEMU
        memory:
          totalSize: &lt;span class="s2"&gt;"12 GB"&lt;/span&gt; &lt;span class="c"&gt;# masters&lt;/span&gt;
          &lt;span class="nt"&gt;---&lt;/span&gt;
          totalSize: &lt;span class="s2"&gt;"19 GB"&lt;/span&gt; &lt;span class="c"&gt;# workers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
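&lt;p&gt;For reference, a complete &lt;code&gt;masters&lt;/code&gt; ServerClass built from that qualifier might look like the sketch below (the field layout follows the &lt;code&gt;metal.sidero.dev/v1alpha2&lt;/code&gt; API used elsewhere in this post; the &lt;code&gt;workers&lt;/code&gt; class is analogous with &lt;code&gt;"19 GB"&lt;/code&gt;). The manifest is written to a file here so it can be reviewed before applying it with &lt;code&gt;kubectl apply -f masters-serverclass.yaml&lt;/code&gt;:&lt;/p&gt;

```shell
# Sketch of a full "masters" ServerClass manifest, written to a file.
# Treat the exact field layout as an assumption to check against the
# Sidero Metal v1alpha2 ServerClass reference.
printf '%s\n' \
  'apiVersion: metal.sidero.dev/v1alpha2' \
  'kind: ServerClass' \
  'metadata:' \
  '  name: masters' \
  'spec:' \
  '  qualifiers:' \
  '    hardware:' \
  '      - system:' \
  '          manufacturer: QEMU' \
  '        memory:' \
  '          totalSize: "12 GB"' \
  > masters-serverclass.yaml
```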





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get serverclasses
NAME      AVAILABLE   IN USE   AGE
any       &lt;span class="o"&gt;[]&lt;/span&gt;          &lt;span class="o"&gt;[]&lt;/span&gt;       18m
masters   &lt;span class="o"&gt;[]&lt;/span&gt;          &lt;span class="o"&gt;[]&lt;/span&gt;       9s
workers   &lt;span class="o"&gt;[]&lt;/span&gt;          &lt;span class="o"&gt;[]&lt;/span&gt;       9s


kubectl get servers
NAME                                   HOSTNAME   ACCEPTED   CORDONED   ALLOCATED   CLEAN   POWER   AGE
13f56641-ff59-467c-94df-55a2861146d9   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      96s
26f39da5-c622-42e0-b160-ff0eb58eb56b   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      75s
4e56c769-8a35-4a68-b90b-0e1dca530fb0   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      107s
5211912d-8f32-4ea4-8738-aaff57386391   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      96s
a0b83613-faa9-468a-9289-1aa270117d54   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      104s
a6c8afca-15b9-4254-82b4-91bbcd76dba0   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      96s
ec39bf0e-632d-4dca-9ae0-0b3509368de6   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      108s
f426397a-76ff-4ea6-815a-2be97265f5e6   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      107s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can describe a server to see the rest of its details:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl get server 10ea52da-e1fc-4b83-81ef-b9cd40d1d25e -o yaml&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metal.sidero.dev/v1alpha2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Server&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2024-11-25T04:26:35Z"&lt;/span&gt;
  &lt;span class="na"&gt;finalizers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;storage.finalizers.server.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;generation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10ea52da-e1fc-4b83-81ef-b9cd40d1d25e&lt;/span&gt;
  &lt;span class="na"&gt;resourceVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;562154"&lt;/span&gt;
  &lt;span class="na"&gt;uid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4c08c327-8aba-4af0-8204-5985b2a76e95&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;accepted&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;hardware&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;compute&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;processorCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;coreCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
        &lt;span class="na"&gt;manufacturer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;QEMU&lt;/span&gt;
        &lt;span class="na"&gt;productName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pc-i440fx-9.0&lt;/span&gt;
        &lt;span class="na"&gt;speed&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2000&lt;/span&gt;
        &lt;span class="na"&gt;threadCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;totalCoreCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;totalThreadCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;moduleCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;modules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;manufacturer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;QEMU&lt;/span&gt;
        &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;12288&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ROM&lt;/span&gt;
      &lt;span class="na"&gt;totalSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;12 GB&lt;/span&gt;
    &lt;span class="na"&gt;network&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;interfaceCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;interfaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;flags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;broadcast|multicast&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
        &lt;span class="na"&gt;mac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;36:d0:4f:23:f7:03&lt;/span&gt;
        &lt;span class="na"&gt;mtu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1500&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bond0&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;flags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;broadcast&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
        &lt;span class="na"&gt;mac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;d6:9b:8b:99:6f:d0&lt;/span&gt;
        &lt;span class="na"&gt;mtu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1500&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dummy0&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.1.1.6/24&lt;/span&gt;
        &lt;span class="na"&gt;flags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;up|broadcast|multicast&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;
        &lt;span class="na"&gt;mac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bc:24:11:a1:77:25&lt;/span&gt;
        &lt;span class="na"&gt;mtu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1500&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eth0&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;deviceCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;devices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deviceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/dev/sda&lt;/span&gt;
        &lt;span class="na"&gt;productName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;QEMU HARDDISK&lt;/span&gt;
        &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;34359738368&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HDD&lt;/span&gt;
        &lt;span class="na"&gt;wwid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t10.ATA     QEMU HARDDISK                           QM00005&lt;/span&gt;
      &lt;span class="na"&gt;totalSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;32 GB&lt;/span&gt;
    &lt;span class="na"&gt;system&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;manufacturer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;QEMU&lt;/span&gt;
      &lt;span class="na"&gt;productName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Standard PC (i440FX + PIIX, 1996)&lt;/span&gt;
      &lt;span class="na"&gt;uuid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10ea52da-e1fc-4b83-81ef-b9cd40d1d25e&lt;/span&gt;
  &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;(none)&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;addresses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.1.1.6&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;InternalIP&lt;/span&gt;
  &lt;span class="na"&gt;power&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;on"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note in the output above that the newly registered servers are not accepted. In order for a server to be eligible for consideration, it must be marked as accepted. Before a Server is accepted, no write action will be performed against it. This default is for safety (don’t accidentally delete something just because it was plugged in) and security (make sure you know the machine before it is given credentials to communicate).&lt;/p&gt;

&lt;p&gt;There are two ways to accept the server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch server 00000000-0000-0000-0000-d05099d33360 &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'json'&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'[{"op": "replace", "path": "/spec/accepted", "value": true}]'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or you can enable auto-acceptance by passing the &lt;code&gt;--auto-accept-servers=true&lt;/code&gt; flag to &lt;code&gt;sidero-controller-manager&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl edit deploy sidero-controller-manager &lt;span class="nt"&gt;-n&lt;/span&gt; sidero-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
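&lt;p&gt;With more than a handful of servers, patching them one by one gets tedious; a small loop can accept a whole list at once. In this sketch &lt;code&gt;kubectl&lt;/code&gt; is stubbed with &lt;code&gt;echo&lt;/code&gt; so the loop can be demonstrated without a management cluster; drop the stub to patch real Server resources:&lt;/p&gt;

```shell
# kubectl is stubbed for demonstration; delete this line to run for real.
kubectl() { echo "kubectl $*"; }

# Accept every server UUID passed as an argument.
accept_all() {
  local uuid
  for uuid in "$@"; do
    kubectl patch server "$uuid" --type='json' \
      -p='[{"op": "replace", "path": "/spec/accepted", "value": true}]'
  done
}

# In a real cluster the UUID list could come from:
#   kubectl get servers -o jsonpath='{.items[*].metadata.name}'
accept_all 13f56641-ff59-467c-94df-55a2861146d9 \
           26f39da5-c622-42e0-b160-ff0eb58eb56b
```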



&lt;p&gt;After the servers are accepted, they appear as available in their matching ServerClasses, but they are still not "IN USE":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get serverclass &lt;span class="nt"&gt;-A&lt;/span&gt;
NAME      AVAILABLE                                                                                                                                                                                                                                                                                                                   IN USE   AGE
any       &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"13f56641-ff59-467c-94df-55a2861146d9"&lt;/span&gt;,&lt;span class="s2"&gt;"26f39da5-c622-42e0-b160-ff0eb58eb56b"&lt;/span&gt;,&lt;span class="s2"&gt;"4e56c769-8a35-4a68-b90b-0e1dca530fb0"&lt;/span&gt;,&lt;span class="s2"&gt;"5211912d-8f32-4ea4-8738-aaff57386391"&lt;/span&gt;,&lt;span class="s2"&gt;"a0b83613-faa9-468a-9289-1aa270117d54"&lt;/span&gt;,&lt;span class="s2"&gt;"a6c8afca-15b9-4254-82b4-91bbcd76dba0"&lt;/span&gt;,&lt;span class="s2"&gt;"ec39bf0e-632d-4dca-9ae0-0b3509368de6"&lt;/span&gt;,&lt;span class="s2"&gt;"f426397a-76ff-4ea6-815a-2be97265f5e6"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;   &lt;span class="o"&gt;[]&lt;/span&gt;       21m
masters   &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"4e56c769-8a35-4a68-b90b-0e1dca530fb0"&lt;/span&gt;,&lt;span class="s2"&gt;"ec39bf0e-632d-4dca-9ae0-0b3509368de6"&lt;/span&gt;,&lt;span class="s2"&gt;"f426397a-76ff-4ea6-815a-2be97265f5e6"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;                                                                                                                                                                                                      &lt;span class="o"&gt;[]&lt;/span&gt;       3m27s
workers   &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"13f56641-ff59-467c-94df-55a2861146d9"&lt;/span&gt;,&lt;span class="s2"&gt;"26f39da5-c622-42e0-b160-ff0eb58eb56b"&lt;/span&gt;,&lt;span class="s2"&gt;"5211912d-8f32-4ea4-8738-aaff57386391"&lt;/span&gt;,&lt;span class="s2"&gt;"a0b83613-faa9-468a-9289-1aa270117d54"&lt;/span&gt;,&lt;span class="s2"&gt;"a6c8afca-15b9-4254-82b4-91bbcd76dba0"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;                                                                                                                        &lt;span class="o"&gt;[]&lt;/span&gt;       3m27s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While developing config patches, it is usually convenient to inspect the generated config (with patches applied) before an actual server is provisioned with it.&lt;/p&gt;

&lt;p&gt;This can be achieved by querying the metadata server endpoint directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://&lt;span class="nv"&gt;$PUBLIC_IP&lt;/span&gt;:8081/configdata?uuid&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_UUID&lt;/span&gt; &lt;span class="c"&gt;# example "http://10.1.1.18:8081/configdata?uuid=4e56c769-8a35-4a68-b90b-0e1dca530fb0"&lt;/span&gt;
version: v1alpha1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
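&lt;p&gt;The same endpoint can be queried in a loop to spot-check the rendered config for several servers at once (a sketch; &lt;code&gt;curl&lt;/code&gt; is stubbed with &lt;code&gt;echo&lt;/code&gt; so the loop runs offline, and the address is the example one used in this post):&lt;/p&gt;

```shell
# curl is stubbed for demonstration; delete this line to query Sidero's
# real metadata endpoint on port 8081.
curl() { echo "GET $1"; }

PUBLIC_IP=10.1.1.18   # example address used in this post
for uuid in 4e56c769-8a35-4a68-b90b-0e1dca530fb0 \
            ec39bf0e-632d-4dca-9ae0-0b3509368de6; do
  curl "http://$PUBLIC_IP:8081/configdata?uuid=$uuid"
done
```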



&lt;h4&gt;
  
  
  Create a Workload Cluster
&lt;/h4&gt;

&lt;p&gt;We are now ready to generate the configuration manifest templates for our first workload cluster.&lt;/p&gt;

&lt;p&gt;There are several configuration parameters that should be set in order for the templating to work properly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CONTROL_PLANE_SERVERCLASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;masters
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;WORKER_SERVERCLASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;workers
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TALOS_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1.8.3
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;KUBERNETES_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;v1.31.2
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CONTROL_PLANE_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6443
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CONTROL_PLANE_ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.1.1.50

clusterctl generate cluster cluster-0 &lt;span class="nt"&gt;-i&lt;/span&gt; sidero &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; cluster-0.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
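&lt;p&gt;An unset variable makes the templating fail in non-obvious ways, so it can help to assert that everything is exported before running &lt;code&gt;clusterctl&lt;/code&gt; (a sketch; the variable names and values are the ones exported above):&lt;/p&gt;

```shell
# Fail fast if any templating variable is unset or empty (uses bash
# indirect expansion, so run this with bash).
require_vars() {
  local var
  for var in "$@"; do
    if [ -z "${!var:-}" ]; then
      echo "error: $var is not set"
      return 1
    fi
  done
  echo "all templating variables set"
}

export CONTROL_PLANE_SERVERCLASS=masters WORKER_SERVERCLASS=workers
export TALOS_VERSION=v1.8.3 KUBERNETES_VERSION=v1.31.2
export CONTROL_PLANE_PORT=6443 CONTROL_PLANE_ENDPOINT=10.1.1.50

require_vars CONTROL_PLANE_SERVERCLASS WORKER_SERVERCLASS TALOS_VERSION \
             KUBERNETES_VERSION CONTROL_PLANE_PORT CONTROL_PLANE_ENDPOINT
```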



&lt;p&gt;One of the pain points when building a high-availability controlplane is giving clients a single IP or URL at which they can reach any of the controlplane nodes. The most common approaches - reverse proxy, load balancer, BGP, and DNS - all require external resources and add complexity to setting up Kubernetes.&lt;/p&gt;

&lt;p&gt;To simplify cluster creation, Talos Linux supports a “Virtual” IP (VIP) address to access the Kubernetes API server, providing high availability with no other resources required.&lt;/p&gt;

&lt;p&gt;For the cluster endpoint we use the &lt;code&gt;10.1.1.50&lt;/code&gt; IP address, which will be our shared Virtual IP. We can set it directly in the ServerClass:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;configPatches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;op&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;add&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/machine/network/interfaces&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deviceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;busPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0*"&lt;/span&gt; &lt;span class="c1"&gt;# any network device&lt;/span&gt;
           &lt;span class="na"&gt;dhcp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
           &lt;span class="na"&gt;vip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;ip&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.1.1.50"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
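&lt;p&gt;The same patch can also be kept as a standalone machine-config fragment for reuse outside the ServerClass (a sketch; the &lt;code&gt;vip-patch.yaml&lt;/code&gt; filename is arbitrary and the values mirror the patch above):&lt;/p&gt;

```shell
# Write the VIP patch as a standalone Talos machine-config fragment.
# The filename is arbitrary; the values mirror the ServerClass patch.
printf '%s\n' \
  'machine:' \
  '  network:' \
  '    interfaces:' \
  '      - deviceSelector:' \
  '          busPath: "0*"   # any network device' \
  '        dhcp: true' \
  '        vip:' \
  '          ip: "10.1.1.50"' \
  > vip-patch.yaml
```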



&lt;p&gt;The generated &lt;code&gt;cluster-0.yaml&lt;/code&gt; manifest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster.x-k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cluster&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterNetwork&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;pods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cidrBlocks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.244.0.0/16&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;cidrBlocks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;10.96.0.0/12&lt;/span&gt;
  &lt;span class="na"&gt;controlPlaneRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controlplane.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TalosControlPlane&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-cp&lt;/span&gt;
  &lt;span class="na"&gt;infrastructureRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MetalCluster&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MetalCluster&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;controlPlaneEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.1.1.50&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6443&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MetalMachineTemplate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-cp&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serverClassRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metal.sidero.dev/v1alpha2&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServerClass&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;masters&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controlplane.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TalosControlPlane&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-cp&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;controlPlaneConfig&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;controlplane&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;generateType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;controlplane&lt;/span&gt;
      &lt;span class="na"&gt;talosVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1.8.3&lt;/span&gt;
  &lt;span class="na"&gt;infrastructureTemplate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MetalMachineTemplate&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-cp&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1.31.2&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bootstrap.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TalosConfigTemplate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-workers&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;generateType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;join&lt;/span&gt;
      &lt;span class="na"&gt;talosVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1.8.3&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster.x-k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MachineDeployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-workers&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;bootstrap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;configRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bootstrap.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
          &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TalosConfigTemplate&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-workers&lt;/span&gt;
      &lt;span class="na"&gt;clusterName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0&lt;/span&gt;
      &lt;span class="na"&gt;infrastructureRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MetalMachineTemplate&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-workers&lt;/span&gt;
      &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1.31.2&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infrastructure.cluster.x-k8s.io/v1alpha3&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MetalMachineTemplate&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-0-workers&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serverClassRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metal.sidero.dev/v1alpha2&lt;/span&gt;
        &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ServerClass&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;workers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you are satisfied with your configuration, go ahead and apply it to Sidero:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; cluster-0.yaml
cluster.cluster.x-k8s.io/cluster-0 created
metalcluster.infrastructure.cluster.x-k8s.io/cluster-0 created
metalmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-0-cp created
taloscontrolplane.controlplane.cluster.x-k8s.io/cluster-0-cp created
talosconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-0-workers created
machinedeployment.cluster.x-k8s.io/cluster-0-workers created
metalmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-0-workers created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, Sidero will allocate Servers according to the requests in the cluster manifest. Once allocated, each of those machines will have Talos installed, receive its machine configuration, and join the cluster.&lt;/p&gt;

&lt;p&gt;You can watch the progress of the Servers being selected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get servers,machines,clusters
NAME                                                           HOSTNAME   ACCEPTED   CORDONED   ALLOCATED   CLEAN   POWER   AGE
server.metal.sidero.dev/0859ab1b-32d1-4acc-bb91-74c4eafe8017   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      29m
server.metal.sidero.dev/25ba21df-363c-4345-bbb6-b2f21487d103   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      29m
server.metal.sidero.dev/6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      29m
server.metal.sidero.dev/992f741b-fc3a-48e4-814b-c2f351b320eb   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      29m
server.metal.sidero.dev/995d057f-4f61-4359-8e78-9a043904fe3a   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      29m
server.metal.sidero.dev/9f909243-9333-461c-bcd3-dce874d5c36a   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      29m
server.metal.sidero.dev/d6141070-cb35-47d7-8f12-447ee936382a   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                              true    &lt;/span&gt;on      29m

NAME                                          CLUSTER     NODENAME        PROVIDERID                                      PHASE     AGE   VERSION
machine.cluster.x-k8s.io/cluster-0-cp-89hsd   cluster-0   talos-rlj-gk7   sidero://0859ab1b-32d1-4acc-bb91-74c4eafe8017   Running   21m   v1.31.2

NAME                                 CLUSTERCLASS   PHASE         AGE   VERSION
cluster.cluster.x-k8s.io/cluster-0                  Provisioned   21m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During the Provisioning phase, a Server will become allocated, the hardware will be powered up, Talos will be installed onto it, and it will be rebooted into Talos. Depending on the hardware involved, this may take several minutes. Currently, only one server is allocated because the cluster-0.yaml manifest specifies a single control-plane replica and zero worker replicas.&lt;/p&gt;

&lt;h4&gt;
  
  
  Retrieve the talosconfig &amp;amp; kubeconfig
&lt;/h4&gt;

&lt;p&gt;In order to interact with the new machines (outside of Kubernetes), you will need to obtain the talosctl client configuration, or talosconfig. You can do this by retrieving the secret from the Sidero management cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get talosconfig &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get talosconfig &lt;span class="nt"&gt;--no-headers&lt;/span&gt; | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'NR==1{print $1}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.status.talosConfig}'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; talosconfig

kubectl describe server 0859ab1b-32d1-4acc-bb91-74c4eafe8017 | &lt;span class="nb"&gt;grep &lt;/span&gt;Address
        Addresses:
  Addresses:
    Address:  10.1.1.5

talosctl &lt;span class="nt"&gt;--talosconfig&lt;/span&gt; talosconfig &lt;span class="nt"&gt;-n&lt;/span&gt; 10.1.1.5 &lt;span class="nt"&gt;-e&lt;/span&gt; 10.1.1.5 kubeconfig &lt;span class="nt"&gt;-f&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Check access and scale the cluster
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k get node
NAME            STATUS     ROLES           AGE   VERSION
talos-rlj-gk7   NotReady   control-plane   20m   v1.31.2

k get po &lt;span class="nt"&gt;-A&lt;/span&gt;
NAMESPACE     NAME                                             READY   STATUS    RESTARTS      AGE
kube-system   coredns-68d75fd545-6rxp7                         0/1     Pending   0             20m
kube-system   coredns-68d75fd545-h9xrx                         0/1     Pending   0             20m
kube-system   kube-apiserver-talos-rlj-gk7                     1/1     Running   0             19m
kube-system   kube-controller-manager-talos-rlj-gk7            1/1     Running   2 &lt;span class="o"&gt;(&lt;/span&gt;20m ago&lt;span class="o"&gt;)&lt;/span&gt;   18m
kube-system   kube-scheduler-talos-rlj-gk7                     1/1     Running   2 &lt;span class="o"&gt;(&lt;/span&gt;20m ago&lt;span class="o"&gt;)&lt;/span&gt;   18m
kube-system   metrics-server-54bf7cdd6-tlhg5                   0/1     Pending   0             20m
kube-system   talos-cloud-controller-manager-df65c8444-47w49   0/1     Pending   0             20m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The node is in NotReady status because no CNI is installed yet. Let's install Cilium with the following Helm values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;ipam&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes&lt;/span&gt;

&lt;span class="na"&gt;k8sServiceHost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
&lt;span class="na"&gt;k8sServicePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;7445&lt;/span&gt;

&lt;span class="na"&gt;kubeProxyReplacement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;installNoConntrackIptablesRules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;enableK8sEndpointSlice&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;localRedirectPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;healthChecking&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;routingMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;native&lt;/span&gt;
&lt;span class="na"&gt;autoDirectNodeRoutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;ipv4NativeRoutingCIDR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10.244.0.0/16&lt;/span&gt;

&lt;span class="na"&gt;loadBalancer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hybrid&lt;/span&gt;
  &lt;span class="na"&gt;algorithm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;maglev&lt;/span&gt;
  &lt;span class="na"&gt;acceleration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;best-effort&lt;/span&gt;
  &lt;span class="na"&gt;serviceTopology&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;bpf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;masquerade&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;ipv4&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;hostServices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;hostPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;externalIPs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;hostFirewall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;ingressController&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;envoy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="na"&gt;hubble&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;rollOutPods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;prometheus&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;node-role.kubernetes.io/control-plane&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;operator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Exists&lt;/span&gt;
      &lt;span class="na"&gt;effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NoSchedule&lt;/span&gt;

&lt;span class="na"&gt;cgroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;autoMount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;hostRoot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/sys/fs/cgroup&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;4Gi&lt;/span&gt;
  &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;100m&lt;/span&gt;
    &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;128Mi&lt;/span&gt;

&lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capabilities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ciliumAgent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CHOWN&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;KILL&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NET_ADMIN&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NET_RAW&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;IPC_LOCK&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SYS_ADMIN&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SYS_RESOURCE&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;DAC_OVERRIDE&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;FOWNER&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SETGID&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SETUID&lt;/span&gt;
    &lt;span class="na"&gt;cleanCiliumState&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NET_ADMIN&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SYS_ADMIN&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;SYS_RESOURCE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Same command to install cilium&lt;/span&gt;
cilium &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--version&lt;/span&gt; 1.16.4 &lt;span class="nt"&gt;-f&lt;/span&gt; cilium.yaml &lt;span class="nt"&gt;-n&lt;/span&gt; cilium
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
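Once the install command returns, it is worth confirming that Cilium is healthy and the node becomes Ready. A quick check, assuming the cilium CLI is installed locally and your kubeconfig points at the new workload cluster:

```shell
# Wait until the Cilium agent and operator report readiness
cilium status --wait -n cilium

# The node should transition from NotReady to Ready once the CNI is running
kubectl get nodes
```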



&lt;p&gt;Since we have more machines available, we can scale both the control plane (TalosControlPlane) and the workers (MachineDeployment) of any cluster after it has been deployed, just as we would scale normal Kubernetes Deployments.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get TalosControlPlane
NAME           READY   INITIALIZED   REPLICAS   READY REPLICAS   UNAVAILABLE REPLICAS
cluster-0-cp   &lt;span class="nb"&gt;true    true          &lt;/span&gt;1          1

kubectl get MachineDeployment
NAME                CLUSTER     REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE   VERSION
cluster-0-workers   cluster-0                                              Running   80m   v1.31.2


kubectl scale taloscontrolplane cluster-0-cp &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
taloscontrolplane.controlplane.cluster.x-k8s.io/cluster-0-cp scaled

kubectl scale MachineDeployment cluster-0-workers &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4
machinedeployment.cluster.x-k8s.io/cluster-0-workers scaled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
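While the new replicas provision, you can follow the machines coming up from the management cluster; --watch streams phase changes until you interrupt it:

```shell
# Stream machine phases as the scaled-up replicas move to Running
kubectl get machines --watch
```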



&lt;p&gt;Now we can see that all of our servers are listed as "IN USE" in their server classes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get serverclass
NAME      AVAILABLE   IN USE                                                                                                                                                                                                                                                                               AGE
any       &lt;span class="o"&gt;[]&lt;/span&gt;          &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0859ab1b-32d1-4acc-bb91-74c4eafe8017"&lt;/span&gt;,&lt;span class="s2"&gt;"25ba21df-363c-4345-bbb6-b2f21487d103"&lt;/span&gt;,&lt;span class="s2"&gt;"6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6"&lt;/span&gt;,&lt;span class="s2"&gt;"992f741b-fc3a-48e4-814b-c2f351b320eb"&lt;/span&gt;,&lt;span class="s2"&gt;"995d057f-4f61-4359-8e78-9a043904fe3a"&lt;/span&gt;,&lt;span class="s2"&gt;"9f909243-9333-461c-bcd3-dce874d5c36a"&lt;/span&gt;,&lt;span class="s2"&gt;"d6141070-cb35-47d7-8f12-447ee936382a"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;   101m
masters   &lt;span class="o"&gt;[]&lt;/span&gt;          &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0859ab1b-32d1-4acc-bb91-74c4eafe8017"&lt;/span&gt;,&lt;span class="s2"&gt;"6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6"&lt;/span&gt;,&lt;span class="s2"&gt;"995d057f-4f61-4359-8e78-9a043904fe3a"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;                                                                                                                                                               98m
workers   &lt;span class="o"&gt;[]&lt;/span&gt;          &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"25ba21df-363c-4345-bbb6-b2f21487d103"&lt;/span&gt;,&lt;span class="s2"&gt;"992f741b-fc3a-48e4-814b-c2f351b320eb"&lt;/span&gt;,&lt;span class="s2"&gt;"9f909243-9333-461c-bcd3-dce874d5c36a"&lt;/span&gt;,&lt;span class="s2"&gt;"d6141070-cb35-47d7-8f12-447ee936382a"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;                                                                                                                        98m                                                                               &lt;span class="o"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And verify that all servers are now allocated and every machine is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k get servers,machines,clusters
NAME                                                           HOSTNAME   ACCEPTED   CORDONED   ALLOCATED   CLEAN   POWER   AGE
server.metal.sidero.dev/0859ab1b-32d1-4acc-bb91-74c4eafe8017   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m
server.metal.sidero.dev/25ba21df-363c-4345-bbb6-b2f21487d103   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m
server.metal.sidero.dev/6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m
server.metal.sidero.dev/992f741b-fc3a-48e4-814b-c2f351b320eb   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m
server.metal.sidero.dev/995d057f-4f61-4359-8e78-9a043904fe3a   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m
server.metal.sidero.dev/9f909243-9333-461c-bcd3-dce874d5c36a   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m
server.metal.sidero.dev/d6141070-cb35-47d7-8f12-447ee936382a   &lt;span class="o"&gt;(&lt;/span&gt;none&lt;span class="o"&gt;)&lt;/span&gt;     &lt;span class="nb"&gt;true                  true        false   &lt;/span&gt;on      99m

NAME                                                     CLUSTER     NODENAME        PROVIDERID                                      PHASE     AGE     VERSION
machine.cluster.x-k8s.io/cluster-0-cp-22d6n              cluster-0   talos-m0p-gke   sidero://995d057f-4f61-4359-8e78-9a043904fe3a   Running   10m     v1.31.2
machine.cluster.x-k8s.io/cluster-0-cp-544tn              cluster-0   talos-5le-4ou   sidero://6c9aa61c-b9c6-4b69-8b77-8a46b5f217c6   Running   10m     v1.31.2
machine.cluster.x-k8s.io/cluster-0-cp-89hsd              cluster-0   talos-rlj-gk7   sidero://0859ab1b-32d1-4acc-bb91-74c4eafe8017   Running   91m     v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-8brp6   cluster-0   talos-rvd-w8m   sidero://9f909243-9333-461c-bcd3-dce874d5c36a   Running   9m52s   v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-hklgh   cluster-0   talos-aps-1nj   sidero://d6141070-cb35-47d7-8f12-447ee936382a   Running   9m52s   v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-krwxw   cluster-0   talos-72m-mif   sidero://25ba21df-363c-4345-bbb6-b2f21487d103   Running   9m53s   v1.31.2
machine.cluster.x-k8s.io/cluster-0-workers-lgphx-xs6fm   cluster-0   talos-9ww-ia0   sidero://992f741b-fc3a-48e4-814b-c2f351b320eb   Running   9m52s   v1.31.2

NAME                                 CLUSTERCLASS   PHASE         AGE   VERSION
cluster.cluster.x-k8s.io/cluster-0                  Provisioned   91m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the workload cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k get node
NAME            STATUS   ROLES           AGE     VERSION
talos-5le-4ou   Ready    control-plane   9m31s   v1.31.2
talos-72m-mif   Ready    &amp;lt;none&amp;gt;          9m11s   v1.31.2
talos-9ww-ia0   Ready    &amp;lt;none&amp;gt;          9m37s   v1.31.2
talos-aps-1nj   Ready    &amp;lt;none&amp;gt;          9m6s    v1.31.2
talos-m0p-gke   Ready    control-plane   9m29s   v1.31.2
talos-rlj-gk7   Ready    control-plane   89m     v1.31.2
talos-rvd-w8m   Ready    &amp;lt;none&amp;gt;          9m42s   v1.31.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>kubernetes</category>
      <category>proxmox</category>
      <category>talos</category>
      <category>sidero</category>
    </item>
    <item>
      <title>Quickly Prepare Your Tooling with Nixery.dev: No Dockerfile Needed</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:46:40 +0000</pubDate>
      <link>https://dev.to/bnovickovs/quickly-prepare-your-tooling-with-nixerydev-no-dockerfile-needed-596b</link>
      <guid>https://dev.to/bnovickovs/quickly-prepare-your-tooling-with-nixerydev-no-dockerfile-needed-596b</guid>
      <description>&lt;p&gt;If you’ve ever been too lazy to write a proper Dockerfile to set up your environment, Nixery.dev has got you covered. It allows you to quickly assemble containers with the exact tools you need, without the hassle of creating and managing a Dockerfile.&lt;/p&gt;

&lt;p&gt;Let’s say you need an environment with kubectl, jq, yq, and helm all in place. Instead of writing a Dockerfile or manually installing each tool, you can get a pre-configured image in no time using Nixery.dev.&lt;/p&gt;

&lt;p&gt;With Nixery, all you need to do is specify the tools you need in the URL. For example, to get a shell with kubectl, jq, yq, and helm, just use this URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; nixery.dev/shell/kubectl/jq/yq/helm

Unable to find image &lt;span class="s1"&gt;'nixery.dev/shell/kubectl/jq/yq/helm:latest'&lt;/span&gt; locally
latest: Pulling from shell/kubectl/jq/yq/helm
7bd0d820be49: Already exists
cdd7895d7577: Already exists
de6cb50335aa: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;11720475aae7: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;b8ff367d8a2d: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;f6ff9498ab9c: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;1d17c684d513: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;913f63e3401f: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;14a855609149: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;3d5094a0f4f3: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;e90094fbdef4: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;7e3feab0b197: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;262f49f707da: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;9a33ef4edb1b: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;2b2ece68abe5: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;505ed7a4fa00: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;f68eb5b9e907: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;7d2670c677e3: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;b5eceb4adf47: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;d0f6822fe6ec: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;90548e6cb522: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;c06288390ee6: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;a6592313490f: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;a7172b4ea41b: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;4e6f2304a814: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;a90445f26e1d: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;ff7b4358edf4: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;cff643889033: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;046661f70676: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;fa94a058e6b0: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;49d2ea9475ef: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;b326b3073570: Pull &lt;span class="nb"&gt;complete
&lt;/span&gt;Digest: sha256:da377dc6d7e9091fbc30eecb8aeee5dd89aef6571fba018594c68d581a2f911e
Status: Downloaded newer image &lt;span class="k"&gt;for &lt;/span&gt;nixery.dev/shell/kubectl/jq/yq/helm:latest

bash-5.2# helm &lt;span class="nt"&gt;--version&lt;/span&gt;
Helm 0.9.0
bash-5.2# jq &lt;span class="nt"&gt;--version&lt;/span&gt;
jq-1.7.1
bash-5.2# yq &lt;span class="nt"&gt;--version&lt;/span&gt;
yq 3.4.3
bash-5.2# kubectl version
Client Version: v1.32.1
Kustomize Version: v5.5.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will give you a shell with all the necessary tools pre-installed, ready to use right away.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quick and Easy: No need for Dockerfile management or custom setup scripts.&lt;/li&gt;
&lt;li&gt;Customizable: Add only the tools you need by simply modifying the URL.&lt;/li&gt;
&lt;li&gt;Lightweight: Avoid creating large, bloated images with unnecessary dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next time you're in a hurry, or just feeling a bit too lazy to create a Dockerfile, Nixery.dev is a great tool to have in your back pocket!&lt;/p&gt;
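The same URL scheme also works for one-off commands, since each path segment is simply a nixpkgs package name. A small sketch, assuming Docker can reach nixery.dev:

```shell
# Assemble an image containing git on the fly and run a single command
docker run --rm nixery.dev/shell/git git --version
```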

</description>
      <category>docker</category>
      <category>containers</category>
      <category>dockerfile</category>
      <category>nixery</category>
    </item>
    <item>
      <title>Kubernetes As a Service (KAAS) in Proxmox using Talos</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:38:10 +0000</pubDate>
      <link>https://dev.to/bnovickovs/kubernetes-as-a-service-kaas-in-proxmox-using-talos-3egh</link>
      <guid>https://dev.to/bnovickovs/kubernetes-as-a-service-kaas-in-proxmox-using-talos-3egh</guid>
      <description>&lt;p&gt;The Talos-Proxmox-KaaS repository demonstrates how to use Talos Linux, Sidero (CAPI), FluxCD, and Proxmox Operator to provision Kubernetes clusters in a GitOps-driven environment. This setup integrates various tools such as Talos, Proxmox, Cilium, and FluxCD for seamless cluster management and deployment.&lt;/p&gt;

&lt;p&gt;Key Features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proxmox &amp;amp; Talos Integration: Using Terraform to provision and manage clusters.&lt;/li&gt;
&lt;li&gt;GitOps: Automates Kubernetes application management with FluxCD.&lt;/li&gt;
&lt;li&gt;Infrastructure Automation: Automates provisioning and deployment using tools like Packer, Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/kubebn/talos-proxmox-kaas" rel="noopener noreferrer"&gt;https://github.com/kubebn/talos-proxmox-kaas&lt;/a&gt;&lt;/p&gt;

</description>
      <category>proxmox</category>
      <category>kubernetes</category>
      <category>fluxcd</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Managing Kiali Instance Finalizers During Helm Chart Uninstallation in Kiali-Operator</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:31:47 +0000</pubDate>
      <link>https://dev.to/bnovickovs/managing-kiali-instance-finalizers-during-helm-chart-uninstallation-in-kiali-operator-4ga2</link>
      <guid>https://dev.to/bnovickovs/managing-kiali-instance-finalizers-during-helm-chart-uninstallation-in-kiali-operator-4ga2</guid>
      <description>&lt;p&gt;When deploying the Kiali-operator using the Helm chart from &lt;a href="https://github.com/kiali/helm-charts/tree/master/kiali-operator" rel="noopener noreferrer"&gt;Kiali's GitHub repository&lt;/a&gt;, you might configure it to create a Kiali instance after the operator is deployed. In such cases, you may use the Custom Resource (CR) value provided in the Helm chart: &lt;a href="https://github.com/kiali/helm-charts/blob/56009cff505e71c21d9449726827c85999f8a3f7/kiali-operator/values.yaml#L92" rel="noopener noreferrer"&gt;Kiali-Operator Values.yaml&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;However, an issue arises when you try to uninstall the Kiali-operator Helm chart. The Kiali instance will have a finalizer attached to it, which prevents its deletion. This results in the successful uninstallation of the operator, but the Kiali instance remains running in your cluster.&lt;/p&gt;

&lt;p&gt;To resolve this, you have two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Skip the CR object in the Helm values and create your own Kiali instance manifest. This way, you manage the Kiali instance directly, outside the Helm chart's lifecycle.&lt;/li&gt;
&lt;li&gt;Automate the deletion of the Kiali instance by adding a Helm hook job that will run before the operator is uninstalled. This job will patch the Kiali instance to remove the finalizer and then delete the instance, ensuring the operator is fully removed.&lt;/li&gt;
&lt;/ol&gt;
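
&lt;p&gt;For option 1, the self-managed manifest can be as small as the following sketch (API version as used by the Kiali operator; the namespace and spec values are illustrative, see the Kiali CR reference for the full set of options):&lt;/p&gt;

```yaml
# Minimal self-managed Kiali instance (values illustrative)
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  istio_namespace: istio-system
  auth:
    strategy: anonymous
```

&lt;p&gt;Because Helm never owns this object, uninstalling the operator chart leaves it under your control, and you delete it explicitly when you're done.&lt;/p&gt;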

&lt;p&gt;Here’s how you can define a Helm hook job to handle this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Job&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;delete-kiali-cr"&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Release.Namespace&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kiali-operator"&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;helm.sh/hook"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pre-delete&lt;/span&gt;
    &lt;span class="s"&gt;"helm.sh/hook-weight"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-1"&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;helm.sh/hook-delete-policy"&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;before-hook-creation,hook-succeeded&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;include "kiali-operator.fullname" .&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;serviceAccountName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;include "kiali-operator.fullname" .&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;  &lt;span class="c1"&gt;# Use the service account defined above&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;kubectl"&lt;/span&gt;
          &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bitnami/kubectl:latest"&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-c"&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
              &lt;span class="s"&gt;kubectl patch kiali kiali -p '{"metadata":{"finalizers":null}}' --type=merge -n "{{ .Values.cr.namespace }}"&lt;/span&gt;
              &lt;span class="s"&gt;kubectl delete kiali kiali -n "{{ .Values.cr.namespace }}"&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
      &lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Values.tolerations&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;tolerations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- toYaml .Values.tolerations | nindent 8&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- if .Values.nodeSelector&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="na"&gt;nodeSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- toYaml .Values.nodeSelector | nindent 8&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
      &lt;span class="pi"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt;- end&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
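
&lt;p&gt;Note that the hook job's service account must be allowed to patch and delete Kiali custom resources. The operator's own service account normally has this; if you point the hook at a dedicated account instead, the required RBAC looks roughly like this (names and namespace are illustrative):&lt;/p&gt;

```yaml
# RBAC sketch for the pre-delete hook's service account (names illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: delete-kiali-cr
rules:
  - apiGroups: ["kiali.io"]
    resources: ["kialis"]
    verbs: ["get", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: delete-kiali-cr
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: delete-kiali-cr
subjects:
  - kind: ServiceAccount
    name: kiali-operator
    namespace: istio-system
```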



&lt;p&gt;In this post, we explored the challenges that arise when deploying the Kiali-operator with Helm and creating a Kiali instance, especially when uninstalling the operator, and walked through two solutions to the finalizer issue that ensure the operator and its associated resources are removed cleanly.&lt;/p&gt;

</description>
      <category>kiali</category>
      <category>istio</category>
      <category>helm</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>k3s Pull Through Image Cache</title>
      <dc:creator>Boriss V</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:06:00 +0000</pubDate>
      <link>https://dev.to/bnovickovs/k3s-pull-through-image-cache-49h5</link>
      <guid>https://dev.to/bnovickovs/k3s-pull-through-image-cache-49h5</guid>
      <description>&lt;p&gt;When running K3s locally, pulling images from container registries can take a significant amount of time. To address this, we set up local caching pass-through registries to store images and configure the local K3s cluster to use these proxies. A similar method can be employed in production environments, particularly in air-gapped setups. This approach can also be used to ensure that all necessary images are available in local registries. It also helps overcome issues with Docker Hub rate limits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.k3s.io/installation/private-registry" rel="noopener noreferrer"&gt;The K3s guide on private registries&lt;/a&gt; provides a useful overview, but it doesn't go into detail about Harbor, one of the most popular open-source registries. In this post, I will explain how to set it up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa11hscpqwnl1rkhjpde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa11hscpqwnl1rkhjpde.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniffvtl5d5f5lx9ql7uj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fniffvtl5d5f5lx9ql7uj.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the Harbor registry is configured as shown in the screenshots above (you can also use Terraform for this if needed), it's time to configure containerd for K3s. First, ensure that Harbor is serving HTTPS, with settings like these in harbor.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# https related config&lt;/span&gt;
https:
  &lt;span class="c"&gt;# https port for harbor, default is 443&lt;/span&gt;
  port: 443
  &lt;span class="c"&gt;# The path of cert and key files for nginx&lt;/span&gt;
  certificate: /etc/letsencrypt/live/.../fullchain.pem
  private_key: /etc/letsencrypt/live/.../privkey.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure containerd mirrors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;cat registries.yaml&lt;/span&gt;

&lt;span class="na"&gt;mirrors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;docker.io&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://your-registry-com:443/v2/proxy-docker.io&lt;/span&gt;
  &lt;span class="na"&gt;ghcr.io&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://your-registry-com:443/v2/proxy-ghcr.io&lt;/span&gt;
  &lt;span class="na"&gt;gcr.io&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://your-registry-com:443/v2/proxy-gcr.io&lt;/span&gt;
  &lt;span class="na"&gt;registry.k8s.io&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://your-registry-com:443/v2/proxy-registry.k8s.io&lt;/span&gt;
  &lt;span class="na"&gt;quay.io&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://your-registry-com:443/v2/proxy-quay.io&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This file must be placed in the K3s configuration directory (/etc/rancher/k3s/) before starting the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/rancher/k3s/

&lt;span class="nb"&gt;cp &lt;/span&gt;registries.yaml /etc/rancher/k3s/registries.yaml

curl &lt;span class="nt"&gt;-sfL&lt;/span&gt; https://get.k3s.io | &lt;span class="nv"&gt;INSTALL_K3S_EXEC&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'...'&lt;/span&gt; &lt;span class="nv"&gt;K3S_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;... sh -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By following the steps outlined in this guide, you'll ensure that your K3s cluster is efficiently pulling images from your local registry, reducing latency and increasing reliability. &lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.talos.dev/v1.9/talos-guides/configuration/pull-through-cache/" rel="noopener noreferrer"&gt;https://www.talos.dev/v1.9/talos-guides/configuration/pull-through-cache/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>k3s</category>
      <category>kubernetes</category>
      <category>containers</category>
      <category>harbor</category>
    </item>
  </channel>
</rss>
