<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Reza</title>
    <description>The latest articles on DEV Community by Reza (@frozenprocess).</description>
    <link>https://dev.to/frozenprocess</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3600249%2Fd38b58d3-1072-44a9-b1ed-a303273d1935.jpeg</url>
      <title>DEV Community: Reza</title>
      <link>https://dev.to/frozenprocess</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/frozenprocess"/>
    <language>en</language>
    <item>
      <title>[Boost]</title>
      <dc:creator>Reza</dc:creator>
      <pubDate>Tue, 06 Jan 2026 18:16:24 +0000</pubDate>
      <link>https://dev.to/frozenprocess/-471m</link>
      <guid>https://dev.to/frozenprocess/-471m</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/frozenprocess" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3600249%2Fd38b58d3-1072-44a9-b1ed-a303273d1935.jpeg" alt="frozenprocess"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/frozenprocess/ipvs-to-nftables-a-migration-guide-for-kubernetes-v135-24m5" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;IPVS to NFTables: A Migration Guide for Kubernetes v1.35&lt;/h2&gt;
      &lt;h3&gt;Reza ・ Jan 5&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#cloudnative&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubeadm&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#containers&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>kubeadm</category>
      <category>containers</category>
    </item>
    <item>
      <title>IPVS to NFTables: A Migration Guide for Kubernetes v1.35</title>
      <dc:creator>Reza</dc:creator>
      <pubDate>Mon, 05 Jan 2026 23:29:00 +0000</pubDate>
      <link>https://dev.to/frozenprocess/ipvs-to-nftables-a-migration-guide-for-kubernetes-v135-24m5</link>
      <guid>https://dev.to/frozenprocess/ipvs-to-nftables-a-migration-guide-for-kubernetes-v135-24m5</guid>
<description>&lt;p&gt;Project Calico has a unique design with a pluggable dataplane that gives users the freedom to choose the right networking backend for their environment. In fact, Calico supports a wide range of technologies, including eBPF, iptables, IPVS, Windows HNS, and VPP, ensuring the Kubernetes community is always equipped with the latest capabilities.&lt;/p&gt;

&lt;p&gt;In 2019, &lt;a href="https://www.tigera.io/blog/comparing-kube-proxy-modes-iptables-or-ipvs/" rel="noopener noreferrer"&gt;Tigera introduced&lt;/a&gt; support for the Linux IPVS mode into the Calico backends as an alternative to iptables. Its primary goal was to handle service creation more efficiently and offer better performance for large-scale clusters. However, the landscape has changed significantly since then, and with the introduction of Kubernetes v1.35, IPVS mode is deprecated in kube-proxy in favor of the modern Linux standard, NFTables.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Should You Migrate?
&lt;/h2&gt;

&lt;p&gt;To understand why this shift is happening, we need to look at the evolution of the Linux networking stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Iptables:&lt;/strong&gt; While highly reliable, iptables suffered from a global lock bottleneck and O(N) complexity: in large clusters, a simple rule update required reloading the entire ruleset, causing massive latency and high CPU usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Duct Tape (IPVS):&lt;/strong&gt; IPVS was adopted to solve these scaling issues. It offered O(1) matching performance using hashmaps, making it much faster for services. However, it required maintaining a completely separate kernel subsystem, creating significant technical debt and a lot of work to achieve feature parity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The Future (NFTables):&lt;/strong&gt; NFTables is the modern successor to both its predecessors. It combines the performance benefits of IPVS (fast, scalable packet classification) with the flexibility of iptables, all within a unified, modern kernel API.
Learn more about this shift &lt;a href="https://youtu.be/yOGHb2HjslY?si=a6KSgls7Xo0OsbgF" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
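&lt;p&gt;To make the performance argument concrete, here is an illustrative nftables fragment (hand-written for this post, not what kube-proxy actually emits): a single verdict-map lookup dispatches a packet in one step, where iptables needed one linear rule per service.&lt;/p&gt;

```nft
# One O(1) map lookup replaces N sequential iptables rules
table ip demo {
    map service_vips {
        type ipv4_addr . inet_service : verdict
    }
    chain prerouting {
        type nat hook prerouting priority dstnat;
        ip daddr . tcp dport vmap @service_vips
    }
}
```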

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;NFTables doesn’t have many requirements, and by now it is supported by most Linux distributions. Here is a short list of things you should know before attempting to migrate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Linux Kernel:&lt;/strong&gt; Your Linux kernel should be compiled with nftables support.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes:&lt;/strong&gt; v1.31 or later&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Calico:&lt;/strong&gt; v3.30 or later. This guide assumes Calico is already running in your cluster. If you’d like to learn how to install Calico in your environment, click &lt;a href="https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
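&lt;p&gt;To check the kernel prerequisite, look for &lt;code&gt;CONFIG_NF_TABLES&lt;/code&gt; in your kernel build config. The paths below are assumptions that cover the common cases; some distributions expose the config elsewhere.&lt;/p&gt;

```shell
# Look for nf_tables support in the running kernel's build config.
# /boot/config-$(uname -r) is common but not universal; /proc/config.gz is a fallback.
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
  grep -E '^CONFIG_NF_TABLES=' "$cfg" || echo "CONFIG_NF_TABLES not set in $cfg"
elif [ -r /proc/config.gz ]; then
  zcat /proc/config.gz | grep -E '^CONFIG_NF_TABLES=' || echo "CONFIG_NF_TABLES not set"
else
  echo "kernel config not found; check with your distribution's tooling"
fi
```

&lt;p&gt;A value of &lt;code&gt;y&lt;/code&gt; (built in) or &lt;code&gt;m&lt;/code&gt; (module) is what you want to see.&lt;/p&gt;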

&lt;p&gt;⚠️ It is recommended to perform the networking backend change during a maintenance window. ⚠️&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify The Current Mode
&lt;/h3&gt;

&lt;p&gt;To confirm whether your cluster is currently running in IPVS mode, check the kube-proxy logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system daemonset/kube-proxy | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; ipvs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;I0103 01:18:49.979100 1 server_linux.go:253] &lt;span class="s2"&gt;"Using ipvs Proxier"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In Kubernetes v1.35+, you will also see this deprecation log:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="s2"&gt;"The ipvs proxier is now deprecated and may be removed in a future release. Please use 'nftables' instead."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your environment is set to IPVS, Calico automatically switches to its IPVS mode and uses IPVS-based service handling for better performance.&lt;br&gt;
You can verify this with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; calico-system daemonset/calico-node | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; ipvs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;2026-01-03 03:09:52.996 &lt;span class="o"&gt;[&lt;/span&gt;INFO][71] felix/driver.go 85: Kube-proxy &lt;span class="k"&gt;in &lt;/span&gt;ipvs mode, enabling felix kube-proxy ipvs support.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Migrate Kube-Proxy to NFTables
&lt;/h2&gt;

&lt;p&gt;As shown in the previous log emitted by kube-proxy, the upstream Kubernetes recommendation is to switch from IPVS to nftables.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update the ConfigMap
&lt;/h3&gt;

&lt;p&gt;You need to update the &lt;code&gt;mode&lt;/code&gt; parameter in the kube-proxy ConfigMap.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl edit configmap &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system kube-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Locate the &lt;code&gt;mode&lt;/code&gt; configuration (usually found within the &lt;code&gt;config.conf&lt;/code&gt; data block) and change it from &lt;code&gt;ipvs&lt;/code&gt; to &lt;code&gt;nftables&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mode: nftables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
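&lt;p&gt;For reference, the relevant part of &lt;code&gt;config.conf&lt;/code&gt; is a &lt;code&gt;KubeProxyConfiguration&lt;/code&gt; object; after the edit it should look roughly like this (a trimmed sketch, the real block carries many more fields):&lt;/p&gt;

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"
```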



&lt;h2&gt;
  
  
  Restart Kube-Proxy
&lt;/h2&gt;

&lt;p&gt;Changes to the ConfigMap do not apply automatically. You must restart the DaemonSet to pick up the changes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout restart &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system daemonset/kube-proxy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verify Kube-Proxy Migration
&lt;/h2&gt;

&lt;p&gt;Once the pods restart, check the logs to confirm the new mode is active:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system daemonset/kube-proxy | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; nftables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
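&lt;p&gt;You can also confirm the switch on a node itself: in nftables mode, kube-proxy programs its rules into a dedicated table named &lt;code&gt;kube-proxy&lt;/code&gt;. This check assumes the &lt;code&gt;nft&lt;/code&gt; CLI and root access on the node; the fallback message is only there for hosts without &lt;code&gt;nft&lt;/code&gt;.&lt;/p&gt;

```shell
# Node-side check: kube-proxy's nftables backend keeps its rules in its own table.
out=$( (nft list tables 2>/dev/null | grep kube-proxy) || echo "nft unavailable or kube-proxy table absent" )
echo "$out"
```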



&lt;h2&gt;
  
  
  Switch Calico to NFTables
&lt;/h2&gt;

&lt;p&gt;After updating kube-proxy, you must instruct the Calico dataplane to switch to NFTables mode. This is done by patching the Tigera Operator's installation resource.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Patch the Installation
&lt;/h3&gt;

&lt;p&gt;Run the following command to update the Linux dataplane mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch installation default &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;merge &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'{"spec":{"calicoNetwork":{"linuxDataplane":"Nftables"}}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Verify Calico Migration
&lt;/h3&gt;

&lt;p&gt;The Tigera operator will initiate a rolling restart of all calico-node pods. Once complete, verify the change in the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; calico-system daemonset/calico-node | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; nftables
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;2026-01-03 01:25:07.803 &lt;span class="o"&gt;[&lt;/span&gt;INFO][837] felix/config_params.go 805: Parsed value &lt;span class="k"&gt;for &lt;/span&gt;NFTablesMode: Enabled &lt;span class="o"&gt;(&lt;/span&gt;from datastore &lt;span class="o"&gt;(&lt;/span&gt;global&lt;span class="o"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Switch to Calico eBPF (High Performance)
&lt;/h2&gt;

&lt;p&gt;If you are already performing a migration, consider skipping NFTables altogether and moving to the Calico eBPF dataplane, which bypasses kube-proxy entirely, offering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lower latency than both IPVS and NFTables.&lt;/li&gt;
&lt;li&gt;Source IP preservation.&lt;/li&gt;
&lt;li&gt;Direct Server Return (DSR) capabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Make sure to change your &lt;strong&gt;kube-proxy&lt;/strong&gt; mode to &lt;strong&gt;iptables&lt;/strong&gt; before switching to eBPF.&lt;/p&gt;

&lt;p&gt;Learn more about the Calico eBPF &lt;a href="https://docs.tigera.io/calico/latest/operations/ebpf/enabling-ebpf" rel="noopener noreferrer"&gt;dataplane here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By migrating away from IPVS, you eliminate the technical debt associated with a deprecated backend. Whether you choose the standard NFTables route or upgrade to the high-performance Calico eBPF dataplane, the result is a more stable, secure, and future-proof cluster ready for the next generation of Kubernetes networking.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudnative</category>
      <category>kubeadm</category>
      <category>containers</category>
    </item>
    <item>
      <title>Kubernetes Network Observability with Calico (Manifest based)</title>
      <dc:creator>Reza</dc:creator>
      <pubDate>Fri, 07 Nov 2025 04:25:20 +0000</pubDate>
      <link>https://dev.to/frozenprocess/kubernetes-network-observability-with-calico-manifest-based-164</link>
      <guid>https://dev.to/frozenprocess/kubernetes-network-observability-with-calico-manifest-based-164</guid>
      <description>&lt;p&gt;The purpose of this tutorial is to reveal the behind-the-scenes magic that happens when you use the Tigera operator to install Calico.&lt;/p&gt;

&lt;p&gt;Keep in mind that everything described here can be done using the tigera-operator and just two resources. In fact, with the operator installation method (which we recommend), you simply create these two resources, and the tigera-operator handles everything else automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apiVersion: operator.tigera.io/v1
kind: Goldmane
metadata:
  name: default
&lt;span class="nt"&gt;---&lt;/span&gt;
apiVersion: operator.tigera.io/v1
kind: Whisker
metadata:
  name: default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;While this tutorial can be read like your favorite cat sci-fi, if you wish to replicate the same test in your environment there are a few requirements that you need to have installed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;K3d&lt;/li&gt;
&lt;li&gt;Calico manifest based install v3.31.0&lt;/li&gt;
&lt;li&gt;OpenSSL (To issue and sign certificates)&lt;/li&gt;
&lt;li&gt;Internet&lt;/li&gt;
&lt;li&gt;Star and clone &lt;a href="https://github.com/frozenprocess/whisker-manifest" rel="noopener noreferrer"&gt;this repo&lt;/a&gt; ;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup your cluster
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; This guide was written and tested for Calico v3.31.0, and expects that you are running the same version. If your Calico version differs, you may have to adjust some of the commands and deployments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;OK, just like in The Matrix, you have chosen the red pill, and now I'm going to show you how deep the rabbit hole goes. For this tutorial we are going to use &lt;code&gt;k3d&lt;/code&gt; and &lt;code&gt;docker&lt;/code&gt; (hence being requirements). This will allow us to spin up a &lt;code&gt;k3s&lt;/code&gt; cluster super fast.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;k3d cluster create &lt;span class="se"&gt;\&lt;/span&gt;
  my-calico-manifest-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-s&lt;/span&gt; 1 &lt;span class="nt"&gt;-a&lt;/span&gt; 2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--k3s-arg&lt;/span&gt; &lt;span class="s1"&gt;'--flannel-backend=none@server:*'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--k3s-arg&lt;/span&gt; &lt;span class="s1"&gt;'--disable-network-policy@server:*'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--k3s-arg&lt;/span&gt; &lt;span class="s1"&gt;'--disable=traefik@server:*'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--k3s-arg&lt;/span&gt; &lt;span class="s1"&gt;'--cluster-cidr=192.168.0.0/16@server:*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install Calico and Typha in your cluster
&lt;/h2&gt;

&lt;p&gt;Now that our cluster is ready, it’s time to install Calico. This time, we’ll install both Calico and Typha using manifests instead of the operator.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.31.0/manifests/calico-typha.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Currently, you need at least one instance of Typha running to use Goldmane and Whisker.&lt;/p&gt;

&lt;p&gt;You can use the following command to verify if &lt;code&gt;calico-node&lt;/code&gt; pods are running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system ds/calico-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep in mind that a manifest-based install requires you—the administrator—to configure every part of Calico. While this approach offers a high degree of flexibility, it’s not without its challenges, such as YAML indentation mistakes or values that are deprecated or unsupported in the current version of Calico.&lt;/p&gt;

&lt;p&gt;This is why we recommend using the tigera-operator. It’s a dedicated operator (so good we’re saying it twice!) designed to configure and maintain your Calico installation—so you can focus on what matters most (ahem... vacations).&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating certificates
&lt;/h2&gt;

&lt;p&gt;By default, a manifest-based install does not secure Calico components. It is the administrator’s responsibility to generate a Certificate Authority, issue and sign the necessary certificates, and rotate them on the desired schedule. Keep in mind that all of these steps are automated when using an operator-based installation.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔐 Certificate Authority (CA)
&lt;/h3&gt;

&lt;p&gt;To secure Calico components, we need a Certificate Authority (CA) to issue and sign our certificates. While this could be a publicly trusted CA that issues certificates for general use, we can also create our own internal CA to generate and sign these certificates for our specific needs.&lt;/p&gt;

&lt;p&gt;Use the following command to establish a CA and generate its keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl genrsa &lt;span class="nt"&gt;-out&lt;/span&gt; certs/ca.key 2048

openssl req &lt;span class="nt"&gt;-new&lt;/span&gt; &lt;span class="nt"&gt;-x509&lt;/span&gt; &lt;span class="nt"&gt;-key&lt;/span&gt; certs/ca.key &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=tigera-ca-bundle/O=We love Calico/C=US'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-days&lt;/span&gt; 1024 &lt;span class="nt"&gt;-out&lt;/span&gt; certs/ca.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  📥 Generating a Certificate for calico-typha (Typha Server)
&lt;/h3&gt;

&lt;p&gt;Calico Typha is a middleware key-value caching system designed to sit between &lt;code&gt;calico-node&lt;/code&gt; requests and your Kubernetes &lt;code&gt;kube-apiserver&lt;/code&gt;. This setup allows Typha to query the API server once and then serve the cached responses to all &lt;code&gt;calico-node&lt;/code&gt; instances. If any changes occur within your Kubernetes cluster, Typha will trigger a new query to update its cache accordingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Remember the &lt;code&gt;CN=&lt;/code&gt; value used here and in the next step—these are referenced later in the Typha and Calico-node tuning section as part of the certificate verification process.&lt;/p&gt;

&lt;p&gt;Use the following command to generate a certificate request for Typha:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-keyout&lt;/span&gt; certs/typha-server.key &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=typha-server/O=We love Calico/C=US'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-out&lt;/span&gt; certs/typha-server.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to sign and generate a certificate for Typha:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-in&lt;/span&gt; certs/typha-server.csr &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-CA&lt;/span&gt; certs/ca.crt &lt;span class="nt"&gt;-CAkey&lt;/span&gt; certs/ca.key &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-out&lt;/span&gt; certs/typha-server.crt &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-days&lt;/span&gt; 1024
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  📥 Generating a Certificate for Calico-Node (Typha Client)
&lt;/h3&gt;

&lt;p&gt;In order to generate a certificate for the calico-node pod, you first need to create a certificate request. This is necessary because your CA must issue and sign all certificates.&lt;/p&gt;

&lt;p&gt;Use the following command to generate a certificate request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-keyout&lt;/span&gt; certs/typha-client.key &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=typha-client/O=We love Calico/C=US'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-out&lt;/span&gt; certs/typha-client.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to sign and generate the client certificates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-in&lt;/span&gt; certs/typha-client.csr &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-CA&lt;/span&gt; certs/ca.crt &lt;span class="nt"&gt;-CAkey&lt;/span&gt; certs/ca.key &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-out&lt;/span&gt; certs/typha-client.crt &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-days&lt;/span&gt; 1024
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  📥 Generating Certificates for Goldmane and Whisker
&lt;/h3&gt;

&lt;p&gt;Goldmane and Whisker certificates are a little different. Since these two components handle sensitive information—specifically Network Flow Logs—we need to assign specific roles to their certificates. Additionally, because these certificates will be used to emit flow logs from other workloads (such as calico-node in this example), we must include Subject Alternative Names (SANs) in the certificates.&lt;/p&gt;

&lt;p&gt;Use the following command to generate a certificate &lt;code&gt;request&lt;/code&gt; for Goldmane:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-keyout&lt;/span&gt; certs/goldmane.key &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=goldmane/O=We love Calico/C=US'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-addext&lt;/span&gt; &lt;span class="s2"&gt;"subjectAltName=DNS:goldmane,DNS:goldmane.kube-system,DNS:goldmane.kube-system.svc,DNS:goldmane.kube-system.svc.cluster.local"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-out&lt;/span&gt; certs/goldmane.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to generate a certificate &lt;code&gt;request&lt;/code&gt; for Whisker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl req &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-newkey&lt;/span&gt; rsa:2048 &lt;span class="nt"&gt;-nodes&lt;/span&gt; &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-keyout&lt;/span&gt; certs/whisker-backend.key &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-subj&lt;/span&gt; &lt;span class="s1"&gt;'/CN=whisker-backend/O=We love Calico/C=US'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-addext&lt;/span&gt; &lt;span class="s2"&gt;"subjectAltName=DNS:whisker-backend,DNS:whisker-backend.kube-system,DNS:whisker-backend.kube-system.svc,DNS:whisker-backend.kube-system.svc.cluster.local"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-out&lt;/span&gt; certs/whisker-backend.csr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Finishing the Certificate Requests and Signing Certificates with our CA
&lt;/h3&gt;

&lt;p&gt;Great! Now that we have two certificate requests, it's time to sign them with our CA. This allows any workload using these certificates to verify their authenticity.&lt;/p&gt;

&lt;p&gt;Use the following command to generate and sign your Goldmane certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-in&lt;/span&gt; certs/goldmane.csr &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-CA&lt;/span&gt; certs/ca.crt &lt;span class="nt"&gt;-CAkey&lt;/span&gt; certs/ca.key &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-out&lt;/span&gt; certs/goldmane.crt &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-days&lt;/span&gt; 1024 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-extensions&lt;/span&gt; v3_req &lt;span class="nt"&gt;-extfile&lt;/span&gt; certs/req-goldmane.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do the same for Whisker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openssl x509 &lt;span class="nt"&gt;-req&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-in&lt;/span&gt; certs/whisker-backend.csr &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-CA&lt;/span&gt; certs/ca.crt &lt;span class="nt"&gt;-CAkey&lt;/span&gt; certs/ca.key &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-out&lt;/span&gt; certs/whisker-backend.crt &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-days&lt;/span&gt; 1024 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-sha256&lt;/span&gt; &lt;span class="nt"&gt;-extensions&lt;/span&gt; v3_req &lt;span class="nt"&gt;-extfile&lt;/span&gt; certs/req-whisker.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
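&lt;p&gt;With both certificates signed, it’s worth sanity-checking the flow with &lt;code&gt;openssl verify&lt;/code&gt;. The snippet below is a self-contained round trip in a scratch directory using throwaway names (&lt;code&gt;demo-ca&lt;/code&gt;, &lt;code&gt;demo-leaf&lt;/code&gt;), not the files from this guide.&lt;/p&gt;

```shell
# Throwaway CA -> leaf round trip mirroring the signing flow above.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/ca.key" 2048
openssl req -new -x509 -key "$tmp/ca.key" -subj '/CN=demo-ca' -days 1 -out "$tmp/ca.crt"
openssl req -newkey rsa:2048 -nodes -keyout "$tmp/leaf.key" \
  -subj '/CN=demo-leaf' -out "$tmp/leaf.csr" 2>/dev/null
openssl x509 -req -in "$tmp/leaf.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/leaf.crt" 2>/dev/null
openssl verify -CAfile "$tmp/ca.crt" "$tmp/leaf.crt"
```

&lt;p&gt;The same &lt;code&gt;openssl verify -CAfile certs/ca.crt certs/goldmane.crt&lt;/code&gt; invocation works against the real certificates generated above.&lt;/p&gt;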



&lt;h3&gt;
  
  
  Importing Certificates into the cluster
&lt;/h3&gt;

&lt;p&gt;Now that all our certificates are signed and ready, let’s import them into our Kubernetes cluster as ConfigMaps and Secrets. This is a best practice because it allows us to easily update certificates by replacing these ConfigMaps and Secrets.&lt;/p&gt;

&lt;p&gt;Use the following command to import all the Certificates into your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create configmap goldmane-ca-bundle \
  --namespace=kube-system \
  --from-file=ca-bundle.crt=./certs/ca.crt \
  --from-file=tigera-ca-bundle.crt=./certs/ca.crt \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl create secret tls goldmane-key-pair \
  --cert=certs/goldmane.crt \
  --key=certs/goldmane.key \
  --namespace=kube-system \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl create secret tls typha-client-key-pair \
  --cert=certs/typha-client.crt \
  --key=certs/typha-client.key \
  --namespace=kube-system \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl create secret tls typha-server-key-pair \
  --cert=certs/typha-server.crt \
  --key=certs/typha-server.key \
  --namespace=kube-system \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl create secret tls whisker-backend-key-pair \
  --cert=certs/whisker-backend.crt \
  --key=certs/whisker-backend.key \
  --namespace=kube-system \
  --dry-run=client -o yaml | kubectl apply -f -
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’ve come here from a blog, now is the time to go back. The certificate generation step ends here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tuning Typha to Use Certificates
&lt;/h2&gt;

&lt;p&gt;Our next step is to assign certificates to Typha. To do this, you need to modify the calico-typha manifest by adding volumes for the previously generated Secrets and ConfigMap, volumeMounts to expose them inside the container, and several environment variables so Typha knows it’s certificate time.&lt;/p&gt;

&lt;p&gt;Use the following command to modify your typha deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch deployment calico-typha &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;strategic &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'
{
  "spec": {
    "template": {
      "spec": {
        "volumes": [
          {
            "name": "config",
            "configMap": {
              "name": "goldmane",
              "defaultMode": 420
            }
          },
          {
            "name": "goldmane-ca-bundle",
            "configMap": {
              "name": "goldmane-ca-bundle",
              "defaultMode": 420
            }
          },
          {
            "name": "typha-server-key-pair",
            "secret": {
              "secretName": "typha-server-key-pair",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "calico-typha",
            "env": [
              {
                "name": "TYPHA_CAFILE",
                "value": "/etc/pki/tls/certs/tigera-ca-bundle.crt"
              },
              {
                "name": "TYPHA_SERVERCERTFILE",
                "value": "/typha-server-key-pair/tls.crt"
              },
              {
                "name": "TYPHA_SERVERKEYFILE",
                "value": "/typha-server-key-pair/tls.key"
              },
              {
                "name": "TYPHA_CLIENTCN",
                "value": "typha-client"
              }
            ],
            "volumeMounts": [
              {
                "name": "goldmane-ca-bundle",
                "mountPath": "/etc/pki/tls/certs",
                "readOnly": true
              },
              {
                "name": "typha-server-key-pair",
                "mountPath": "/typha-server-key-pair",
                "readOnly": true
              }
            ]
          }
        ]
      }
    }
  }
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: We also share the CA certificate (but not the key). Since our CA is not a public or widely trusted CA, we use the &lt;code&gt;TYPHA_CAFILE&lt;/code&gt; environment variable to inject the CA certificate into the Typha process.&lt;/p&gt;
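&lt;p&gt;To see concretely why the CA must be shared, here is a self-contained sketch using throwaway certificates in a temp directory (hypothetical names, not the cluster certificates generated earlier). Verification of a server certificate signed by a private CA succeeds only when that CA is supplied explicitly, which is exactly what &lt;code&gt;TYPHA_CAFILE&lt;/code&gt; arranges for the Typha process:&lt;/p&gt;

```shell
# Throwaway demo of private-CA trust (names are hypothetical).
tmp=$(mktemp -d)
# Create a self-signed CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
# Create a server key and CSR, then sign the CSR with the CA.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=typha-server" -keyout "$tmp/server.key" -out "$tmp/server.csr" 2>/dev/null
openssl x509 -req -in "$tmp/server.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
  -CAcreateserial -days 1 -out "$tmp/server.crt" 2>/dev/null
# Verification passes only because we hand openssl the private CA bundle;
# without -CAfile, the chain is untrusted.
openssl verify -CAfile "$tmp/ca.crt" "$tmp/server.crt"
```

&lt;p&gt;The last command prints the certificate path followed by &lt;code&gt;: OK&lt;/code&gt; when the chain checks out.&lt;/p&gt;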

&lt;h2&gt;
  
  
  Tuning calico-node to Use Certificates
&lt;/h2&gt;

&lt;p&gt;Now we need to do the same for the &lt;code&gt;calico-node&lt;/code&gt; daemonset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Since &lt;code&gt;calico-node&lt;/code&gt; acts as a client, we set the &lt;code&gt;FELIX_TYPHACN&lt;/code&gt; environment variable to &lt;code&gt;typha-server&lt;/code&gt; so it matches the certificate issued earlier to the server. This provides &lt;code&gt;calico-node&lt;/code&gt; with an additional verification check.&lt;/p&gt;
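&lt;p&gt;As a hedged illustration of what that check compares against, the subject CN can be read straight out of a certificate with openssl. This uses a throwaway self-signed certificate, not the real typha-server one; in the cluster, Felix performs the equivalent comparison against the CN in Typha’s serving certificate:&lt;/p&gt;

```shell
# Hypothetical throwaway certificate with the same CN as the server cert.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=typha-server" -keyout "$tmp/tls.key" -out "$tmp/tls.crt" 2>/dev/null
# Print the subject; FELIX_TYPHACN must match the CN shown here.
openssl x509 -in "$tmp/tls.crt" -noout -subject
```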

&lt;p&gt;Use the following command to configure the &lt;code&gt;calico-node&lt;/code&gt; daemonset to use certificates for communication with Typha:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch ds calico-node &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;strategic &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'
{
  "spec": {
    "template": {
      "spec": {
        "volumes": [
          {
            "name": "config",
            "configMap": {
              "name": "goldmane",
              "defaultMode": 420
            }
          },
          {
            "name": "goldmane-ca-bundle",
            "configMap": {
              "name": "goldmane-ca-bundle",
              "defaultMode": 420
            }
          },
          {
            "name": "typha-client-key-pair",
            "secret": {
              "secretName": "typha-client-key-pair",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "calico-node",
            "env": [
              {
                "name": "FELIX_TYPHACAFILE",
                "value": "/etc/pki/tls/certs/tigera-ca-bundle.crt"
              },
              {
                "name": "FELIX_TYPHACERTFILE",
                "value": "/typha-client-key-pair/tls.crt"
              },
              {
                "name": "FELIX_TYPHAKEYFILE",
                "value": "/typha-client-key-pair/tls.key"
              },
              {
                "name": "FELIX_TYPHAK8SNAMESPACE",
                "value": "kube-system"
              },
              {
                "name": "FELIX_TYPHACN",
                "value": "typha-server"
              }
            ],
            "volumeMounts": [
              {
                "name": "goldmane-ca-bundle",
                "mountPath": "/etc/pki/tls/certs",
                "readOnly": true
              },
              {
                "name": "typha-client-key-pair",
                "mountPath": "/typha-client-key-pair",
                "readOnly": true
              }
            ]
          }
        ]
      }
    }
  }
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before proceeding to the next step, use the following command to ensure your changes have been rolled out to calico-node and Typha:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system ds/calico-node deployment/calico-typha
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying Goldmane
&lt;/h2&gt;

&lt;p&gt;Now that our Calico and Typha workloads are running and secured, let's start deploying Goldmane and Whisker.&lt;/p&gt;

&lt;p&gt;First, we need a ServiceAccount. Service accounts provide workloads with an identity, allowing administrators to manage their permissions and access.&lt;/p&gt;

&lt;p&gt;Use the following command to generate a service account for Goldmane:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; -&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: v1
kind: ServiceAccount
metadata:
  name: goldmane
  namespace: kube-system
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to create a Service that points to the Goldmane ingestion port. This Service will be used by any emitter (in this case, &lt;code&gt;calico-node&lt;/code&gt;) to send flow data to the Goldmane server.&lt;/p&gt;

&lt;p&gt;Use the following command to generate the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create -f -&amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Service
metadata:
  name: goldmane
  namespace: kube-system
spec:
  ports:
  - port: 7443
    protocol: TCP
    targetPort: 7443
  selector:
    k8s-app: goldmane
  sessionAffinity: None
  type: ClusterIP
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Goldmane binary looks for a configuration file at startup, so we need to provide that file when launching Goldmane.&lt;/p&gt;

&lt;p&gt;Use the following command to generate the config file as a ConfigMap.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; -&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: v1
data:
  config.json: '{"emitFlows":false}'
kind: ConfigMap
metadata:
  name: goldmane
  namespace: kube-system
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
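&lt;p&gt;Since the config file is embedded as a raw JSON string, a quick local syntax check before creating the ConfigMap can save you a crash-looping pod. A minimal sketch using Python’s stdlib &lt;code&gt;json.tool&lt;/code&gt;:&lt;/p&gt;

```shell
# Validate the Goldmane config JSON locally; a parse error exits non-zero.
echo '{"emitFlows":false}' | python3 -m json.tool
```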



&lt;h3&gt;
  
  
  Tuning the Goldmane Deployment
&lt;/h3&gt;

&lt;p&gt;The Goldmane binary looks for specific indicators to adjust its behavior. Since certificates have a finite lifetime, we use environment variables to specify which certificates Goldmane should use.&lt;/p&gt;

&lt;p&gt;Examine the following block; it illustrates the changes required in the Goldmane deployment that you will apply later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;        - name: SERVER_CERT_PATH
          value: /goldmane-key-pair/tls.crt
        - name: SERVER_KEY_PATH
          value: /goldmane-key-pair/tls.key
        - name: CA_CERT_PATH
          value: /etc/pki/tls/certs/tigera-ca-bundle.crt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to the previous steps, we need to define volumes on the workload so the ConfigMaps and Secret can be accessed from within it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;      volumes:
      - configMap:
          defaultMode: 420
          name: goldmane
        name: config
      - configMap:
          defaultMode: 420
          name: goldmane-ca-bundle
        name: goldmane-ca-bundle
      - name: goldmane-key-pair
        secret:
          defaultMode: 420
          secretName: goldmane-key-pair
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we need to mount these volumes at their respective paths inside the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;        volumeMounts:
        - mountPath: /config
          name: config
          readOnly: &lt;span class="nb"&gt;true&lt;/span&gt;
        - mountPath: /etc/pki/tls/certs
          name: goldmane-ca-bundle
          readOnly: &lt;span class="nb"&gt;true&lt;/span&gt;
        - mountPath: /goldmane-key-pair
          name: goldmane-key-pair
          readOnly: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The following Gist file contains all the changes we reviewed above.&lt;/p&gt;

&lt;p&gt;Use the following command to deploy Goldmane:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://gist.githubusercontent.com/frozenprocess/5555ec3266133e53510b4bda59f38e42/raw/7a7203c6ff7bb8b4cde29b94a6653bc016258b94/goldmane.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying Whisker UI
&lt;/h2&gt;

&lt;p&gt;Now that Goldmane is up and running, it’s time to set up the Whisker UI.&lt;/p&gt;

&lt;p&gt;First, we need to create a ServiceAccount. Service accounts provide workloads with an identity, allowing administrators to control and manage their permissions.&lt;/p&gt;

&lt;p&gt;Use the following command to create a ServiceAccount for Whisker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; -&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: v1
kind: ServiceAccount
metadata:
  name: whisker
  namespace: kube-system
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to create a Service that points to the Whisker UI port. This Service will be used by Whisker users (administrators) to view flow logs and policies within Whisker. Keep in mind, we recommend using a &lt;code&gt;ClusterIP&lt;/code&gt; service since flow logs contain sensitive information.&lt;/p&gt;

&lt;p&gt;Use the following command to generate the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; -&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
apiVersion: v1
kind: Service
metadata:
  name: whisker
  namespace: kube-system
spec:
  ports:
  - port: 8081
    protocol: TCP
    targetPort: 8081
  selector:
    k8s-app: whisker
  sessionAffinity: None
  type: ClusterIP
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Tuning the Whisker Deployment
&lt;/h3&gt;

&lt;p&gt;Similar to the Goldmane binary, Whisker looks for specific indicators to adjust its behavior. Since certificates have a finite lifetime, we use environment variables to specify which certificates Whisker should use. Additionally, Whisker needs a URL to locate Goldmane, which is provided by setting the &lt;code&gt;GOLDMANE_HOST&lt;/code&gt; environment variable in the deployment.&lt;/p&gt;

&lt;p&gt;Examine the following block; it illustrates the changes required in the Whisker deployment that you will apply later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;      - &lt;span class="nb"&gt;env&lt;/span&gt;:
        - name: LOG_LEVEL
          value: INFO
        - name: PORT
          value: &lt;span class="s2"&gt;"3002"&lt;/span&gt;
        - name: GOLDMANE_HOST
          value: goldmane.kube-system.svc.cluster.local:7443
        - name: TLS_CERT_PATH
          value: /whisker-backend-key-pair/tls.crt
        - name: TLS_KEY_PATH
          value: /whisker-backend-key-pair/tls.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to the previous steps, we need to add volumeMounts so the certificates can be accessed from within the container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;        volumeMounts:
        - mountPath: /whisker-backend-key-pair
          name: whisker-backend-key-pair
          readOnly: &lt;span class="nb"&gt;true&lt;/span&gt;
        - mountPath: /etc/pki/tls/certs
          name: goldmane-ca-bundle
          readOnly: &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we need to define the volumes that back these mounts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;      volumes:
      - name: whisker-backend-key-pair
        secret:
          defaultMode: 420
          secretName: whisker-backend-key-pair
      - configMap:
          defaultMode: 420
          name: goldmane-ca-bundle
        name: goldmane-ca-bundle
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The following Gist file contains all the changes we reviewed above.&lt;/p&gt;

&lt;p&gt;Use the following command to deploy Whisker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; https://gist.githubusercontent.com/frozenprocess/5555ec3266133e53510b4bda59f38e42/raw/7a7203c6ff7bb8b4cde29b94a6653bc016258b94/whisker.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can verify the Goldmane and Whisker deployments in a manifest-based install by using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system deployment/goldmane
kubectl rollout status &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system deployment/whisker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Accessing Whisker
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The following command must be kept running if you want to access the Whisker UI through a ClusterIP service.&lt;/p&gt;

&lt;p&gt;Open a new terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system deployment/whisker 8081:8081
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, open a browser and navigate to &lt;code&gt;localhost:8081&lt;/code&gt;. You should see the Whisker UI. However, no flows will appear yet, because flow generation happens in Felix (the "brain" of Calico), and we haven’t yet configured Calico to enable it.&lt;/p&gt;

&lt;p&gt;It’s time to tweak the environment variables again!&lt;/p&gt;

&lt;h2&gt;
  
  
  Instructing Felix to Generate Flow Logs and Ship Them to Goldmane
&lt;/h2&gt;

&lt;p&gt;Now that all the certificates are in place, we just need to set two more environment variables in the &lt;code&gt;calico-node&lt;/code&gt; daemonset to instruct Felix to enable flow log generation and to specify where to send the logs: our Goldmane server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;        - name: FELIX_FLOWLOGSGOLDMANESERVER
          value: goldmane.kube-system.svc:7443
        - name: FELIX_FLOWLOGSFLUSHINTERVAL
          value: &lt;span class="s2"&gt;"15"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Use the following command to enable Flow Log generation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch ds calico-node &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;strategic &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'
{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "calico-node",
            "env": [
              {
                "name": "FELIX_FLOWLOGSGOLDMANESERVER",
                "value": "goldmane.kube-system.svc:7443"
              },
              {
                "name": "FELIX_FLOWLOGSFLUSHINTERVAL",
                "value": "15"
              }
            ]
          }
        ]
      }
    }
  }
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After this change, it will take a moment for all &lt;code&gt;calico-node&lt;/code&gt; pods in your cluster to restart. However, if you check the Whisker UI, you might not see any flow logs appear immediately.&lt;/p&gt;

&lt;p&gt;Ah, it's a puzzle!&lt;/p&gt;

&lt;h3&gt;
  
  
  It is always DNS
&lt;/h3&gt;

&lt;p&gt;Up to this point, we have configured everything required for Goldmane and Whisker to run, and in an ideal scenario, things should work as expected. However, if you check the &lt;code&gt;calico-node&lt;/code&gt; logs, you may see a warning similar to the following:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can use the following command to check for these errors in your environment:&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-fn&lt;/span&gt; kube-system ds/calico-node
&lt;/code&gt;&lt;/pre&gt;



&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;2025-07-17 14:21:38.502 &lt;span class="o"&gt;[&lt;/span&gt;WARNING][75] felix/client.go 175: Failed to connect to flow server &lt;span class="nv"&gt;error&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rpc error: code &lt;span class="o"&gt;=&lt;/span&gt; Unavailable desc &lt;span class="o"&gt;=&lt;/span&gt; name resolver error: produced zero addresses &lt;span class="nv"&gt;target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"dns:///goldmane.kube-system.svc:7443"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ah, it seems there’s an issue. Back to Kubernetes 101: Pods can be connected either to the host network, meaning they communicate using the node’s IP address, or to a pod network namespace. This behavior is controlled by the &lt;code&gt;hostNetwork: true&lt;/code&gt; field in your DaemonSet, Deployment, or Pod specification.&lt;/p&gt;

&lt;p&gt;You might be wondering, what does this have to do with our DNS issue? Well, everything. From the perspective of &lt;code&gt;calico-node&lt;/code&gt;, it tries to resolve the internal DNS record &lt;code&gt;goldmane.kube-system.svc&lt;/code&gt;. However, this query is sent to the host’s DNS server and forwarders, which don’t know anything about Kubernetes records. To fix this, we need to change the &lt;code&gt;dnsPolicy&lt;/code&gt; of our daemonset to &lt;code&gt;ClusterFirstWithHostNet&lt;/code&gt;. This setting allows DNS queries to be routed through the cluster DNS first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl patch ds calico-node &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;strategic &lt;span class="nt"&gt;-p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
{
  &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;spec&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: {
    &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;template&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: {
      &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;spec&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: {
        &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;dnsPolicy&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;: &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;ClusterFirstWithHostNet&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;
      }
    }
  }
}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should be able to see some flows in your Whisker UI shortly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86j0hnboi3rcfwgqm2ha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86j0hnboi3rcfwgqm2ha.png" alt="Calico Whisker UI" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, you are running Calico Whisker in a manifest-based installation!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>projectcalico</category>
      <category>devops</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
