<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Achyuta Das</title>
    <description>The latest articles on DEV Community by Achyuta Das (@achu1612).</description>
    <link>https://dev.to/achu1612</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F611956%2F19e33fc2-6b4f-4684-ab35-fa5850c7e2fb.jpg</url>
      <title>DEV Community: Achyuta Das</title>
      <link>https://dev.to/achu1612</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/achu1612"/>
    <language>en</language>
    <item>
      <title>HA K8s cluster using kube-vip</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Fri, 09 Jan 2026 15:18:20 +0000</pubDate>
      <link>https://dev.to/achu1612/ha-k8s-cluster-using-kube-vip-48jn</link>
      <guid>https://dev.to/achu1612/ha-k8s-cluster-using-kube-vip-48jn</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;A stacked HA cluster is a topology where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by kubeadm that run control plane components.&lt;/p&gt;

&lt;p&gt;Each control plane node runs an instance of the &lt;code&gt;kube-apiserver&lt;/code&gt;, &lt;code&gt;kube-scheduler&lt;/code&gt;, and &lt;code&gt;kube-controller-manager&lt;/code&gt;. The &lt;code&gt;kube-apiserver&lt;/code&gt; is exposed to worker nodes using a load balancer.&lt;/p&gt;

&lt;p&gt;Each control plane node creates a local etcd member and this etcd member communicates only with the &lt;code&gt;kube-apiserver&lt;/code&gt; of this node. The same applies to the local &lt;code&gt;kube-controller-manager&lt;/code&gt; and &lt;code&gt;kube-scheduler&lt;/code&gt; instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18tgq55p7eu1roi993ru.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18tgq55p7eu1roi993ru.png" alt=" " width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This topology couples the control planes and etcd members on the same nodes. It is simpler to set up than a cluster with external etcd nodes, and simpler to manage for replication.&lt;/p&gt;

&lt;p&gt;Here's what happens in a 3-node stacked cluster:&lt;br&gt;
Each control plane node runs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;etcd member&lt;/li&gt;
&lt;li&gt;kube-apiserver, scheduler, controller-manager&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 etcd members → quorum = 2&lt;/li&gt;
&lt;li&gt;3 API servers → load balanced (can handle 1 down)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If one node fails: You still have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 etcd members → quorum maintained&lt;/li&gt;
&lt;li&gt;2 control plane instances → still available&lt;/li&gt;
&lt;/ul&gt;
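&lt;p&gt;The quorum numbers above follow from etcd's majority rule; a minimal sketch of the arithmetic (plain shell, nothing cluster-specific):&lt;/p&gt;

```shell
# etcd commits a write only when a majority (quorum) of members acknowledge it:
# quorum(n) = floor(n/2) + 1, so failures tolerated = n - quorum(n)
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 1 3 5; do
  q=$(quorum "$n")
  echo "members=$n quorum=$q failures_tolerated=$(( n - q ))"
done
```

&lt;p&gt;Note that an even member count buys no extra fault tolerance: quorum(4) = 3, which tolerates one failure, same as 3 members.&lt;/p&gt;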

&lt;p&gt;This is the default topology deployed by kubeadm. A local etcd member is created automatically on control plane nodes when using &lt;code&gt;kubeadm init&lt;/code&gt; and &lt;code&gt;kubeadm join --control-plane&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Assumptions: You have bootstrapped a cluster with &lt;code&gt;kubeadm&lt;/code&gt; before, as this document does not cover every step in detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes Static Pods
&lt;/h2&gt;

&lt;p&gt;Static Pods are Kubernetes Pods that are run by the &lt;code&gt;kubelet&lt;/code&gt; on a single node and are not managed by the Kubernetes cluster itself. This means that whilst the Pod can appear within Kubernetes, it can't make use of a variety of Kubernetes functionality (such as the Kubernetes token or ConfigMap resources). The static Pod approach is required for &lt;code&gt;kubeadm&lt;/code&gt; primarily because of the sequence of actions &lt;code&gt;kubeadm&lt;/code&gt; performs. Ideally, we want &lt;code&gt;kube-vip&lt;/code&gt; to be part of the Kubernetes cluster, but for various bits of functionality we also need &lt;code&gt;kube-vip&lt;/code&gt; to provide an HA virtual IP as part of the installation.&lt;/p&gt;
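&lt;p&gt;For reference, the kubelet discovers static Pods by watching a directory configured via &lt;code&gt;staticPodPath&lt;/code&gt;; this is why dropping a manifest into &lt;code&gt;/etc/kubernetes/manifests&lt;/code&gt; is enough on a kubeadm-provisioned node. A fragment of the kubelet config (kubeadm defaults):&lt;/p&gt;

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# the kubelet runs every manifest found here as a static Pod and
# publishes a read-only mirror Pod for it in the API server
staticPodPath: /etc/kubernetes/manifests
```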

&lt;h2&gt;
  
  
  Setting up the infrastructure
&lt;/h2&gt;

&lt;p&gt;Prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;containerd is installed on the node.&lt;/li&gt;
&lt;li&gt;kubeadm, kubelet, and kubectl are installed on the node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To set up &lt;code&gt;kube-vip&lt;/code&gt; for Kubernetes High Availability (HA) with 3 master nodes and a Virtual IP (VIP), follow this structured approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Masters: &lt;code&gt;10.238.40.162&lt;/code&gt;, &lt;code&gt;10.238.40.163&lt;/code&gt;, &lt;code&gt;10.238.40.164&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;VIP: &lt;code&gt;10.238.40.166&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this validation, I have used the static Pod method of setting up &lt;code&gt;kube-vip&lt;/code&gt; on the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cluster bootstrap
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Generate the static Pod manifest
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VIP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.238.40.166
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;INTERFACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;enp19s0

&lt;span class="c"&gt;# at the time of writing this doc the latest version if v0.9.2&lt;/span&gt;
&lt;span class="nv"&gt;KVVERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-sL&lt;/span&gt; https://api.github.com/repos/kube-vip/kube-vip/releases | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s2"&gt;".[0].name"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# we will be using kube-vip container to generate the manifest&lt;/span&gt;
&lt;span class="nb"&gt;alias &lt;/span&gt;kube-vip&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ctr image pull ghcr.io/kube-vip/kube-vip:&lt;/span&gt;&lt;span class="nv"&gt;$KVVERSION&lt;/span&gt;&lt;span class="s2"&gt;; ctr run --rm --net-host ghcr.io/kube-vip/kube-vip:&lt;/span&gt;&lt;span class="nv"&gt;$KVVERSION&lt;/span&gt;&lt;span class="s2"&gt; vip /kube-vip"&lt;/span&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/kubernetes/manifests

kube-vip manifest pod &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--interface&lt;/span&gt; &lt;span class="nv"&gt;$INTERFACE&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--address&lt;/span&gt; &lt;span class="nv"&gt;$VIP&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--controlplane&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--services&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--arp&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--leaderElection&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; /etc/kubernetes/manifests/kube-vip.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Issues while acquiring a lease for leader election
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
E0710 19:09:28.208775       1 leaderelection.go:332] error retrieving resource lock kube-system/plndr-cp-lock: leases.coordination.k8s.io "plndr-cp-lock" is forbidden: User "kubernetes-admin" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is an open issue where &lt;code&gt;kube-vip&lt;/code&gt; needs &lt;code&gt;super-admin.conf&lt;/code&gt; access to boot up. This has been the case from &lt;code&gt;kubeadm v1.29.x&lt;/code&gt; onwards.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubeadm init&lt;/code&gt; does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;writes an &lt;code&gt;admin.conf&lt;/code&gt; that has no binding to &lt;code&gt;cluster-admin&lt;/code&gt; yet&lt;/li&gt;
&lt;li&gt;writes a &lt;code&gt;super-admin.conf&lt;/code&gt; that has &lt;code&gt;system:masters&lt;/code&gt; (super user) privileges&lt;/li&gt;
&lt;li&gt;creates the &lt;code&gt;cluster-admin&lt;/code&gt; binding using the &lt;code&gt;super-admin.conf&lt;/code&gt; credentials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If kube-vip needs permissions during bootstrap, it can either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;wait for &lt;code&gt;admin.conf&lt;/code&gt; to receive its permissions (not possible AFAIK, given the bootstrap sequence), or&lt;/li&gt;
&lt;li&gt;&lt;p&gt;use &lt;code&gt;super-admin.conf&lt;/code&gt; during bootstrap and then move to &lt;code&gt;admin.conf&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the latter, modify the static Pod manifest to use the &lt;code&gt;super-admin.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s#path: /etc/kubernetes/admin.conf#path: /etc/kubernetes/super-admin.conf#'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
          /etc/kubernetes/manifests/kube-vip.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Initialize the first master node, using the VIP as the cluster endpoint when running &lt;code&gt;kubeadm init&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;kubeadm init  &lt;span class="nt"&gt;--control-plane-endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"10.238.40.166:6443"&lt;/span&gt;  &lt;span class="nt"&gt;--upload-certs&lt;/span&gt;  &lt;span class="nt"&gt;--apiserver-cert-extra-sans&lt;/span&gt; &lt;span class="s2"&gt;"10.238.40.166,10.238.40.164,10.238.40.163,10.238.40.162,10.96.0.1,127.0.0.1,0.0.0.0"&lt;/span&gt;  &lt;span class="nt"&gt;--apiserver-advertise-address&lt;/span&gt; 10.238.40.162 &lt;span class="nt"&gt;-v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Modify the static Pod manifest to use the &lt;code&gt;admin.conf&lt;/code&gt;. Once the first node is initialized, update the static Pod manifest, which will restart the &lt;code&gt;kube-vip&lt;/code&gt; Pod. Access to the cluster will be lost for a couple of minutes.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s1"&gt;'s#path: /etc/kubernetes/super-admin.conf#path: /etc/kubernetes/admin.conf#'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
          /etc/kubernetes/manifests/kube-vip.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Copy the static Pod manifest to the other master nodes before adding them to the control plane
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp /etc/kubernetes/manifests/kube-vip.yaml root@10.238.40.163:/etc/kubernetes/manifests
scp /etc/kubernetes/manifests/kube-vip.yaml root@10.238.40.164:/etc/kubernetes/manifests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the control plane node join command (output of the kubeadm init) on the other master nodes.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;10.238.40.166:8443 &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--control-plane&lt;/span&gt; &lt;span class="nt"&gt;--certificate-key&lt;/span&gt; &amp;lt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have successfully deployed a highly available Kubernetes cluster using a stacked etcd topology with &lt;code&gt;kube-vip&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>cluster</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>HA K8s cluster using Keepalived and HAProxy</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Fri, 09 Jan 2026 14:59:10 +0000</pubDate>
      <link>https://dev.to/achu1612/ha-k8s-cluster-using-keepalived-and-haproxy-439</link>
      <guid>https://dev.to/achu1612/ha-k8s-cluster-using-keepalived-and-haproxy-439</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;A stacked HA cluster is a topology where the distributed data storage cluster provided by etcd is stacked on top of the cluster formed by the nodes managed by kubeadm that run control plane components.&lt;/p&gt;

&lt;p&gt;Each control plane node runs an instance of the kube-apiserver, kube-scheduler, and kube-controller-manager. The kube-apiserver is exposed to worker nodes using a load balancer.&lt;/p&gt;

&lt;p&gt;Each control plane node creates a local etcd member and this etcd member communicates only with the kube-apiserver of this node. The same applies to the local kube-controller-manager and kube-scheduler instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6ukc1qguvc4ok5jx6fz.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6ukc1qguvc4ok5jx6fz.webp" alt=" " width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This topology couples the control planes and etcd members on the same nodes. It is simpler to set up than a cluster with external etcd nodes, and simpler to manage for replication.&lt;/p&gt;

&lt;p&gt;Here's what happens in a 3-node stacked cluster:&lt;/p&gt;

&lt;p&gt;Each control plane node runs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;etcd member&lt;/li&gt;
&lt;li&gt;kube-apiserver, scheduler, controller-manager&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So, you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3 etcd members → quorum = 2&lt;/li&gt;
&lt;li&gt;3 API servers → load balanced (can handle 1 down)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If one node fails: You still have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;2 etcd members → quorum maintained&lt;/li&gt;
&lt;li&gt;2 control plane instances → still available&lt;/li&gt;
&lt;/ul&gt;
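&lt;p&gt;The "quorum maintained" claim can be checked with a quick sketch of the arithmetic:&lt;/p&gt;

```shell
# with 3 members, quorum = floor(3/2) + 1 = 2; one failure leaves exactly
# 2 members, so writes still commit -- a second failure would lose quorum
n=3
quorum=$(( n / 2 + 1 ))
for failed in 0 1 2; do
  alive=$(( n - failed ))
  [ "$alive" -ge "$quorum" ] && status=ok || status=lost
  echo "failed=$failed alive=$alive quorum=$status"
done
```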

&lt;p&gt;This is the default topology deployed by kubeadm. A local etcd member is created automatically on control plane nodes when using &lt;code&gt;kubeadm init&lt;/code&gt; and &lt;code&gt;kubeadm join --control-plane&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assumptions&lt;/strong&gt;: You have bootstrapped a cluster with &lt;code&gt;kubeadm&lt;/code&gt; before, as this document does not cover every step in detail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the machines
&lt;/h2&gt;

&lt;p&gt;To set up &lt;strong&gt;HAProxy + Keepalived&lt;/strong&gt; for Kubernetes High Availability (HA) with 3 master nodes and a Virtual IP (VIP), follow this structured approach:&lt;/p&gt;

&lt;p&gt;Masters: 10.238.40.162, 10.238.40.163, 10.238.40.164&lt;br&gt;
VIP: 10.238.40.166&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install HAProxy + Keepalived on all 3 Masters
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update 
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; haproxy keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;HAProxy configuration: edit &lt;code&gt;/etc/haproxy/haproxy.cfg&lt;/code&gt; on all 3 master nodes:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;global
    &lt;span class="nb"&gt;chroot&lt;/span&gt; /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats &lt;span class="nb"&gt;timeout &lt;/span&gt;30s
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    &lt;span class="nb"&gt;timeout &lt;/span&gt;connect 5000ms
    &lt;span class="nb"&gt;timeout &lt;/span&gt;client 50000ms
    &lt;span class="nb"&gt;timeout &lt;/span&gt;server 50000ms
    option httplog
    option dontlognull

frontend kubernetes-apiserver
    &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;:8443
    mode tcp
    option tcplog
    default_backend kubernetes-apiserver

backend kubernetes-apiserver
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 10.238.40.162:6443 check fall 3 rise 2
    server master2 10.238.40.163:6443 check fall 3 rise 2
    server master3 10.238.40.164:6443 check fall 3 rise 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Keepalived configuration: only one node at a time will "own" the VIP (managed by Keepalived), but the config is present on all.
Edit &lt;code&gt;/etc/keepalived/keepalived.conf&lt;/code&gt; on each master node:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Change the &lt;code&gt;priority&lt;/code&gt; value for each node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Master1: priority 110 (MASTER)&lt;/li&gt;
&lt;li&gt;Master2: priority 100 (BACKUP)&lt;/li&gt;
&lt;li&gt;Master3: priority 90 (BACKUP)
&lt;/li&gt;
&lt;/ul&gt;
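&lt;p&gt;Since only &lt;code&gt;state&lt;/code&gt; and &lt;code&gt;priority&lt;/code&gt; differ between nodes, the intended ordering can be sketched with a small loop (a hypothetical templating helper; the IPs are this lab's masters):&lt;/p&gt;

```shell
# the node with the highest priority becomes the initial VRRP MASTER;
# on failure, the highest-priority surviving BACKUP takes over the VIP
masters="10.238.40.162 10.238.40.163 10.238.40.164"
prio=110
for ip in $masters; do
  if [ "$prio" -eq 110 ]; then state=MASTER; else state=BACKUP; fi
  echo "$ip state=$state priority=$prio"
  prio=$(( prio - 10 ))
done
```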
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;global_defs &lt;span class="o"&gt;{&lt;/span&gt;
    router_id LVS_DEVEL
    script_user root
    enable_script_security
&lt;span class="o"&gt;}&lt;/span&gt;

vrrp_script chk_haproxy &lt;span class="o"&gt;{&lt;/span&gt;
    script &lt;span class="s2"&gt;"/bin/curl -f http://localhost:6443/healthz || exit 1"&lt;/span&gt;
    interval 2
    weight &lt;span class="nt"&gt;-2&lt;/span&gt;
    fall 3
    rise 2
&lt;span class="o"&gt;}&lt;/span&gt;

vrrp_instance VI_1 &lt;span class="o"&gt;{&lt;/span&gt;
    state MASTER
    interface enp19s0
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication &lt;span class="o"&gt;{&lt;/span&gt;
        auth_type PASS
        auth_pass k8s-ha-cluster
    &lt;span class="o"&gt;}&lt;/span&gt;
    virtual_ipaddress &lt;span class="o"&gt;{&lt;/span&gt;
        10.238.40.166/24
    &lt;span class="o"&gt;}&lt;/span&gt;
    track_script &lt;span class="o"&gt;{&lt;/span&gt;
        chk_haproxy
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Restart HAProxy and Keepalived
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart haproxy keepalived
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;haproxy keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Validate the VIP appears on one node
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ip addr show | &lt;span class="nb"&gt;grep &lt;/span&gt;10.238.40.166
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxcydupi50001bgzsa0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxcydupi50001bgzsa0s.png" alt=" " width="800" height="155"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check service status
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status haproxy
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Bootstrap the cluster
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a &lt;code&gt;kubeadm-config.yaml&lt;/code&gt; file on the first master node.
Make sure to use the VIP as the control plane endpoint, and include it in the &lt;code&gt;apiServer.certSANs&lt;/code&gt;.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: Change the &lt;code&gt;advertiseAddress&lt;/code&gt; field in InitConfiguration to match each master node's IP address.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.32.6
apiServer:
  certSANs:
    - "10.238.40.166"      # VIP
    - "127.0.0.1"           # Localhost
    - "0.0.0.0"             # Wildcard
    - "10.96.0.1"           # Kubernetes service IP
    - "10.238.40.162"
    - "10.238.40.163"
    - "10.238.40.164"
  extraArgs:
    authorization-mode: Node,RBAC
certificatesDir: /etc/kubernetes/pki
clusterName: pcai
controlPlaneEndpoint: "10.238.40.166:8443"
controllerManager:
  extraArgs:
    bind-address: 0.0.0.0
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
networking:
  dnsDomain: cluster.local
  podSubnet: "172.20.0.0/16"
  serviceSubnet: "172.30.0.0/16"
scheduler:
  extraArgs:
    bind-address: 0.0.0.0
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.238.40.162"
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Initialize the cluster
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm init &lt;span class="nt"&gt;--upload-certs&lt;/span&gt; &lt;span class="nt"&gt;--config&lt;/span&gt; kubeadm-config.yaml &lt;span class="nt"&gt;-v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Save the output! It contains the join commands for control plane and worker nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configure kubectl access
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube
&lt;span class="nb"&gt;sudo cp&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; /etc/kubernetes/admin.conf &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="nv"&gt;$HOME&lt;/span&gt;/.kube/config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install your choice of networking solution (Calico is used here)
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Wait for networking pods to be ready
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ready pod &lt;span class="nt"&gt;-l&lt;/span&gt; k8s-app&lt;span class="o"&gt;=&lt;/span&gt;calico-node &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the control plane node join command (output of the kubeadm init) on the other master nodes.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm &lt;span class="nb"&gt;join &lt;/span&gt;10.238.40.166:8443 &lt;span class="nt"&gt;--token&lt;/span&gt; &amp;lt;token&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--discovery-token-ca-cert-hash&lt;/span&gt; sha256:&amp;lt;&lt;span class="nb"&gt;hash&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="nt"&gt;--control-plane&lt;/span&gt; &lt;span class="nt"&gt;--certificate-key&lt;/span&gt; &amp;lt;cert-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The certificate key is only valid for 2 hours. If it expires, generate a new one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubeadm init phase upload-certs &lt;span class="nt"&gt;--upload-certs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verification and Health Checks
&lt;/h2&gt;

&lt;p&gt;After setting up all control plane nodes, verify the cluster health:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check all nodes are ready
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify control plane components
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check etcd cluster health
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system etcd-&amp;lt;master-node-name&amp;gt; &lt;span class="nt"&gt;--&lt;/span&gt; etcdctl &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--endpoints&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://127.0.0.1:2379 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cacert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/ca.crt &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.crt &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/etc/kubernetes/pki/etcd/server.key &lt;span class="se"&gt;\&lt;/span&gt;
  member list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Test VIP failover
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Stop keepalived on the master node that owns the VIP&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl stop keepalived

&lt;span class="c"&gt;# Verify VIP moves to another node&lt;/span&gt;
ip addr show | &lt;span class="nb"&gt;grep &lt;/span&gt;10.238.40.166

&lt;span class="c"&gt;# Test API access via VIP&lt;/span&gt;
curl &lt;span class="nt"&gt;-k&lt;/span&gt; https://10.238.40.166:8443/healthz

&lt;span class="c"&gt;# Restart keepalived&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start keepalived
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You have successfully deployed a highly available Kubernetes cluster using a stacked etcd topology with HAProxy and Keepalived. This setup provides:&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High Availability&lt;/strong&gt;: Automatic failover with no single point of failure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Distribution&lt;/strong&gt;: Traffic distributed across all API servers via HAProxy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Recovery&lt;/strong&gt;: Keepalived handles VIP failover in seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified Architecture&lt;/strong&gt;: Stacked topology reduces complexity compared to external etcd&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cluster Capabilities
&lt;/h3&gt;

&lt;p&gt;With this 3-master node configuration:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tolerates &lt;strong&gt;1 node failure&lt;/strong&gt; while maintaining full cluster functionality&lt;/li&gt;
&lt;li&gt;Maintains &lt;strong&gt;etcd quorum&lt;/strong&gt; with 2 out of 3 members&lt;/li&gt;
&lt;li&gt;Continues serving API requests through the remaining healthy masters&lt;/li&gt;
&lt;li&gt;Automatically fails over VIP to operational nodes&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cluster</category>
      <category>kubernetes</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Optional FDE in ubuntu using initrd hooks</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Wed, 25 Jun 2025 06:35:33 +0000</pubDate>
      <link>https://dev.to/achu1612/optional-fde-in-ubuntu-using-initrd-hooks-5ao1</link>
      <guid>https://dev.to/achu1612/optional-fde-in-ubuntu-using-initrd-hooks-5ao1</guid>
      <description>&lt;h2&gt;
  
  
  🔍 Context:
&lt;/h2&gt;

&lt;p&gt;Ubuntu’s Autoinstall (Subiquity) typically bakes full disk encryption directly into &lt;code&gt;autoinstall.yaml&lt;/code&gt;, making it an all-or-nothing setup: either every install uses encryption, or none do. This becomes limiting when you want a single ISO image to support both encrypted and unencrypted installs without user interaction. In this blog, we will show how to use an initrd hook and a simple trigger mechanism to dynamically choose the right config at install time, enabling flexible, environment-aware deployments from a unified base image. Refer to the blog post below to understand more about how FDE can be done with the autoinstall process.&lt;/p&gt;
&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0" class="crayons-story__hidden-navigation-link"&gt;Full Disk Encryption (FDE) with Ubuntu Autoinstall&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/achu1612" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F611956%2F19e33fc2-6b4f-4684-ab35-fa5850c7e2fb.jpg" alt="achu1612 profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/achu1612" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Achyuta Das
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Achyuta Das
                
              
              &lt;div id="story-author-preview-content-2619742" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/achu1612" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F611956%2F19e33fc2-6b4f-4684-ab35-fa5850c7e2fb.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Achyuta Das&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jun 25 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0" id="article-link-2619742"&gt;
          Full Disk Encryption (FDE) with Ubuntu Autoinstall
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/linux"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;linux&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/security"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;security&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/encryption"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;encryption&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/fire-f60e7a582391810302117f987b22a8ef04a2fe0df7e3258a5f49332df1cec71e.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;2&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            3 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;
 
&lt;h2&gt;
  
  
  🛠️ How it works:
&lt;/h2&gt;

&lt;p&gt;The idea is to leverage Ubuntu’s initrd hooks to inject a small script that decides which autoinstall.yaml config to use — with or without full disk encryption — at runtime.&lt;/p&gt;

&lt;p&gt;Here’s the breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Two Configs Inside the ISO

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;fde/user-data&lt;/code&gt; → contains full disk encryption.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nofde/user-data&lt;/code&gt; → no encryption.
Both sets of configurations are included in the ISO in a known directory.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A tiny empty image file (~350 KB) with a custom label "fde" will be used as the trigger. When installing, if this image is attached (e.g. via USB, floppy, or a cloud-init disk), it will appear on the system as: &lt;code&gt;/dev/disk/by-label/fde&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Initrd Hook Script
A custom initrd hook script runs very early in the boot process, just before the system pivots to the installer filesystem. It will:

&lt;ul&gt;
&lt;li&gt;Check whether &lt;code&gt;/dev/disk/by-label/fde&lt;/code&gt; exists.&lt;/li&gt;
&lt;li&gt;If it does → copy &lt;code&gt;fde/user-data&lt;/code&gt; to a target folder.&lt;/li&gt;
&lt;li&gt;If not → copy &lt;code&gt;nofde/user-data&lt;/code&gt; instead.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;At build time, update &lt;code&gt;/boot/grub/grub.cfg&lt;/code&gt; so that autoinstall picks up its configuration from the target folder.&lt;/li&gt;
&lt;/ul&gt;
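&lt;p&gt;The trigger image from the second point can be produced with standard tools. A minimal sketch (the label must be exactly "fde"; the filesystem type is an arbitrary choice, ext2 here):&lt;/p&gt;

```shell
# create a ~350 KB empty image file to act as the trigger
dd if=/dev/zero of=fde.img bs=1k count=350 2>/dev/null
# put any labeled filesystem on it; udev then exposes it as /dev/disk/by-label/fde
# (ext2 is used here only for illustration; mkfs.ext2 ships with e2fsprogs)
command -v mkfs.ext2 >/dev/null && mkfs.ext2 -q -F -L fde fde.img
```

&lt;p&gt;Attach this image to the machine (USB stick, virtual disk, etc.) before booting the installer to get an encrypted install; omit it for an unencrypted one.&lt;/p&gt;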
&lt;h2&gt;
  
  
  🧪 Modifications:
&lt;/h2&gt;

&lt;p&gt;I used the Ubuntu 24.04 live server image for this. Let's make a copy of the ISO contents so we can modify them&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/iso ~/edit
&lt;span class="nb"&gt;sudo &lt;/span&gt;mount &lt;span class="nt"&gt;-o&lt;/span&gt; loop ubuntu-24.04.2-live-server-amd64.iso /mnt/iso
rsync &lt;span class="nt"&gt;-a&lt;/span&gt; /mnt/iso/ ~/edit
&lt;span class="nb"&gt;sudo &lt;/span&gt;umount /mnt/iso
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Let's first create a folder in the base ISO that will hold both the fde and no-fde autoinstall configurations.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/edit
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; cidata/&lt;span class="o"&gt;{&lt;/span&gt;fde,nofde&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; &amp;gt;&amp;gt; cidata/fde/user-data
... Your autoinstall.yaml content with full drive encryption enabled...
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;cidata/fde/meta-data

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; &amp;gt;&amp;gt; cidata/nofde/user-data
... Your autoinstall.yaml content without full drive encryption enabled...
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;span class="nb"&gt;touch &lt;/span&gt;cidata/nofde/meta-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We need to add a hook to the initrd. So let's first extract the existing initrd.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; ~/edit/initrd-unpacked/
&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/edit/initrd-unpacked
unmkinitramfs ~/edit/casper/initrd ~/edit/initrd-unpacked/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a file &lt;code&gt;~/edit/initrd-unpacked/main/scripts/casper-bottom/98mount-cidata&lt;/code&gt; with the content below
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/sh&lt;/span&gt;

&lt;span class="nv"&gt;PREREQ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;

prereqs&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PREREQ&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="nv"&gt;$1&lt;/span&gt; &lt;span class="k"&gt;in
    &lt;/span&gt;prereqs&lt;span class="p"&gt;)&lt;/span&gt;
        prereqs
        &lt;span class="nb"&gt;exit &lt;/span&gt;0
        &lt;span class="p"&gt;;;&lt;/span&gt;
&lt;span class="k"&gt;esac&lt;/span&gt;

&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /root/setup

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-b&lt;/span&gt; /dev/disk/by-label/fde &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /root/cdrom/cidata/fde/. /root/setup/
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"FDE data copied to /root/setup/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/console
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; /root/cdrom/cidata/nofde/. /root/setup/
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"No FDE data found, copied nofde data to /root/setup/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/console
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; root:root /root/setup/

&lt;span class="nb"&gt;exit &lt;/span&gt;0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Make the script executable (&lt;code&gt;chmod +x&lt;/code&gt;) and update the ORDER file located at &lt;code&gt;~/edit/initrd-unpacked/main/scripts/casper-bottom&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# ORDER file before&lt;/span&gt;
...
/scripts/casper-bottom/61desktop_canary_tweaks &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; /conf/param.conf &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; /conf/param.conf
/scripts/casper-bottom/99casperboot &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; /conf/param.conf &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; /conf/param.conf

&lt;span class="c"&gt;# ORDER file after&lt;/span&gt;
...
/scripts/casper-bottom/61desktop_canary_tweaks &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; /conf/param.conf &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; /conf/param.conf
/scripts/casper-bottom/98mount-cidata &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; /conf/param.conf &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; /conf/param.conf
/scripts/casper-bottom/99casperboot &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$@&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; /conf/param.conf &lt;span class="o"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; /conf/param.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now that the changes are done, let's re-build the initrd
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/edit/initrd-unpacked
&lt;span class="nb"&gt;cd &lt;/span&gt;early
find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-print0&lt;/span&gt; | cpio &lt;span class="nt"&gt;--null&lt;/span&gt; &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;newc &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /tmp/initrd
&lt;span class="nb"&gt;cd&lt;/span&gt; ../early2
find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-print0&lt;/span&gt; | cpio &lt;span class="nt"&gt;--null&lt;/span&gt; &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;newc &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /tmp/initrd
&lt;span class="nb"&gt;cd&lt;/span&gt; ../early3
find &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;-print0&lt;/span&gt; | cpio &lt;span class="nt"&gt;--null&lt;/span&gt; &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;newc &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /tmp/initrd
&lt;span class="nb"&gt;cd&lt;/span&gt; ../main
find &lt;span class="nb"&gt;.&lt;/span&gt; | cpio &lt;span class="nt"&gt;--create&lt;/span&gt; &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;newc | xz &lt;span class="nt"&gt;--format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;lzma &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /tmp/initrd
&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ~/edit/casper/initrd
&lt;span class="nb"&gt;mv&lt;/span&gt; /tmp/initrd ~/edit/casper/initrd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Update the &lt;code&gt;grub.cfg&lt;/code&gt; so that autoinstall picks up its configuration from the &lt;code&gt;/setup&lt;/code&gt; path on the filesystem.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;menuentry "Install Ubuntu Server" {
    set gfxpayload=keep
    linux   /casper/vmlinuz quiet autoinstall ds=nocloud\;s=/setup apparmor=0 ---  ---
    initrd  /casper/initrd
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Generate the md5 sums.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/edit
&lt;span class="nb"&gt;sudo rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; md5sum.txt
&lt;span class="nb"&gt;sudo &lt;/span&gt;bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"find . -type f   ! -path './isolinux/*'   ! -path './boot/*'   -print0   | xargs -0 md5sum &amp;gt; md5sum.txt"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Rebuild the iso.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;xorriso &lt;span class="nt"&gt;-as&lt;/span&gt; mkisofs &lt;span class="nt"&gt;-J&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-V&lt;/span&gt; Ubuntu-Server &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-o&lt;/span&gt; custom.iso &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--grub2-mbr&lt;/span&gt; &amp;lt;path to your custom xx-Boot-NoEmul.img&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-partition_offset&lt;/span&gt; 16 &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;--mbr-force-bootable&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-append_partition&lt;/span&gt; 2 &amp;lt;GUID&amp;gt; &amp;lt;path to your custom xx-Boot-NoEmul.img&amp;gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-appended_part_as_gpt&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-iso_mbr_part_type&lt;/span&gt; &amp;lt;GUID&amp;gt;&lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s1"&gt;'boot.catalog'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s1"&gt;'boot/grub/i386-pc/eltorito.img'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-no-emul-boot&lt;/span&gt; &lt;span class="nt"&gt;-boot-load-size&lt;/span&gt; 4 &lt;span class="nt"&gt;-boot-info-table&lt;/span&gt; &lt;span class="nt"&gt;--grub2-boot-info&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-eltorito-alt-boot&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'--interval:appended_partition_2:::'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      &lt;span class="nt"&gt;-no-emul-boot&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
      ~/edit/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;With this setup, we have made Ubuntu's Autoinstall system a bit smarter and more flexible: capable of deciding at runtime whether to apply full disk encryption, without any user interaction or multiple full ISO images. All we need is to attach an extra empty, labeled image. This approach keeps reproducible, automated deployments simple.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>encryption</category>
      <category>security</category>
    </item>
    <item>
      <title>Full Disk Encryption (FDE) with Ubuntu Autoinstall</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Wed, 25 Jun 2025 05:29:15 +0000</pubDate>
      <link>https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0</link>
      <guid>https://dev.to/achu1612/full-disk-encryption-fde-with-ubuntu-autoinstall-2dk0</guid>
      <description>&lt;h2&gt;
  
  
  🔍 Context
&lt;/h2&gt;

&lt;p&gt;For system administrators and security-conscious developers, encrypting data at rest is a fundamental best practice, especially for laptops, servers in untrusted environments, or sensitive workloads. Full Disk Encryption (FDE) ensures that all data on the disk is encrypted and can only be accessed after providing an unlocking key, safeguarding the data even if an attacker gains physical access to the disk.&lt;/p&gt;

&lt;p&gt;By leveraging cloud-init's autoinstall YAML, we can fully automate the provisioning process, including LUKS encryption, LVM setup, and embedding a drive-unlocking key into the &lt;code&gt;initramfs&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ Implementation Overview
&lt;/h2&gt;

&lt;p&gt;Here’s a high-level overview of what we will be doing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use LUKS to encrypt the main data partition.&lt;/li&gt;
&lt;li&gt;Inside the encrypted container, create an LVM volume group to manage the logical volumes.&lt;/li&gt;
&lt;li&gt;Generate a random key file early in the installation process, which is used to unlock the encrypted volume.&lt;/li&gt;
&lt;li&gt;Ensure this key is securely copied into the target system and embedded into the &lt;code&gt;initramfs&lt;/code&gt; so the system can boot without manual passphrase entry.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧪 Workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Early Commands: Generate a 4KB random keyfile (&lt;code&gt;root.key&lt;/code&gt;) under &lt;code&gt;/etc/cryptsetup-keys.d&lt;/code&gt;, with appropriate permissions.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;early-commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mkdir -p /etc/cryptsetup-keys.d&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dd if=/dev/urandom of=/etc/cryptsetup-keys.d/root.key bs=1024 count=4&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;chmod 600 /etc/cryptsetup-keys.d/root.key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
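&lt;p&gt;To see what those commands produce, the same steps can be mirrored against a scratch directory (illustration only; bs/count and the filename are taken from the config above):&lt;/p&gt;

```shell
# mirror of the early-commands against a temporary directory (illustration only)
dir=$(mktemp -d)
mkdir -p "$dir/cryptsetup-keys.d"
dd if=/dev/urandom of="$dir/cryptsetup-keys.d/root.key" bs=1024 count=4 2>/dev/null
chmod 600 "$dir/cryptsetup-keys.d/root.key"
stat -c '%a %s' "$dir/cryptsetup-keys.d/root.key"   # prints: 600 4096
rm -rf "$dir"
```

&lt;p&gt;The 600 permissions matter: cryptsetup refuses (or warns about) keyfiles that other users can read.&lt;/p&gt;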



&lt;ul&gt;
&lt;li&gt;Storage config: Partition the disk into:

&lt;ul&gt;
&lt;li&gt;An EFI system partition (&lt;code&gt;/boot/efi&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;An unencrypted &lt;code&gt;/boot&lt;/code&gt; partition&lt;/li&gt;
&lt;li&gt;A third partition for encrypted data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Create a LUKS-encrypted volume on the third partition using the key file. On top of that, create an LVM volume group (VolGroup) and allocate a root logical volume (lv_root).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ptable&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gpt&lt;/span&gt;
        &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;smallest&lt;/span&gt;
        &lt;span class="na"&gt;wipe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superblock-recursive&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
        &lt;span class="na"&gt;grub_device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disk-sda&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disk&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disk-sda&lt;/span&gt;
        &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;536870912&lt;/span&gt;  &lt;span class="c1"&gt;# 512MB&lt;/span&gt;
        &lt;span class="na"&gt;wipe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superblock&lt;/span&gt;
        &lt;span class="na"&gt;flag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;boot&lt;/span&gt;
        &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;grub_device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition-0&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fstype&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fat32&lt;/span&gt;
        &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition-0&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format-0&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/boot/efi&lt;/span&gt;
        &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format-0&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mount-0&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mount&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disk-sda&lt;/span&gt;
        &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1073741824&lt;/span&gt;  &lt;span class="c1"&gt;# 1GB&lt;/span&gt;
        &lt;span class="na"&gt;wipe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superblock&lt;/span&gt;
        &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;grub_device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition-1&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fstype&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ext4&lt;/span&gt;
        &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition-1&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format-1&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/boot&lt;/span&gt;
        &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format-1&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mount-1&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mount&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disk-sda&lt;/span&gt;
        &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
        &lt;span class="na"&gt;wipe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superblock&lt;/span&gt;
        &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;grub_device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition-2&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cryptlvm&lt;/span&gt;
        &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;partition-2&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;keyfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/etc/cryptsetup-keys.d/root.key&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dm_crypt-lvm&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dm_crypt&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;VolGroup&lt;/span&gt;
        &lt;span class="na"&gt;devices&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;dm_crypt-lvm&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lvm_volgroup-0&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lvm_volgroup&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lv_root&lt;/span&gt;
        &lt;span class="na"&gt;volgroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lvm_volgroup-0&lt;/span&gt;
        &lt;span class="na"&gt;size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;-1&lt;/span&gt;
        &lt;span class="na"&gt;wipe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;superblock&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lvm_partition-root&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lvm_partition&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;fstype&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ext4&lt;/span&gt;
        &lt;span class="na"&gt;volume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lvm_partition-root&lt;/span&gt;
        &lt;span class="na"&gt;preserve&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format-root&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
        &lt;span class="na"&gt;device&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;format-root&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mount-root&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mount&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Late Commands:

&lt;ul&gt;
&lt;li&gt;Extract the UUID of the LUKS volume and copy the key file into &lt;code&gt;/etc/cryptsetup-keys.d&lt;/code&gt; inside the target filesystem, renaming it to match the UUID.&lt;/li&gt;
&lt;li&gt;Update &lt;code&gt;/etc/crypttab&lt;/code&gt; to ensure the system knows how to unlock the encrypted volume during boot.&lt;/li&gt;
&lt;li&gt;Add the appropriate hook configuration for &lt;code&gt;cryptsetup-initramfs&lt;/code&gt; and rebuild the &lt;code&gt;initramfs&lt;/code&gt; so the key file will be included and accessible early in the boot process.
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;late-commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mkdir -p /target/etc/cryptsetup-keys.d /target/etc/cryptsetup-initramfs&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;bash -c '&lt;/span&gt;
        &lt;span class="s"&gt;luks_uuid=$(blkid -t TYPE=crypto_LUKS -s UUID -o value);&lt;/span&gt;
        &lt;span class="s"&gt;cp /etc/cryptsetup-keys.d/root.key "/target/etc/cryptsetup-keys.d/${luks_uuid}.key";&lt;/span&gt;
        &lt;span class="s"&gt;chmod 600 "/target/etc/cryptsetup-keys.d/${luks_uuid}.key";&lt;/span&gt;
        &lt;span class="s"&gt;echo "dm_crypt-lvm UUID=${luks_uuid} /etc/cryptsetup-keys.d/${luks_uuid}.key luks" &amp;gt; /target/etc/crypttab;&lt;/span&gt;
        &lt;span class="s"&gt;echo "KEYFILE_PATTERN=/etc/cryptsetup-keys.d/*.key" &amp;gt;&amp;gt; /target/etc/cryptsetup-initramfs/conf-hook;&lt;/span&gt;
      &lt;span class="s"&gt;'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;curtin in-target --target=/target -- update-initramfs -c -k all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  ✅ Conclusion
&lt;/h2&gt;

&lt;p&gt;With this setup, we successfully created a secure, automated Ubuntu installation with full disk encryption using LUKS and LVM. The system boots without requiring a passphrase, thanks to the securely handled key file integrated into the &lt;code&gt;initramfs&lt;/code&gt;. &lt;br&gt;
This makes the setup highly suitable for hands-off provisioning in secure environments.&lt;/p&gt;

&lt;p&gt;Looking ahead, this foundation can be extended to further enhance security by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Binding the LUKS key to a TPM after installation to enable secure, passphrase-free unlocks based on hardware identity.&lt;/li&gt;
&lt;li&gt;Integrating with Secure Boot and measured boot for a trusted boot chain.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>security</category>
      <category>encryption</category>
    </item>
    <item>
      <title>Disk Encryption using LUKS and TPM2.0</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Sun, 25 May 2025 14:36:58 +0000</pubDate>
      <link>https://dev.to/achu1612/disk-encryption-using-luks-and-tpm20-19hb</link>
      <guid>https://dev.to/achu1612/disk-encryption-using-luks-and-tpm20-19hb</guid>
      <description>&lt;h3&gt;
  
  
  Introduction to LUKS
&lt;/h3&gt;

&lt;p&gt;Linux Unified Key Setup (LUKS) is a disk encryption specification that encrypts block devices, such as disk drives and removable storage media. LUKS offers Full Disk Encryption (FDE) and selective partition-based encryption.&lt;/p&gt;

&lt;p&gt;The system prompts you for a passphrase every time you boot the computer to unlock the encrypted disk. LUKS encrypted volumes can be automatically unlocked (without the need to provide a passphrase at boot) using a Trusted Platform Module 2.0 (TPM 2.0) policy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction to TPM2.0
&lt;/h3&gt;

&lt;p&gt;The Trusted Platform Module 2.0 (TPM) is a hardware-based system security feature that securely stores passwords, certificates, and encryption keys to authenticate the platform. It is embedded in the server motherboard.&lt;/p&gt;

&lt;p&gt;TPM supports auto unlocking of the encrypted disk during system startup without requiring any user intervention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup Information
&lt;/h3&gt;

&lt;p&gt;This was performed on vanilla Ubuntu 24.04 using partition-based encryption. Ubuntu was installed on a VM created using Multipass on top of Hyper-V, with TPM and Secure Boot enabled for the VM.&lt;/p&gt;

&lt;p&gt;The steps should work on a bare-metal node as well.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@honeyglaze:~# &lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt;
6.8.0-60-generic

root@honeyglaze:~# &lt;span class="nb"&gt;cat&lt;/span&gt; /etc/os-release
&lt;span class="nv"&gt;PRETTY_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Ubuntu 24.04.2 LTS"&lt;/span&gt;
&lt;span class="nv"&gt;NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Ubuntu"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"24.04"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"24.04.2 LTS (Noble Numbat)"&lt;/span&gt;
&lt;span class="nv"&gt;VERSION_CODENAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noble
&lt;span class="nv"&gt;ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu
&lt;span class="nv"&gt;ID_LIKE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;debian
&lt;span class="nv"&gt;HOME_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://www.ubuntu.com/"&lt;/span&gt;
&lt;span class="nv"&gt;SUPPORT_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://help.ubuntu.com/"&lt;/span&gt;
&lt;span class="nv"&gt;BUG_REPORT_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://bugs.launchpad.net/ubuntu/"&lt;/span&gt;
&lt;span class="nv"&gt;PRIVACY_POLICY_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"&lt;/span&gt;
&lt;span class="nv"&gt;UBUNTU_CODENAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noble
&lt;span class="nv"&gt;LOGO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ubuntu-logo

&lt;span class="c"&gt;# Validating TPM device&lt;/span&gt;
root@honeyglaze:~# &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /dev/tpm&lt;span class="k"&gt;*&lt;/span&gt;
crw-rw---- 1 tss root  10,   224 May 22 05:49 /dev/tpm0
crw-rw---- 1 tss tss  253, 65536 May 22 05:49 /dev/tpmrm0

&lt;span class="c"&gt;# Validating secure boot&lt;/span&gt;
root@honeyglaze:~# mokutil &lt;span class="nt"&gt;--sb-state&lt;/span&gt;
SecureBoot enabled

&lt;span class="c"&gt;# cryptsetup version&lt;/span&gt;
root@honeyglaze:~# cryptsetup &lt;span class="nt"&gt;--version&lt;/span&gt;
cryptsetup 2.7.0 flags: UDEV BLKID KEYRING FIPS KERNEL_CAPI HW_OPAL

root@honeyglaze:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda       8:0    0   10G  0 disk                    &lt;span class="c"&gt;#### We will be working on /dev/sda disk&lt;/span&gt;
sdb       8:16   0   20G  0 disk
├─sdb1    8:17   0   19G  0 part /
├─sdb14   8:30   0    4M  0 part
├─sdb15   8:31   0  106M  0 part /boot/efi
└─sdb16 259:0    0  913M  0 part /boot
sr0      11:0    1   54K  0 rom
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Encrypting the disk
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;cryptsetup luksFormat&lt;/code&gt;  command to encrypt the disk. Provide a passphrase for encrypting the partition.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@honeyglaze:~# cryptsetup luksFormat /dev/sda

WARNING!
&lt;span class="o"&gt;========&lt;/span&gt;
This will overwrite data on /dev/sda irrevocably.

Are you sure? &lt;span class="o"&gt;(&lt;/span&gt;Type &lt;span class="s1"&gt;'yes'&lt;/span&gt; &lt;span class="k"&gt;in &lt;/span&gt;capital letters&lt;span class="o"&gt;)&lt;/span&gt;: YES
Enter passphrase &lt;span class="k"&gt;for&lt;/span&gt; /dev/sda:
Verify passphrase:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Open the encrypted partition, creating a logical device-mapper device that can then be formatted and mounted. Provide the same passphrase given in the previous step to unlock the encrypted drive.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@honeyglaze:~# cryptsetup luksOpen /dev/sda cryptpart
Enter passphrase &lt;span class="k"&gt;for&lt;/span&gt; /dev/sda:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Format the new partition with the ext4 file system.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@honeyglaze:~# mkfs.ext4 /dev/mapper/cryptpart
mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 2617344 4k blocks and 655360 inodes
Filesystem UUID: 73c76f23-804d-431d-84be-4767a6dc81c7
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information:  done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Mount the new file system.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@honeyglaze:~# &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /mnt/cryptpart

root@honeyglaze:~# mount /dev/mapper/cryptpart /mnt/cryptpart/

root@honeyglaze:~# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   10G  0 disk
└─cryptpart 252:0    0   10G  0 crypt /mnt/cryptpart
sdb           8:16   0   20G  0 disk
├─sdb1        8:17   0   19G  0 part  /
├─sdb14       8:30   0    4M  0 part
├─sdb15       8:31   0  106M  0 part  /boot/efi
└─sdb16     259:0    0  913M  0 part  /boot
sr0          11:0    1   54K  0 rom
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Add a recovery key.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate a key&lt;/span&gt;
root@honeyglaze:~# openssl rand &lt;span class="nt"&gt;-base64&lt;/span&gt; 32 | &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-c1-32&lt;/span&gt; | &lt;span class="nb"&gt;tee &lt;/span&gt;keyfile.txt
UXItVWYLicvja4u4PdYtMZ/iPwcufRGz

&lt;span class="c"&gt;# Use luksAddKey to add the generate key, enter the passphrase (given in step 1) when prompted.&lt;/span&gt;
root@honeyglaze:~# cryptsetup luksAddKey /dev/sda keyfile.txt
Enter any existing passphrase:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Validate the key slots.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Key slot 0 and 1 are for the primary passphrase and the recovery key&lt;/span&gt;

root@honeyglaze:~# cryptsetup luksDump /dev/sda
LUKS header information
Version:        2
Epoch:          4
Metadata area:  16384 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
Keyslots area:  16744448 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
UUID:           608d9da9-a3fa-4a4e-8923-c3e4d95247a3
Label:          &lt;span class="o"&gt;(&lt;/span&gt;no label&lt;span class="o"&gt;)&lt;/span&gt;
Subsystem:      &lt;span class="o"&gt;(&lt;/span&gt;no subsystem&lt;span class="o"&gt;)&lt;/span&gt;
Flags:          &lt;span class="o"&gt;(&lt;/span&gt;no flags&lt;span class="o"&gt;)&lt;/span&gt;

Data segments:
  0: crypt
        offset: 16777216 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
        length: &lt;span class="o"&gt;(&lt;/span&gt;whole device&lt;span class="o"&gt;)&lt;/span&gt;
        cipher: aes-xts-plain64
        sector: 4096 &lt;span class="o"&gt;[&lt;/span&gt;bytes]

Keyslots:
  0: luks2
        Key:        512 bits
        Priority:   normal
        Cipher:     aes-xts-plain64
        Cipher key: 512 bits
        PBKDF:      argon2id
        Time cost:  6
        Memory:     1048576
        Threads:    2
        Salt:       c4 ec 58 59 3a 34 48 bb 01 44 94 6f 7f 41 4d d9
                    a8 74 ba a6 1a 57 60 86 f1 19 69 e5 c2 d6 e7 30
        AF stripes: 4000
        AF &lt;span class="nb"&gt;hash&lt;/span&gt;:    sha256
        Area offset:32768 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
        Area length:258048 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
        Digest ID:  0
  1: luks2
        Key:        512 bits
        Priority:   normal
        Cipher:     aes-xts-plain64
        Cipher key: 512 bits
        PBKDF:      argon2id
        Time cost:  5
        Memory:     1048576
        Threads:    2
        Salt:       3e fb 84 89 4a c2 53 bb 4e d8 0f d1 aa a3 f1 81
                    e1 31 8a af 67 69 b8 96 c1 da ba a1 d7 db 2f 72
        AF stripes: 4000
        AF &lt;span class="nb"&gt;hash&lt;/span&gt;:    sha256
        Area offset:290816 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
        Area length:258048 &lt;span class="o"&gt;[&lt;/span&gt;bytes]
        Digest ID:  0
Tokens:
Digests:
  0: pbkdf2
        Hash:       sha256
        Iterations: 246375
        Salt:       2a 25 1d 0e b9 33 2b 8b b6 8f 28 &lt;span class="nb"&gt;fc &lt;/span&gt;71 4e 51 08
                    78 ba d8 36 45 21 13 b4 6e 66 0a a5 85 f1 a9 eb
        Digest:     2a d1 7b 8a e2 b2 3d f1 17 29 7a 16 54 1e 3a 67
                    a8 f8 f0 f7 79 c1 a3 06 2b bd bf 09 9d 3b 36 4d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Decrypt a LUKS volume with Clevis and TPM
&lt;/h3&gt;

&lt;p&gt;Clevis is a pluggable framework for automated decryption. It can be used to provide automated decryption of data or even automated unlocking of LUKS volumes. I went with Clevis because, if the key sealed in the TPM chip is corrupted or unreachable, Clevis automatically falls back to prompting for the passphrase/recovery key, without requiring any intervention to update &lt;code&gt;/etc/crypttab&lt;/code&gt; to drop the TPM device.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install the required packages.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;dracut-core clevis clevis-tpm2 clevis-luks clevis-dracut tss2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Bind LUKS encryption key/password to TPM2.0. Enter the passphrase when prompted.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# "-s 2" specifies the key slot. 0 and 1 slots are already in use.&lt;/span&gt;
root@honeyglaze:~# clevis luks &lt;span class="nb"&gt;bind&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; /dev/sdb tpm2 &lt;span class="s1"&gt;'{"pcr_bank":"sha256","pcr_ids":"7"}'&lt;/span&gt;
Enter existing LUKS password:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Validate the binding.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@honeyglaze:~# clevis luks list &lt;span class="nt"&gt;-d&lt;/span&gt; /dev/sda
2: tpm2 &lt;span class="s1"&gt;'{"hash":"sha256","key":"ecc","pcr_bank":"sha256","pcr_ids":"7"}'&lt;/span&gt;

root@honeyglaze:~# systemd-cryptenroll /dev/sda
SLOT TYPE
   0 password   &lt;span class="c"&gt;# passphrase&lt;/span&gt;
   1 password   &lt;span class="c"&gt;# recovery key&lt;/span&gt;
   2 other      &lt;span class="c"&gt;# clevis binding&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Let's try to unlock the drive and check whether we are prompted for the passphrase.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# First we need to unmount the disk&lt;/span&gt;
root@honeyglaze:~# umount /mnt/cryptpart

root@honeyglaze:~# cryptsetup luksClose cryptpart

root@honeyglaze:~# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda       8:0    0   10G  0 disk
sdb       8:16   0   20G  0 disk
├─sdb1    8:17   0   19G  0 part /
├─sdb14   8:30   0    4M  0 part
├─sdb15   8:31   0  106M  0 part /boot/efi
└─sdb16 259:0    0  913M  0 part /boot
sr0      11:0    1   54K  0 rom

&lt;span class="c"&gt;# unlock command should not give a prompt to enter the passphrase&lt;/span&gt;
root@honeyglaze:~# clevis luks unlock &lt;span class="nt"&gt;-d&lt;/span&gt; /dev/sda &lt;span class="nt"&gt;-n&lt;/span&gt; cryptpart

root@honeyglaze:~# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   10G  0 disk
└─cryptpart 252:0    0   10G  0 crypt
sdb           8:16   0   20G  0 disk
├─sdb1        8:17   0   19G  0 part  /
├─sdb14       8:30   0    4M  0 part
├─sdb15       8:31   0  106M  0 part  /boot/efi
└─sdb16     259:0    0  913M  0 part  /boot
sr0          11:0    1   54K  0 rom
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Enable automatic decryption during boot
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt; Add an entry to &lt;code&gt;/etc/crypttab&lt;/code&gt;  for decrypting the volume at boot time.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Use blkid to get the UUID of the encrypted drive&lt;/span&gt;
root@honeyglaze:~# blkid &lt;span class="nt"&gt;-t&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;crypto_LUKS
/dev/sda: &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"608d9da9-a3fa-4a4e-8923-c3e4d95247a3"&lt;/span&gt; &lt;span class="nv"&gt;TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"crypto_LUKS"&lt;/span&gt;

root@honeyglaze:~# &lt;span class="nb"&gt;cat&lt;/span&gt; /etc/crypttab
&lt;span class="c"&gt;# &amp;lt;target name&amp;gt; &amp;lt;source device&amp;gt;         &amp;lt;key file&amp;gt;      &amp;lt;options&amp;gt;&lt;/span&gt;
cryptpart    &lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;608d9da9-a3fa-4a4e-8923-c3e4d95247a3 none luks,discard,nofail
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Rebuild the initramfs  and reboot.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;root@honeyglaze:~# dracut &lt;span class="nt"&gt;-f&lt;/span&gt;

......
dracut[I]: &lt;span class="k"&gt;***&lt;/span&gt; Creating image file &lt;span class="s1"&gt;'/boot/initrd.img-6.8.0-60-generic'&lt;/span&gt; &lt;span class="k"&gt;***&lt;/span&gt;
dracut[I]: Using auto-determined compression method &lt;span class="s1"&gt;'pigz'&lt;/span&gt;
dracut[I]: &lt;span class="k"&gt;***&lt;/span&gt; Creating initramfs image file &lt;span class="s1"&gt;'/boot/initrd.img-6.8.0-60-generic'&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt; &lt;span class="k"&gt;***&lt;/span&gt;

root@honeyglaze:~# reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Validate boot logs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
root@honeyglaze:~# journalctl &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; boot.log

root@honeyglaze:~# &lt;span class="nb"&gt;cat &lt;/span&gt;boot.log | &lt;span class="nb"&gt;grep &lt;/span&gt;tpm
May 22 08:10:24 localhost kernel: tpm_crb VTPM0101:00: &lt;span class="o"&gt;[&lt;/span&gt;Firmware Bug]: Bad ACPI memory layout
May 22 08:10:24 localhost kernel: tpm_crb VTPM0101:00: &lt;span class="o"&gt;[&lt;/span&gt;Firmware Bug]: Bad ACPI memory layout
May 22 08:10:26 honeyglaze systemd[1]: systemd-tpm2-setup-early.service - TPM2 SRK Setup &lt;span class="o"&gt;(&lt;/span&gt;Early&lt;span class="o"&gt;)&lt;/span&gt; was skipped because of an unmet condition check &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;ConditionSecurity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;measured-uki&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
May 22 08:10:26 honeyglaze systemd[1]: systemd-tpm2-setup.service - TPM2 SRK Setup was skipped because of an unmet condition check &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;ConditionSecurity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;measured-uki&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
May 22 08:10:29 honeyglaze systemd[1]: systemd-tpm2-setup-early.service - TPM2 SRK Setup &lt;span class="o"&gt;(&lt;/span&gt;Early&lt;span class="o"&gt;)&lt;/span&gt; was skipped because of an unmet condition check &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;ConditionSecurity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;measured-uki&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
May 22 08:10:29 honeyglaze systemd[1]: systemd-tpm2-setup.service - TPM2 SRK Setup was skipped because of an unmet condition check &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;ConditionSecurity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;measured-uki&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
May 22 08:10:32 honeyglaze systemd[1]: tpm-udev.path - Handle dynamically added tpm devices was skipped because of an unmet condition check &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;ConditionVirtualization&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;container&lt;span class="o"&gt;)&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

root@honeyglaze:~# &lt;span class="nb"&gt;cat &lt;/span&gt;boot.log | &lt;span class="nb"&gt;grep &lt;/span&gt;clevis
May 22 08:10:24 localhost systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 22 08:10:26 localhost systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 22 08:10:26 localhost systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 22 08:10:24 localhost systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 22 08:10:26 localhost systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 22 08:10:26 localhost systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 22 08:10:26 honeyglaze systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 22 08:10:29 honeyglaze systemd[1]: Started clevis-luks-askpass.service - Forward Password Requests to Clevis.
May 22 08:10:29 honeyglaze clevis-luks-askpass[809]: Unlocked /dev/disk/by-uuid/608d9da9-a3fa-4a4e-8923-c3e4d95247a3 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;UUID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;608d9da9-a3fa-4a4e-8923-c3e4d95247a3&lt;span class="o"&gt;)&lt;/span&gt; successfully
May 22 08:10:36 honeyglaze systemd[1]: clevis-luks-askpass.service: Deactivated successfully.
May 22 08:10:36 honeyglaze systemd[1]: clevis-luks-askpass.service: Consumed 1.283s CPU time.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bonus Info
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;TPM is a hardware chip that can securely store keys and perform cryptographic operations.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;pcr_id&lt;/code&gt; refers to the Platform Configuration Register (PCR) index used by the TPM (Trusted Platform Module) to store measurements of system state during the boot process. These are registers in the TPM (numbered 0–23 in TPM 2.0) that hold hashes of system components measured during boot. Their values change whenever the boot environment changes, which makes them great for detecting tampering.&lt;/li&gt;
&lt;li&gt;When you bind a LUKS volume to TPM2 using Clevis, you specify one or more &lt;code&gt;pcr_ids&lt;/code&gt;, for example with &lt;code&gt;clevis luks bind -d /dev/sdX tpm2 '{"pcr_ids":"0,7"}'&lt;/code&gt;. This command seals the decryption key to the current values of PCRs 0 and 7, which represent specific measurements of the system state (like firmware, bootloader, and kernel). If these PCR values change, due to modifications in boot parameters, kernel, initramfs, or the bootloader, the TPM will not unseal the key, and the system will prompt for the LUKS passphrase. This mechanism ensures that the disk can only be automatically decrypted in a known, trusted, and unmodified boot environment.&lt;/li&gt;
&lt;li&gt;Execute &lt;code&gt;sudo tpm2_pcrread sha256:0,1,2,3,4,5,6,7&lt;/code&gt; to display the hashes. Reboot and validate whether these hash values change.&lt;/li&gt;
&lt;/ul&gt;
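&lt;p&gt;The sealing behaviour above can be sketched in plain Go. This is only an illustrative simulation of the TPM's extend operation (new PCR value = SHA-256 of the old value concatenated with the new measurement), not real TPM code; the component names are made up.&lt;/p&gt;

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// extend models TPM PCR extension: the new PCR value is
// SHA-256(old PCR value || new measurement).
func extend(pcr [32]byte, measurement []byte) [32]byte {
	h := sha256.New()
	h.Write(pcr[:])
	h.Write(measurement)
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	var pcr7 [32]byte // PCRs start at all zeroes on reset

	// Measure a pretend boot chain: firmware, bootloader, kernel.
	for _, component := range []string{"firmware", "bootloader", "kernel"} {
		pcr7 = extend(pcr7, []byte(component))
	}
	fmt.Printf("PCR7:            %x\n", pcr7)

	// Change one measured component and the final PCR value differs,
	// so a key sealed against the old value will not unseal.
	var tampered [32]byte
	for _, component := range []string{"firmware", "evil-bootloader", "kernel"} {
		tampered = extend(tampered, []byte(component))
	}
	fmt.Printf("PCR7 (tampered): %x\n", tampered)
	fmt.Println("match:", pcr7 == tampered) // prints "match: false"
}
```

&lt;p&gt;Because every measurement is chained into the previous value, there is no way to "fix up" a PCR after booting a modified component, which is exactly why the TPM refuses to unseal the key in that case.&lt;/p&gt;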

</description>
      <category>linux</category>
      <category>security</category>
    </item>
    <item>
      <title>Introducing inmem – Lightweight Go Cache Engine with Built-in Sharding, Transaction, and Eviction</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Wed, 23 Apr 2025 09:16:12 +0000</pubDate>
      <link>https://dev.to/achu1612/introducing-inmem-lightweight-go-cache-engine-with-built-in-sharding-transaction-and-eviction-42f3</link>
      <guid>https://dev.to/achu1612/introducing-inmem-lightweight-go-cache-engine-with-built-in-sharding-transaction-and-eviction-42f3</guid>
      <description>&lt;p&gt;After spending way too much time, I went with a totally original, never-before-seen name for an in-memory cache: &lt;strong&gt;inmem&lt;/strong&gt;. Creative, right?&lt;br&gt;
Jokes aside, it all started as a side experiment and went through more refactor cycles than I care to admit. But now, it’s grown into something clean, fast, and actually useful.  Introducing &lt;a href="https://github.com/achu-1612/inmem" rel="noopener noreferrer"&gt;inmem&lt;/a&gt;— an embedded caching library written in pure Go, designed with simplicity and performance in mind. It supports &lt;strong&gt;eviction policies&lt;/strong&gt;, &lt;strong&gt;sharding&lt;/strong&gt; for concurrency, and &lt;strong&gt;transactions&lt;/strong&gt; for atomic operations.&lt;/p&gt;
&lt;h2&gt;
  
  
  So, what is inmem?
&lt;/h2&gt;

&lt;p&gt;It is a fast, embedded in-memory caching library written in Go. It's like a mini key-value store, living happily inside your app, keeping your data hot and your latency low.&lt;/p&gt;
&lt;h2&gt;
  
  
  💡 Why I built it?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I was bored&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  ⚙️ Key Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Eviction Support&lt;/strong&gt;:
Keep memory usage in check with optimized TTL-based eviction. Efficient LRU and LFU policies are also supported, and ARC (Adaptive Replacement Cache) is next in line!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sharding&lt;/strong&gt;: 
Out-of-the-box sharding spreads keys across multiple internal maps, so a single map never becomes a bottleneck.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transactions&lt;/strong&gt;: 
Atomic read/write/delete operations across multiple keys. No need to reinvent locking or worry about inconsistent state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thread-safe&lt;/strong&gt;:
Go nuts with goroutines; its internal locking is shard-aware and handles the dirty work for you.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Standalone eviction cache&lt;/strong&gt;: 
Use the eviction cache (LRU or LFU) independently.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistence&lt;/strong&gt;:
Periodically save cache data to disk and load it on startup.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  🚀 Quick Start
&lt;/h2&gt;

&lt;p&gt;Install it the usual way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go get github.com/achu-1612/inmem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧪 Usage
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Basic Usage
&lt;/h4&gt;

&lt;p&gt;A simple example showing how to store and retrieve a custom struct (User) from the cache. Don't forget to &lt;code&gt;gob.Register&lt;/code&gt; your types if you're storing structs!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;
    &lt;span class="s"&gt;"fmt"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/achu-1612/inmem"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
    &lt;span class="n"&gt;Age&lt;/span&gt;  &lt;span class="kt"&gt;int&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;gob&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Register&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Options&lt;/span&gt;&lt;span class="p"&gt;{})&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Achu"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Age&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;25&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"key"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;found&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Found value:"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;fmt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Println&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Value not found"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Sharded Cache
&lt;/h3&gt;

&lt;p&gt;Enable sharding to reduce lock contention and improve performance under concurrent access. You can configure the number of shards as needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Options&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Sharding&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;ShardCount&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
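&lt;p&gt;The idea behind sharding can be sketched independently of &lt;code&gt;inmem&lt;/code&gt;: split the keyspace across several maps, each guarded by its own mutex, so concurrent writers rarely contend on the same lock. This is an illustrative sketch, not &lt;code&gt;inmem&lt;/code&gt;'s internals.&lt;/p&gt;

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const shardCount = 4

type shard struct {
	mu   sync.RWMutex
	data map[string]any
}

type shardedCache struct {
	shards [shardCount]*shard
}

func newShardedCache() *shardedCache {
	c := &shardedCache{}
	for i := range c.shards {
		c.shards[i] = &shard{data: make(map[string]any)}
	}
	return c
}

// shardFor hashes the key so each key consistently maps to one shard.
func (c *shardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.shards[h.Sum32()%shardCount]
}

func (c *shardedCache) Set(key string, value any) {
	s := c.shardFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = value
}

func (c *shardedCache) Get(key string) (any, bool) {
	s := c.shardFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	c := newShardedCache()
	c.Set("key", "value")
	v, ok := c.Get("key")
	fmt.Println(v, ok) // value true
}
```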



&lt;h3&gt;
  
  
  Transactions
&lt;/h3&gt;

&lt;p&gt;Enable transactional support to perform atomic operations (like multi-key updates/deletes) safely. Currently supports optimistic and atomic transactions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Options&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;TransactionType&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TransactionTypeOptimistic&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Persistence
&lt;/h3&gt;

&lt;p&gt;Enable persistence to periodically write cache data to disk. Useful if you want the cache to survive restarts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Options&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Sync&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;           &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;SyncInterval&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Minute&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;SyncFolderPath&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"cache_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Eviction
&lt;/h3&gt;

&lt;p&gt;Configure the cache to automatically evict entries based on policy (e.g., LRU) when the size limit is reached.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Background&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;inmem&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Options&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;EvictionPolicy&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;eviction&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PolicyLRU&lt;/span&gt;&lt;span class="p"&gt;,,&lt;/span&gt;
        &lt;span class="n"&gt;MaxSize&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;panic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"key1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"key2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"key2"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;cache&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"key3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"value"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// key1 will be evicted as key2 has access frequency as 2 and key1 has 1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
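&lt;p&gt;The LRU behavior in the example above is straightforward to sketch with the standard library: a doubly linked list ordered by recency plus a map for O(1) lookup. Illustrative only; &lt;code&gt;inmem&lt;/code&gt;'s eviction package has its own implementation.&lt;/p&gt;

```go
package main

import (
	"container/list"
	"fmt"
)

type pair struct {
	key   string
	value string
}

type lruCache struct {
	capacity int
	order    *list.List               // front = most recently used
	items    map[string]*list.Element // key -> list element
}

func newLRU(capacity int) *lruCache {
	return &lruCache{
		capacity: capacity,
		order:    list.New(),
		items:    make(map[string]*list.Element),
	}
}

func (c *lruCache) Get(key string) (string, bool) {
	el, ok := c.items[key]
	if !ok {
		return "", false
	}
	c.order.MoveToFront(el) // reading refreshes recency
	return el.Value.(pair).value, true
}

func (c *lruCache) Set(key, value string) {
	if el, ok := c.items[key]; ok {
		el.Value = pair{key, value}
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		// Evict the least recently used entry (back of the list).
		oldest := c.order.Back()
		c.order.Remove(oldest)
		delete(c.items, oldest.Value.(pair).key)
	}
	c.items[key] = c.order.PushFront(pair{key, value})
}

func main() {
	c := newLRU(2)
	c.Set("key1", "value")
	c.Set("key2", "value")
	c.Get("key2")          // refresh key2's recency
	c.Set("key3", "value") // evicts key1, the least recently used
	_, ok := c.Get("key1")
	fmt.Println(ok) // false
}
```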



&lt;h3&gt;
  
  
  Standalone Eviction Cache
&lt;/h3&gt;

&lt;p&gt;You can also use the underlying eviction engine independently if all you need is a simple key-value store with LRU or LFU eviction.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;    &lt;span class="n"&gt;cache&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;eviction&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;New&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;eviction&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Options&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Capacity&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Policy&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;   &lt;span class="n"&gt;eviction&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;PolicyLRU&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="c"&gt;// Policy:  eviction.PolicyLFU,&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🎯 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;It all started as a simple experiment, but it’s grown into something I'm genuinely excited to share. Whether you're building a blazing-fast API, a CLI tool, or just need a clean way to manage in-memory data — I hope &lt;code&gt;inmem&lt;/code&gt; makes your life a bit easier (and faster).&lt;/p&gt;

&lt;p&gt;If you give it a try and something breaks — that’s probably my fault.&lt;br&gt;
If you try it and it works flawlessly — tell everyone it was intentional.&lt;/p&gt;

&lt;p&gt;The project is open-source, and I’d love your feedback, suggestions, or contributions. Star it, fork it, break it, fix it — all welcome!&lt;/p&gt;

&lt;h2&gt;
  
  
  🙌 Thanks for Reading!
&lt;/h2&gt;

&lt;p&gt;If you made it this far, you either really like caching or you're just incredibly patient. Either way, thanks for stopping by!&lt;/p&gt;

&lt;p&gt;Have a good day.&lt;/p&gt;

</description>
      <category>go</category>
      <category>cache</category>
    </item>
    <item>
      <title>Fluent-bit as a sidecar in Pod</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Mon, 20 Dec 2021 14:03:10 +0000</pubDate>
      <link>https://dev.to/achu1612/fluent-bit-as-a-sidecar-in-pod-1479</link>
      <guid>https://dev.to/achu1612/fluent-bit-as-a-sidecar-in-pod-1479</guid>
      <description>&lt;p&gt;Most modern applications have some kind of logging mechanism. Likewise, container engines are designed to support logging. The easiest and most adopted logging method for containerized applications is writing to standard output and standard error streams. However, the native functionality provided by a container engine or runtime is usually not enough for a complete logging solution.&lt;/p&gt;

&lt;p&gt;Logging architectures require a separate backend to store, analyze, and query logs. Kubernetes does not provide a native storage solution for log data. Instead, many logging solutions integrate with Kubernetes. &lt;/p&gt;

&lt;p&gt;In this article, the goal is to collect standard output logs and ship them to a centralized store. The log collector can be configured as a sidecar in the pod or as a daemonset in the cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Architecture:
&lt;/h3&gt;

&lt;p&gt;Fluent Bit is used to collect and ship the standard output logs from the pod. It is an open-source log processor and shipper, designed with performance in mind: high throughput with low CPU and memory usage. InfluxDB, an open-source time-series database, is used as the centralized store for the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhszobu1bldmv4xwu9sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhszobu1bldmv4xwu9sr.png" alt="Fluent-bit collecting stdout logs and shipping to Influx DB" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The setup:
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Wait! How to access the stdout logs from a pod?
&lt;/h4&gt;

&lt;p&gt;The standard output logs are written to &lt;code&gt;/var/log/pods/{NAMESPACE}_{POD_NAME}_{POD_ID}/{CONTAINER_NAME}/*.log&lt;/code&gt; files. We just need to tail all the log files present at that location.&lt;/p&gt;

&lt;h4&gt;
  
  
  Fluent Bit Configuration
&lt;/h4&gt;

&lt;p&gt;A configmap is created to hold the fluent bit config template.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config-template&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;fluent-bit-template.conf&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;[INPUT]&lt;/span&gt;
      &lt;span class="s"&gt;Name tail&lt;/span&gt;
      &lt;span class="s"&gt;Path /var/log/pods/${NAMESPACE}_${POD_NAME}_${POD_ID}/${CONTAINER_NAME}/*.log&lt;/span&gt;
    &lt;span class="s"&gt;[OUTPUT]&lt;/span&gt;
      &lt;span class="s"&gt;Name          influxdb&lt;/span&gt;
      &lt;span class="s"&gt;Match         *&lt;/span&gt;
      &lt;span class="s"&gt;Host          ${INFLUX_HOST}&lt;/span&gt;
      &lt;span class="s"&gt;Port          ${INFLUX_PORT}&lt;/span&gt;
      &lt;span class="s"&gt;Database      ${INFLUX_DB}&lt;/span&gt;
      &lt;span class="s"&gt;Sequence_Tag  _seq&lt;/span&gt;
      &lt;span class="s"&gt;http_token    ${INFLUX_TOKEN}&lt;/span&gt;
      &lt;span class="s"&gt;Bucket        ${INFLUX_BUCKET}&lt;/span&gt;
      &lt;span class="s"&gt;Org           ${INFLUX_ORG}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Fluent Bit connection to InfluxDB
&lt;/h4&gt;

&lt;p&gt;A secret object is created containing access configuration and credentials for InfluxDB. Note that values under &lt;code&gt;data&lt;/code&gt; must be base64-encoded; use &lt;code&gt;stringData&lt;/code&gt; if you want to supply plain-text values.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;influx-db-cred&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;INFLUX_HOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;&amp;lt;TO_BE_UPDATED&amp;gt;&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;INFLUX_PORT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;&amp;lt;TO_BE_UPDATED&amp;gt;&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;INFLUX_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;&amp;lt;TO_BE_UPDATED&amp;gt;&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;INFLUX_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;&amp;lt;TO_BE_UPDATED&amp;gt;&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;INFLUX_BUCKET&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;&amp;lt;TO_BE_UPDATED&amp;gt;&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;INFLUX_ORG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;&amp;lt;TO_BE_UPDATED&amp;gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Pod configuration for running Fluent Bit as a sidecar
&lt;/h4&gt;

&lt;p&gt;Volumes are used to hold the fluent-bit config template and the generated config file, and the pod logs present on the host are mounted into the sidecar.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config-template&lt;/span&gt;
    &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config-template&lt;/span&gt;
      &lt;span class="na"&gt;defaultMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0777&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-log&lt;/span&gt;
    &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log/pods&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Directory&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
    &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A valid fluent-bit config file is generated from the defined template: all the required details and credentials are injected into the init container, which renders the config file from the template. Pod information is exposed using the K8s Downward API.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config-manager&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox:latest&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/fluent-bit-template.conf&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config-template&lt;/span&gt;
        &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-template.conf&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/conf&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
    &lt;span class="na"&gt;envFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;secretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;influx-db-cred&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POD_ID&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fieldRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.uid&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POD_NAME&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fieldRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.name&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NAMESPACE&lt;/span&gt;
      &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;fieldRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;fieldPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;metadata.namespace&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CONTAINER_NAME&lt;/span&gt;
      &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*container1&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/bin/sh"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;eval "echo \"$(cat /fluent-bit-template.conf)\"" &amp;gt; /conf/fluent-bit.conf&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fluent-bit sidecar container requires access to the log files created by the pods and to the generated configuration file, both of which are mounted using volumes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent/fluent-bit:1.8&lt;/span&gt;
    &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/fluent-bit/etc/fluent-bit.conf&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit-config&lt;/span&gt;
        &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fluent-bit.conf&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/log/pods&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-log&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Visualizing the logs in InfluxDB dashboard
&lt;/h3&gt;

&lt;p&gt;Running a query on the configured bucket will return a result set containing all the logs shipped by the sidecar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ccys6l7gekezefs260b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ccys6l7gekezefs260b.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Explore different input and output plugins supported by Fluent Bit.&lt;/li&gt;
&lt;li&gt;Use parsers and filters to format the logs.&lt;/li&gt;
&lt;li&gt;Build your own fluent-bit plugin.&lt;/li&gt;
&lt;li&gt;Run Fluent Bit as a daemonset in the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/aka-achu/fluentbit-k8s" rel="noopener noreferrer"&gt;Here&lt;/a&gt; are all the manifests used for the setup.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>logging</category>
      <category>fluent</category>
      <category>devops</category>
    </item>
    <item>
      <title>CI/CD for Kubernetes using GitHub Actions, and Keel</title>
      <dc:creator>Achyuta Das</dc:creator>
      <pubDate>Tue, 13 Apr 2021 12:38:21 +0000</pubDate>
      <link>https://dev.to/achu1612/ci-cd-for-kubernetes-using-github-actions-and-keel-4b7c</link>
      <guid>https://dev.to/achu1612/ci-cd-for-kubernetes-using-github-actions-and-keel-4b7c</guid>
      <description>&lt;p&gt;In this article, the goal is to show how to set up a containerized application in Kubernetes with a very simple CI/CD pipeline to manage deployment using GitHub Actions and Keel.&lt;/p&gt;

&lt;h3&gt;
  
  
  Before we start:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;, also known as K8s, is an open-source container orchestration system for automating deployment, scaling, and management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://keel.sh/" rel="noopener noreferrer"&gt;Keel&lt;/a&gt; is a K8s operator to automate Helm, DaemonSet, Stateful &amp;amp; Deployment updates. It’s open-source, self-hosted with zero requirements of CLI/API, and comes with a beautiful and insightful dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/features/actions/" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; makes it easy to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhjt07ov8xmnrp9obtvo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbhjt07ov8xmnrp9obtvo.png" alt="Alt Text" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are basically two steps in the workflow.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You will push some changes to the GitHub repo. A workflow will be triggered, which will build the docker image of our application and push the image to the Docker registry.&lt;/li&gt;
&lt;li&gt;Keel will get notified of the updated image. Based on the update policy, the deployment will be updated in the configured cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Step 1:
&lt;/h4&gt;

&lt;p&gt;In the first step, we will prepare the GitHub repo to trigger workflows.&lt;/p&gt;

&lt;p&gt;The repo, &lt;a href="https://github.com/aka-achu/go-kube" rel="noopener noreferrer"&gt;aka-achu/go-kube&lt;/a&gt;, contains a simple web application written in Go and a Dockerfile, which is used to build a Docker image of the application. You can maintain any number of environments for your application, like Production, QnA, Staging, and Development. For the sake of simplicity, we will maintain only two deployment environments.&lt;br&gt;
There are only two branches in the repo.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;main branch (for Production environment)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Stable Build&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*.*.*"&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set tag in env&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "TAG=${GITHUB_REF#refs/*/}" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runq/go-kube:${{ env.TAG }}, runq/go-kube:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;a href="https://github.com/aka-achu/go-kube/blob/main/.github/workflows/tag_build.yml" rel="noopener noreferrer"&gt;stable workflow&lt;/a&gt; is triggered when a tag is pushed to GitHub. In the workflow, the tag associated with the commit is used as the Docker image tag.&lt;/p&gt;
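&lt;p&gt;You can check the tag-extraction step locally; the shell parameter expansion below mirrors what the workflow step does (the example ref value is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Simulate the ref that GitHub Actions sets for a tag push
GITHUB_REF="refs/tags/1.2.3"

# Strip the "refs/tags/" prefix, exactly as the workflow step does
TAG="${GITHUB_REF#refs/*/}"

echo "$TAG"   # prints 1.2.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;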

&lt;ul&gt;
&lt;li&gt;dev branch (for Development/Staging environment)
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Development Build&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;dev&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set short commit hash in env&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "COMMIT_SHA=$(echo $GITHUB_SHA | cut -c1-7)" &amp;gt;&amp;gt; $GITHUB_ENV&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;runq/go-kube:dev-${{ env.COMMIT_SHA }}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The &lt;a href="https://github.com/aka-achu/go-kube/blob/main/.github/workflows/dev_build.yml" rel="noopener noreferrer"&gt;dev workflow&lt;/a&gt; is triggered when changes are pushed to the dev branch. For development builds, instead of Git tags, we use the 7-character short commit hash that GitHub commonly displays. The Docker image tag of a development build will be &lt;em&gt;dev-SHORT_COMMIT_SHA&lt;/em&gt;.&lt;/p&gt;
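&lt;p&gt;The short-hash tagging can likewise be reproduced in a plain shell (the example SHA is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Simulate the commit SHA that GitHub Actions sets
GITHUB_SHA="c722d00a1b2c3d4e5f60718293a4b5c6d7e8f901"

# Take the first 7 characters, as the workflow step does
COMMIT_SHA="$(echo "$GITHUB_SHA" | cut -c1-7)"

echo "dev-$COMMIT_SHA"   # prints dev-c722d00
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;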

&lt;p&gt;Both workflows integrate the code, optionally run tests, build the Docker image, and update the image registry. Up to this point, we have covered Continuous Integration and Continuous Delivery.&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2:
&lt;/h4&gt;

&lt;p&gt;In this step, we will automate the deployment update. We will use the K8s LoadBalancer service type, so if you're running an on-premises cluster, you can use &lt;a href="https://metallb.universe.tf/" rel="noopener noreferrer"&gt;MetalLB&lt;/a&gt;, a load-balancer implementation for bare-metal Kubernetes clusters.&lt;/p&gt;
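&lt;p&gt;For reference, here is a minimal sketch of a Layer 2 MetalLB configuration using its ConfigMap-based format; the address range is an assumption and must be replaced with a free range in your own network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Assumed free range; adjust for your network
      - 192.168.1.240-192.168.1.250
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this in place, LoadBalancer services receive an External IP from the pool, just as they would on a cloud provider.&lt;/p&gt;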
&lt;h5&gt;
  
  
  Install keel:
&lt;/h5&gt;

&lt;p&gt;Keel doesn't need a database or a persistent disk; it gets all the required information from your cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://sunstone.dev/keel?namespace&lt;span class="o"&gt;=&lt;/span&gt;keel&amp;amp;username&lt;span class="o"&gt;=&lt;/span&gt;admin&amp;amp;password&lt;span class="o"&gt;=&lt;/span&gt;admin&amp;amp;tag&lt;span class="o"&gt;=&lt;/span&gt;latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above command deploys Keel to the keel namespace with basic authentication and the admin dashboard enabled. Note that the URL must be quoted, since it contains &lt;code&gt;&amp;amp;&lt;/code&gt; characters that the shell would otherwise interpret. You can provide an admin password while applying the manifest, or you can download the &lt;a href="https://sunstone.dev/keel?namespace=keel&amp;amp;username=admin&amp;amp;password=admin&amp;amp;tag=latest" rel="noopener noreferrer"&gt;manifest&lt;/a&gt; and replace the default password.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;            &lt;span class="c1"&gt;# Basic auth (to enable UI/API)&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BASIC_AUTH_USER&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BASIC_AUTH_PASSWORD&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AUTHENTICATED_WEBHOOKS&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;false"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Keel policies:
&lt;/h5&gt;

&lt;p&gt;In Keel, we use policies to define when we want our application/deployment to be updated. Following &lt;a href="https://semver.org/" rel="noopener noreferrer"&gt;semver&lt;/a&gt; best practices allows you to safely automate application updates. Keel supports many different &lt;a href="https://keel.sh/docs/#policies" rel="noopener noreferrer"&gt;policies&lt;/a&gt; for updating resources. For now, we will use only the &lt;code&gt;all&lt;/code&gt; and &lt;code&gt;glob&lt;/code&gt; policies to update our deployments.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;all&lt;/code&gt;: update whenever there is a version bump (1.0.0 -&amp;gt; 1.0.1) or a new prerelease is created (e.g. 1.0.0 -&amp;gt; 1.0.1-rc1)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;glob&lt;/code&gt;: use wildcards to match tags (e.g. &lt;code&gt;dev-*&lt;/code&gt; in our scenario)&lt;/li&gt;
&lt;/ul&gt;

&lt;h5&gt;
  
  
  How Keel will get notified:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Webhooks: when a new image is pushed to the registry, Keel is notified by the webhook configured in DockerHub, and the deployment is updated based on the configured update policy.&lt;/li&gt;
&lt;li&gt;Polling: when an image uses a non-semver tag (e.g. &lt;code&gt;latest&lt;/code&gt;), Keel monitors its SHA digest. If the tag is semver, Keel tracks the repository and notifies providers when new versions are available.&lt;/li&gt;
&lt;/ul&gt;
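&lt;p&gt;If you prefer polling over webhooks, Keel also exposes it through annotations on the deployment. A sketch (the schedule value is just an example; see the Keel docs for the full trigger options):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;...
  annotations:
    keel.sh/policy: "glob:dev-*"
    # poll the registry instead of waiting for webhooks
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@every 10m"
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;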

&lt;h5&gt;
  
  
  Configuring webhook:
&lt;/h5&gt;

&lt;p&gt;First, we need to get the External IP address of the keel service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; keel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output will look something like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz68tmgm7qqp9c2o5lz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvz68tmgm7qqp9c2o5lz2.png" alt="Output of kubectl get all -n keel" width="786" height="212"&gt;&lt;/a&gt;&lt;br&gt;
Now that we have the external address of the &lt;code&gt;service/keel&lt;/code&gt;, we can add a webhook for our repository in DockerHub. The URL for the hook will be &lt;code&gt;http://&amp;lt;External-IP&amp;gt;:9300/v1/webhooks/dockerhub&lt;/code&gt;. Pushing a new Docker image will now trigger an HTTP callback.&lt;/p&gt;

&lt;p&gt;If you don't want to expose your Keel service, the recommended solution is &lt;a href="https://webhookrelay.com/" rel="noopener noreferrer"&gt;webhookrelay&lt;/a&gt;, which can deliver webhooks to your internal Keel service through a sidecar container. &lt;a href="https://keel.sh/docs/#receiving-webhooks-without-public-endpoint" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is how you can set up the sidecar.&lt;/p&gt;

&lt;h5&gt;
  
  
  Configuring deployment manifest:
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;To configure our staging deployment, which runs an image tagged &lt;code&gt;dev-SHORT_COMMIT_SHA&lt;/code&gt;, we will use the &lt;code&gt;glob&lt;/code&gt; policy. We specify the policy/update rule using annotations under the deployment manifest's metadata:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;keel.sh/policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;glob:dev-*"&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;To configure our production deployment, which runs an image with a semver tag, we will use the &lt;code&gt;all&lt;/code&gt; policy. This updates the production deployment whenever it encounters a version bump in the tag. Again, we specify the policy using annotations under the deployment manifest's metadata:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;...&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;keel.sh/policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
&lt;span class="nn"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
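&lt;p&gt;To be explicit about placement: the annotation belongs to the Deployment's own metadata, not the pod template's. A trimmed sketch (resource names are assumptions for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-kube        # assumed deployment name
  annotations:
    # update on any semver version bump or prerelease
    keel.sh/policy: all
spec:
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;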



&lt;h4&gt;
  
  
  Testing the workflow:
&lt;/h4&gt;

&lt;p&gt;Now that the setup is done, a push to the &lt;code&gt;dev&lt;/code&gt; branch will update the staging deployment, and a tagged release will update the production deployment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push changes to the &lt;code&gt;dev&lt;/code&gt; branch&lt;/li&gt;
&lt;li&gt;The development workflow is triggered, which builds a Docker image.&lt;/li&gt;
&lt;li&gt;A Docker image with the tag &lt;code&gt;dev-SHORT_COMMIT_SHA&lt;/code&gt; is pushed to the DockerHub registry.&lt;/li&gt;
&lt;li&gt;Keel gets notified by DockerHub via the webhook.&lt;/li&gt;
&lt;li&gt;Keel validates whether the new image tag satisfies the specified policy (any tag starting with &lt;code&gt;dev-&lt;/code&gt; qualifies for the update, e.g. &lt;code&gt;dev-c722d00&lt;/code&gt;, &lt;code&gt;dev-0ca740e&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;If the tag qualifies, Keel creates a new ReplicaSet with the new image. Once the pods are ready, Keel scales the old ReplicaSet down to 0 replicas.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The release of a tag will trigger a similar flow of events.&lt;/p&gt;

&lt;h4&gt;
  
  
  Visualizing using Keel dashboard:
&lt;/h4&gt;

&lt;p&gt;You can access the dashboard at &lt;code&gt;http://&amp;lt;External-IP&amp;gt;:9300&lt;/code&gt; or by using the NodePort. Use the same credentials that you set while installing Keel.&lt;br&gt;
You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;View resources managed by Keel&lt;/li&gt;
&lt;li&gt;Get an audit of the changes made by Keel&lt;/li&gt;
&lt;li&gt;Change or pause update policies of resources&lt;/li&gt;
&lt;li&gt;Approve updates&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What's next?
&lt;/h3&gt;

&lt;p&gt;Check out the Keel documentation to explore more features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enabling approvals for updates&lt;/li&gt;
&lt;li&gt;Setting up notification pipelines&lt;/li&gt;
&lt;li&gt;Supported webhook triggers and polling&lt;/li&gt;
&lt;li&gt;Using Helm templating for updates&lt;/li&gt;
&lt;li&gt;Updating DaemonSets, StatefulSets, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/aka-achu/go-kube" rel="noopener noreferrer"&gt;Here&lt;/a&gt; are all the workflow and deployment manifests used.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>go</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
