<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Artur Bartosik</title>
    <description>The latest articles on DEV Community by Artur Bartosik (@luafanti).</description>
    <link>https://dev.to/luafanti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F981747%2Ff1c79b71-d556-40ed-82c2-3d83f2306777.jpeg</url>
      <title>DEV Community: Artur Bartosik</title>
      <link>https://dev.to/luafanti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luafanti"/>
    <language>en</language>
    <item>
      <title>Exposing public applications on AWS EKS with Traefik</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Fri, 12 Jan 2024 07:19:13 +0000</pubDate>
      <link>https://dev.to/luafanti/exposing-public-applications-on-aws-eks-with-traefik-1p1c</link>
      <guid>https://dev.to/luafanti/exposing-public-applications-on-aws-eks-with-traefik-1p1c</guid>
      <description>&lt;p&gt;In this short guide, I'll walk you through the process of exposing applications on AWS EKS using Traefik. Despite Traefik often being perceived as having complex configurations and unclear documentation, I aim to equip you with a straightforward setup that you can easily apply to your solution. Let's streamline the process and make exposing applications on the cloud a hassle-free experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;a preconfigured EKS cluster with &lt;a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller"&gt;AWS Load Balancer Controller&lt;/a&gt; and &lt;a href="https://github.com/kubernetes-sigs/external-dns"&gt;External DNS&lt;/a&gt; installed - you can leverage my &lt;a href="https://github.com/luafanti/eksctl-labs-cluster"&gt;GitHub repo&lt;/a&gt;, which will help you set up an EKS cluster with &lt;code&gt;eksctl&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;a Route53 domain with an ACM certificate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As always I encourage you to install &lt;a href="https://github.com/ahmetb/kubectx"&gt;kubectx + kubens&lt;/a&gt; and configure shorter aliases to navigate Kubernetes easily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install &amp;amp; Configure Traefik
&lt;/h2&gt;

&lt;p&gt;You can find all the resources in my &lt;a href="https://github.com/luafanti/eks-traefik"&gt;GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First, check whether the Ingress Class was added during the installation of the AWS Load Balancer Controller from my &lt;a href="https://github.com/luafanti/eksctl-labs-cluster"&gt;EKS setup repo&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ingressclas

&lt;span class="nt"&gt;-------------------------&lt;/span&gt;
NAME      CONTROLLER         
alb       ingress.k8s.aws/alb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use this Ingress Controller to provision the main NLB pointing to our Traefik proxy.&lt;/p&gt;

&lt;p&gt;Before we install the official &lt;a href="https://github.com/traefik/traefik-helm-chart"&gt;Traefik helm chart&lt;/a&gt; let's take a few minutes to look at the helm values file &lt;code&gt;helm/traefik.yaml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#1&lt;/span&gt;
&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;websecure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8443&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;exposedPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;traefik&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9000&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;exposedPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;9000&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;exposedPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5432&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
  &lt;span class="na"&gt;rabbitmq&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5672&lt;/span&gt;
    &lt;span class="na"&gt;expose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;exposedPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5672&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;

&lt;span class="c1"&gt;#2&lt;/span&gt;
&lt;span class="na"&gt;ingressRoute&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;dashboard&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;#3&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;external-dns.alpha.kubernetes.io/hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik.&amp;lt;YOUR_DOMAIN&amp;gt;, rabbitmq.&amp;lt;YOUR_DOMAIN&amp;gt;, postgres.&amp;lt;YOUR_DOMAIN&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;#4&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;external&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-ssl-ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;443,&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;9000"&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-nlb-target-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ip&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-scheme&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;internet-facing&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/aws-load-balancer-ssl-cert&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;YOUR_CERTIFICATE_ARN&amp;gt;&lt;/span&gt; &lt;span class="c1"&gt;#5&lt;/span&gt;
    &lt;span class="na"&gt;service.beta.kubernetes.io/load-balancer-source-ranges&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0/0&lt;/span&gt;
&lt;span class="na"&gt;globalArguments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--api.insecure=false"&lt;/span&gt;

&lt;span class="c1"&gt;#6&lt;/span&gt;
&lt;span class="na"&gt;logs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;general&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;INFO&lt;/span&gt;
  &lt;span class="na"&gt;access&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the top we define the Traefik ports &lt;strong&gt;(#1)&lt;/strong&gt;. These are the network entry points into Traefik. Note the dedicated ports defined for PostgreSQL and for RabbitMQ's AMQP protocol. The &lt;code&gt;web&lt;/code&gt; port responsible for HTTP has been disabled; only the HTTPS &lt;code&gt;websecure&lt;/code&gt; port is left open. Next, we disable the default IngressRoute for the Traefik dashboard &lt;strong&gt;(#2)&lt;/strong&gt;; later we will create a custom route for it. In the Traefik &lt;code&gt;service&lt;/code&gt; section &lt;strong&gt;(#3)&lt;/strong&gt; we define specific annotations for External DNS and the AWS Load Balancer Controller. These annotations are what configure the AWS NLB and the DNS records in Route53. Adjust &lt;code&gt;external-dns.alpha.kubernetes.io/hostname&lt;/code&gt; with your root domain &lt;strong&gt;(#4)&lt;/strong&gt; and &lt;code&gt;service.beta.kubernetes.io/aws-load-balancer-ssl-cert&lt;/code&gt; with your ACM certificate ARN &lt;strong&gt;(#5)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For more information please refer to the documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes-sigs.github.io/external-dns/v0.14.0/annotations/annotations/"&gt;AWS Load Balancer Controller docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/service/annotations/"&gt;External DNS docs&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The logging section &lt;strong&gt;(#6)&lt;/strong&gt; is self-explanatory. I advise setting the log level to &lt;code&gt;DEBUG&lt;/code&gt; during the first setup, as the INFO level is not very verbose.&lt;/p&gt;

&lt;p&gt;Now, after making your adjustments in &lt;code&gt;helm/traefik.yaml&lt;/code&gt;, you are ready to install Traefik:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add traefik https://helm.traefik.io/traefik
helm &lt;span class="nb"&gt;install &lt;/span&gt;traefik traefik/traefik &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;network &lt;span class="nt"&gt;--values&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;helm/traefik.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right after, you should see an NLB being created with the following list of listeners.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c1Rt5vtS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nkuffffu4jtvx9hsla6o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c1Rt5vtS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nkuffffu4jtvx9hsla6o.png" alt="AWS NLB Listeners" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;DNS records should also appear in the Route53 console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5LUQX45S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ws303y3pao912u5fbvcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5LUQX45S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ws303y3pao912u5fbvcs.png" alt="Route53 DNS records" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If something doesn't happen as expected, look for the answer in the logs (complete the truncated pod names with your own pod suffixes).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; aws-load-balancer-controller- &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; external- &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system
kubectl logs &lt;span class="nt"&gt;-f&lt;/span&gt; traefik- &lt;span class="nt"&gt;-n&lt;/span&gt; network
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
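&lt;p&gt;Besides reading logs, you can verify the NLB listeners end-to-end with a plain TCP probe. Below is a minimal Python sketch; the hostnames are placeholders for your own domain, and the port list mirrors the entry points from the Helm values above:&lt;/p&gt;

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Placeholders: replace <YOUR_DOMAIN> with your Route53 zone.
checks = [
    ("traefik.<YOUR_DOMAIN>", 443),    # websecure entry point
    ("traefik.<YOUR_DOMAIN>", 9000),   # Traefik dashboard
    ("postgres.<YOUR_DOMAIN>", 5432),  # PostgreSQL entry point
    ("rabbitmq.<YOUR_DOMAIN>", 5672),  # AMQP entry point
]
for host, port in checks:
    print(f"{host}:{port} -> {'open' if port_open(host, port) else 'closed'}")
```

&lt;p&gt;All four ports should report open once the NLB is provisioned and the DNS records have propagated.&lt;/p&gt;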



&lt;p&gt;The next step is adding the Traefik dashboard route.&lt;/p&gt;

&lt;p&gt;Traefik's &lt;a href="https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/"&gt;CRDs&lt;/a&gt; are installed along with the chart. Check our first route definition in &lt;code&gt;network/traefik-dashboard-route.yaml&lt;/code&gt;. First, we add a Kubernetes Secret &lt;strong&gt;(#1)&lt;/strong&gt; that will hold the credentials for Basic Auth, then we define Basic Auth as a Traefik &lt;code&gt;Middleware&lt;/code&gt; &lt;strong&gt;(#2)&lt;/strong&gt;, and finally we create an &lt;code&gt;IngressRoute&lt;/code&gt; which will be our routing path &lt;strong&gt;(#3)&lt;/strong&gt;. Remember to change the domain in the &lt;code&gt;Host&lt;/code&gt; rule.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#1&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dashboard-basic-auth-creds&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;network&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/basic-auth&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;#2&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik.containo.us/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Middleware&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dashboard-basic-auth&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;network&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;basicAuth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;secret&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dashboard-basic-auth-creds&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="c1"&gt;#3&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;traefik.containo.us/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IngressRoute&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dashboard&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;network&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;entryPoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;traefik&lt;/span&gt;
  &lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Rule&lt;/span&gt;
      &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Host(`traefik.&amp;lt;YOUR_DOMAIN&amp;gt;`)&lt;/span&gt; &lt;span class="c1"&gt;#4&lt;/span&gt;
      &lt;span class="na"&gt;middlewares&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dashboard-basic-auth&lt;/span&gt;
          &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;network&lt;/span&gt;
      &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TraefikService&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;api@internal&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the dashboard route.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; network/traefik-dashboard-route.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds, once Traefik reacts to the new definition, the dashboard should be available in the browser under the URL below (&lt;strong&gt;the trailing slash is mandatory&lt;/strong&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://traefik.&amp;lt;YOUR_DOMAIN&amp;gt;:9000/dashboard/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
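&lt;p&gt;If you prefer to script the check, the header that the &lt;code&gt;basicAuth&lt;/code&gt; middleware expects is just the base64-encoded &lt;code&gt;user:password&lt;/code&gt; pair. A small Python sketch; the &lt;code&gt;admin/admin&lt;/code&gt; credentials come from the Secret above, and the domain is a placeholder:&lt;/p&gt;

```python
import base64
import urllib.request

def basic_auth_header(user: str, password: str) -> dict:
    """Build the Authorization header checked by Basic Auth."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder URL: substitute your domain; the trailing slash is mandatory.
req = urllib.request.Request(
    "https://traefik.<YOUR_DOMAIN>:9000/dashboard/",
    headers=basic_auth_header("admin", "admin"),
)
# Uncomment once your DNS records resolve:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)  # 200 means the route and middleware work
```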



&lt;h2&gt;
  
  
  Install sample apps (RabbitMQ &amp;amp; PostgreSQL)
&lt;/h2&gt;

&lt;p&gt;Now let's add some more apps on different ports to better check Traefik's capabilities.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add bitnami https://charts.bitnami.com/bitnami

helm &lt;span class="nb"&gt;install &lt;/span&gt;rabbitmq bitnami/rabbitmq &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;messaging &lt;span class="nt"&gt;-f&lt;/span&gt; helm/rabbitmq.yaml
helm &lt;span class="nb"&gt;install &lt;/span&gt;postgres bitnami/postgresql &lt;span class="nt"&gt;--create-namespace&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;database &lt;span class="nt"&gt;-f&lt;/span&gt; helm/postgres.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add Traefik routes for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Postgres port 5432&lt;/li&gt;
&lt;li&gt;AMQP RabbitMQ port 5672&lt;/li&gt;
&lt;li&gt;HTTPS RabbitMQ management panel
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; network/postgres-route.yaml
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; network/rabbitmq-route.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List all Traefik routes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get ingressroute,ingressroutetcps &lt;span class="nt"&gt;--all-namespaces&lt;/span&gt;

&lt;span class="nt"&gt;-------------------------&lt;/span&gt;
NAMESPACE   NAME                                          
messaging   ingressroute.traefik.containo.us/rabbitmq-http
network     ingressroute.traefik.containo.us/dashboard     

NAMESPACE   NAME                                            
database    ingressroutetcp.traefik.containo.us/postgres     
messaging   ingressroutetcp.traefik.containo.us/rabbitmq-amqp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see two HTTP routes (the Traefik dashboard and the RabbitMQ panel) and two TCP routes (PostgreSQL and AMQP RabbitMQ). This state should be reflected in the Traefik dashboard. I encourage you to explore it, because it is really rich in information and intuitive.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fyTY-XPM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzmbbzrh26qm2l7wkune.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fyTY-XPM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rzmbbzrh26qm2l7wkune.png" alt="Traefik dashboard" width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final test will be to check if routing to test applications works correctly.&lt;/p&gt;

&lt;p&gt;To check PostgreSQL try to connect with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;psql -h postgres.&amp;lt;YOUR_DOMAIN&amp;gt; -p 5432 -U root&lt;/span&gt;
&lt;span class="c1"&gt;# pass 'root' password&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To check RabbitMQ AMQP connectivity, execute my Python script. Update the URL in &lt;code&gt;utils/rabbitmq_ampq_test.py&lt;/code&gt; first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;python3 utils/rabbitmq_ampq_test.py&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The RabbitMQ admin panel should be accessible at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;https://rabbitmq.&amp;lt;YOUR_DOMAIN&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;As you can see, working with Traefik can be simple. In our lab we needed only a single AWS NLB to expose several applications from different Kubernetes namespaces and on different ports - a good benchmark of simplicity. This approach also has limitations, but that's a topic for another time. Be aware that there are other solutions in this Ingress segment, such as the &lt;a href="https://github.com/kubernetes/ingress-nginx"&gt;Nginx Ingress Controller&lt;/a&gt; and the &lt;a href="https://github.com/haproxytech/kubernetes-ingress"&gt;HAProxy Ingress Controller&lt;/a&gt;. As always, I encourage you to experiment and form your own opinion. I hope I have helped you get through the basic setup more conveniently.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>eks</category>
      <category>traefik</category>
    </item>
    <item>
      <title>API Caching with ElastiCache Redis &amp; AWS Lambda</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Wed, 08 Mar 2023 22:49:29 +0000</pubDate>
      <link>https://dev.to/luafanti/api-caching-with-elasticache-redis-aws-lambda-82c</link>
      <guid>https://dev.to/luafanti/api-caching-with-elasticache-redis-aws-lambda-82c</guid>
      <description>&lt;p&gt;In this article, we will not only explore the benefits of using &lt;code&gt;Redis&lt;/code&gt; as a caching solution for a REST API, but I’ll also provide a practical example of how to implement Redis caching in a serverless environment using &lt;code&gt;AWS Lambda&lt;/code&gt; with &lt;code&gt;TypeScript&lt;/code&gt;. I’ll demonstrate how to call an external API and cache response using &lt;code&gt;ElastiCache Redis&lt;/code&gt;, resulting in faster response times and improved reliability. By following my example, you will learn how to integrate Redis caching into your own serverless applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code example
&lt;/h2&gt;

&lt;p&gt;If you want to jump directly to the code sample and experiment on a live example, the repo is available on my GitHub right below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/luafanti/elasticache-redis-and-lambda" rel="noopener noreferrer"&gt;https://github.com/luafanti/elasticache-redis-and-lambda&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment Overview
&lt;/h3&gt;

&lt;p&gt;The infrastructure in the demo is provisioned using a CloudFormation template. A stack created with the default parameters will provide the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a VPC with a single public and a single private subnet, plus other base VPC components&lt;/li&gt;
&lt;li&gt;NAT Gateway&lt;/li&gt;
&lt;li&gt;ElastiCache Redis&lt;/li&gt;
&lt;li&gt;TypeScript Lambda &amp;amp; HTTP API Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Please note that the stack includes components (NAT &amp;amp; Redis) that will incur hourly costs even if you have the AWS Free Tier.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can also create a stack in &lt;code&gt;MultiAZ&lt;/code&gt; mode. Then the stack will create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC with two public and two private subnets&lt;/li&gt;
&lt;li&gt;two NAT Gateways, one per private subnet&lt;/li&gt;
&lt;li&gt;ElastiCache Redis in Multi-AZ mode - one master and one replica instance&lt;/li&gt;
&lt;li&gt;the same TypeScript Lambda&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;In Multi-AZ mode, costs will be doubled due to the two NAT Gateways and two Redis instances.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Installation is simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install 
&lt;/span&gt;sam build
sam deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To remove the whole stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sam delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important points to clarify&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda needs to be deployed inside the VPC if you want to connect to managed Redis. ElastiCache is a service designed to be used internally within a VPC.&lt;/li&gt;
&lt;li&gt;A NAT Gateway is required in this setup. A Lambda function running inside a VPC is never assigned a public IP address, so it can't connect to anything outside the VPC - in this case, our external API. The NAT Gateway solves this problem.&lt;/li&gt;
&lt;li&gt;If you want to connect to the Redis cluster, e.g. from a local CLI, you have to set up a VPN connection or a bastion host.&lt;/li&gt;
&lt;li&gt;Top-level await isn’t supported in this Lambda sample because the setup uses &lt;code&gt;CommonJS&lt;/code&gt; packaging. To enable it, esbuild needs to be configured to output ES module files with the &lt;code&gt;.mjs&lt;/code&gt; extension.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  In-Depth
&lt;/h2&gt;

&lt;p&gt;REST APIs have become an integral part of our systems. &lt;code&gt;GraphQL&lt;/code&gt; and &lt;code&gt;gRPC&lt;/code&gt; can be good replacements for old-fashioned REST in some respects, but personally, I still can't imagine avoiding this kind of API entirely. REST is ubiquitous, and we often integrate our services this way. Sometimes it's an internal API, sometimes an external API. The latter case is more difficult because we have no control over the performance and availability of that API.&lt;/p&gt;

&lt;p&gt;A long response time from an external API automatically increases the overall request-handling time in our service. Unavailability of the external API impacts our app even more and requires special handling. The antidote to all this evil may be an additional layer of caching... and this is where Redis comes into the game.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Redis as a cache?
&lt;/h3&gt;

&lt;p&gt;The main reason for choosing Redis is its high performance. Redis is considered one of the fastest key-value databases. There are several reasons behind that efficiency:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM-based data store.&lt;/strong&gt; RAM access is several orders of magnitude faster than HDD or even SSD access. Not to mention access over API, which is burdened with the highest latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w2qjot2auzg842ixzrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w2qjot2auzg842ixzrd.png" alt="Redis data access pyramid"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Efficient data structures&lt;/strong&gt;. Redis offers various data structures such as Lists, Sets, Hashes, Bitmaps, etc. All types are implemented in C and allocate memory through a custom malloc wrapper called &lt;a href="https://github.com/antirez/redis/blob/unstable/src/zmalloc.h" rel="noopener noreferrer"&gt;zmalloc&lt;/a&gt;, which allows Redis to choose different allocation libraries depending on its needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven architecture.&lt;/strong&gt; Redis uses a reactor design pattern to multiplex I/O to handle thousands of incoming requests at the same time using just a single thread.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The additional reason in the case of AWS is that the Redis database is available here as a managed service - &lt;a href="https://aws.amazon.com/elasticache/redis/" rel="noopener noreferrer"&gt;ElastiCache Redis&lt;/a&gt;. It simplifies configuration and maintenance, and shifts some of the responsibility to the provider.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step caching flow
&lt;/h2&gt;

&lt;p&gt;The flow is very simple: &lt;code&gt;ProxyLambda&lt;/code&gt; first checks whether the response from the external API already exists in the cache. As the cache key in this simple example, I just use the full request path, including query params. For a complex API I recommend a more advanced key-generation strategy. If the response object exists in the Redis cache, Lambda returns it directly. Otherwise, Lambda calls the external API as before, but additionally saves the response to the Redis cache. Thanks to this, a subsequent request to &lt;code&gt;ProxyLambda&lt;/code&gt; for the same resource will be served from the cache instead of calling the external API.&lt;/p&gt;
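&lt;p&gt;The cache-aside logic described above can be sketched in a few lines. This is not the repo's TypeScript handler - just a minimal Python illustration of the same flow, with a plain dict standing in for the Redis client so the idea stands alone:&lt;/p&gt;

```python
# Minimal cache-aside sketch; fake_redis stands in for an ElastiCache client.
fake_redis = {}

def handle_request(path: str, call_external_api) -> str:
    """Serve from the cache when possible, otherwise call the API and cache."""
    cached = fake_redis.get(path)       # cache key = full request path
    if cached is not None:
        return cached                   # cache hit: skip the external API
    response = call_external_api(path)  # cache miss: call the external API
    fake_redis[path] = response         # save for subsequent requests
    return response
```

&lt;p&gt;The real handler does the same with &lt;code&gt;redis.get&lt;/code&gt; / &lt;code&gt;redis.set&lt;/code&gt; against ElastiCache; only the storage backend differs.&lt;/p&gt;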

&lt;p&gt;Basically, the two diagrams below should explain it all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0weieuq7zg9x0v9vskm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0weieuq7zg9x0v9vskm6.png" alt="API caching with Redis - component diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feegrfyuyjh1sfzjqr11u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feegrfyuyjh1sfzjqr11u.png" alt="API caching with Redis -  sequence diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is still the question of invalidating records in the cache. The strategy must fit your requirements. In this demo, objects in the cache never expire. One solution would be to hardcode an expiration time on save (&lt;code&gt;redis.setEx(cacheKey, 86400, apiResponse)&lt;/code&gt;). A more elegant way would be to create a dedicated invalidation Lambda that removes objects from the cache when it receives an event indicating that a resource in the External API has been removed or modified.&lt;/p&gt;
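&lt;p&gt;The event-driven variant can be sketched in a few lines. The event shape (a &lt;code&gt;resourcePath&lt;/code&gt; field matching the cache key) is an assumption for illustration - in practice the event would arrive via SNS or EventBridge from whatever publishes the External API's change notifications.&lt;/p&gt;

```javascript
// Sketch of a dedicated invalidation Lambda (hypothetical event shape).
// It removes the cached entry so the next read falls through to the
// External API and repopulates the cache with fresh data.
async function invalidationHandler(event, redis) {
  // redis.del returns the number of keys actually removed.
  const removed = await redis.del(event.resourcePath);
  return { removedKeys: removed };
}
```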

&lt;h3&gt;
  
  
  Pros and Cons of the Solution
&lt;/h3&gt;

&lt;p&gt;🟢 Better performance. Responses from the Redis cache can be much faster than from the External API. Latency is also stable and doesn't depend on the current API load.&lt;/p&gt;

&lt;p&gt;🟢 Improved reliability. Our facade API can respond even if the External API is down.&lt;/p&gt;

&lt;p&gt;🟢 Less load on the External API, as fewer requests reach it.&lt;/p&gt;

&lt;p&gt;🟡 Additional work to manage and maintain ElastiCache.&lt;/p&gt;

&lt;p&gt;🟡 Additional cost of ElastiCache cluster.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>elasticache</category>
      <category>redis</category>
      <category>caching</category>
    </item>
    <item>
      <title>Cold starts with SnapStart for Java Frameworks (Spring Boot vs Quarkus vs Micronaut)</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Fri, 17 Feb 2023 11:11:58 +0000</pubDate>
      <link>https://dev.to/luafanti/cold-starts-with-snapstart-for-java-frameworks-spring-boot-vs-quarkus-vs-micronaut-3hbf</link>
      <guid>https://dev.to/luafanti/cold-starts-with-snapstart-for-java-frameworks-spring-boot-vs-quarkus-vs-micronaut-3hbf</guid>
      <description>&lt;p&gt;At the last &lt;strong&gt;re:Invent&lt;/strong&gt; 2022 AWS gave a lot of attention to the term Serverless. The &lt;a href="https://youtu.be/RfvL_423a-I"&gt;main Keynote&lt;/a&gt; of AWS CTO &lt;strong&gt;Dr. Werner Vogels&lt;/strong&gt; was very saturated with asynchronous approach and event-driven architecture. Also, new announcements were very closely related to these topics e.g. Step Functions &amp;amp; Event Bridge improvements.&lt;/p&gt;

&lt;p&gt;Apart from all these trendy announcements, AWS also shows that it invests in technologies across a broad spectrum. It doesn't cut itself off from Java, among other technologies. This is evidenced by one of the biggest Serverless announcements - &lt;a href="https://aws.amazon.com/blogs/aws/new-accelerate-your-lambda-functions-with-lambda-snapstart/"&gt;SnapStart&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since I have been associated with Java from the beginning of my professional career, I was very excited about this announcement. I was aware that, so far, JVM-based languages have not been the main players for Lambda functions, mainly because of their long cold starts.&lt;br&gt;
I hadn't checked in a long time whether anything had improved, so I decided that the release of SnapStart was a good time to evaluate cold starts for Java and its frameworks - Spring Boot, Quarkus, and Micronaut.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is SnapStart?
&lt;/h2&gt;

&lt;p&gt;Typically, AWS Lambda sets up a new execution environment each time a function is first invoked or when the function is scaled up to handle increased traffic. As you probably know, applications written in Java need some time to initialize and start up before they can accept traffic. This is the nature of the JVM. SnapStart was created to address this issue.&lt;/p&gt;

&lt;p&gt;When SnapStart is enabled, Lambda creates a snapshot of the initialized execution environment (memory and disk state) ahead of time and persists it in a cache for low-latency access. This eliminates the need for the function to spend time on initialization when an event arrives, as Lambda can quickly resume from the persisted snapshot instead.&lt;/p&gt;

&lt;p&gt;I said "ahead of time" because Snapshot creation happens when you publish a function version and SnapStart works only for the published version of the Lambda function (can’t just use &lt;code&gt;$LATEST&lt;/code&gt;).&lt;br&gt;
Normally &lt;code&gt;Init&lt;/code&gt; phase is the stage during Lambda performs multiple tasks like preparing the runtime container, downloading function code, initializing it, and so on… and then moving to the next phase. &lt;code&gt;Init&lt;/code&gt; phase is Limited to &lt;strong&gt;10 seconds&lt;/strong&gt;. When SnapStart is activated, the &lt;code&gt;Init&lt;/code&gt; phase happens earlier - yes, yes… when you publish a function version. In this case, 10-second timeout doesn't apply. Snapshot initialization can take up to &lt;strong&gt;15 minutes&lt;/strong&gt;.&lt;/p&gt;
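&lt;p&gt;For completeness, turning this on is just a configuration change followed by publishing a version. A sketch with the AWS CLI (the function name is a placeholder):&lt;/p&gt;

```shell
# Enable SnapStart for a Java function; snapshots apply only to
# published versions, never to $LATEST.
aws lambda update-function-configuration \
  --function-name my-java-fn \
  --snap-start ApplyOn=PublishedVersions

# Publishing the version triggers the Init phase and snapshot creation
# (this is the step that may take up to 15 minutes).
aws lambda publish-version --function-name my-java-fn
```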

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--O0DOZyIL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m71hyt29i11k6alak60z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--O0DOZyIL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m71hyt29i11k6alak60z.png" alt="AWS Lambda SnapStart lifecycle diagram " width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Someone more curious might ask &lt;em&gt;how is it even possible to create a snapshot of an initialized function?&lt;/em&gt; The answer is hidden behind a few magic terms.&lt;br&gt;
First of all, &lt;a href="https://wiki.openjdk.org/display/crac"&gt;CRaC&lt;/a&gt; (Coordinated Restore at Checkpoint) - an open-source project led by &lt;strong&gt;OpenJDK&lt;/strong&gt;. It is focused on creating a Java API for saving and restoring the state of a JVM, including the currently running application - so-called &lt;code&gt;checkpointing&lt;/code&gt;. CRaC builds on the next key project - &lt;a href="https://criu.org/Main_Page"&gt;CRIU (Checkpoint/Restore in Userspace)&lt;/a&gt; - which allows an application running on a Linux system to be paused and restarted at some later point in time, potentially on a different machine. The last key piece of the SnapStart puzzle is &lt;a href="https://firecracker-microvm.github.io/"&gt;Firecracker&lt;/a&gt; and its &lt;strong&gt;microVMs&lt;/strong&gt;. SnapStart uses micro Virtual Machine (microVM) snapshots to checkpoint and restore full applications. Interestingly, it turns out that Amazon engineers from the Firecracker and Corretto (AWS JDK distribution) teams were involved in the CRaC project at an early stage.&lt;/p&gt;

&lt;p&gt;This means that AWS took its first steps toward addressing the Java cold-start problem long ago. It confirms my thesis that AWS invests in breakthrough technologies and knows that Java and the JVM are still important in the IT market.&lt;/p&gt;

&lt;p&gt;But unfortunately, not everything is so rosy. SnapStart and the methods it is based on introduce some challenges, because it operates on a memory dump:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Randomness&lt;/strong&gt; - results of &lt;code&gt;java.util.Random&lt;/code&gt; operations can be identical across environments restored from the same snapshot, so use &lt;code&gt;java.security.SecureRandom&lt;/code&gt; instead, which Amazon handles in Corretto. But if one of your dependencies uses the former, you may still be in trouble.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keeping connections&lt;/strong&gt; - the state of connections that your function establishes during the initialization phase isn't guaranteed when Lambda resumes from a snapshot. In most cases, network connections that an AWS SDK establishes resume automatically, but for other connections you need to handle this on your own.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stale credentials&lt;/strong&gt; - the created snapshot also captures things like injected secrets and passwords (of course, the whole snapshot is encrypted). Passwords can be rotated automatically, but the snapshot may be used for a long period of time without knowing anything about the change. The snapshot is immutable, so it will continue to use stale credentials. This applies not only to secrets: you need to protect yourself in the same way for any frequently changing data that you pull from external sources into function memory. But don't worry, you have tools to handle it, e.g. a runtime hook that refreshes state after restore.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other unsupported AWS Lambda features&lt;/strong&gt; - provisioned concurrency, the arm64 architecture, EFS, and ephemeral storage larger than 512 MB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm curious whether SnapStart will remain a feature reserved for the Java runtime, or whether AWS is preparing a similar trick for other runtimes. Presumably, some runtimes can't use snapshotting in quite the same way as the JVM, and for others, e.g. Golang, it wouldn't make sense to even attempt it. But I would like to see a SnapStart-like solution for Node, which can also have cold-start hiccups... especially with a large number of dependencies.&lt;/p&gt;




&lt;h2&gt;
  
  
  Spring Boot vs Quarkus vs Micronaut - introduction of competitors
&lt;/h2&gt;

&lt;p&gt;Before we get to the merits, a brief introduction of the competitors. Overall, all three frameworks are similar in terms of functionality and are suitable for building web apps and microservices, but they have different design goals and trade-offs.&lt;/p&gt;

&lt;p&gt;BTW, all the source code, as always, can be found on my &lt;a href="https://github.com/luafanti/serverless-java-frameworks"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Spring Boot
&lt;/h3&gt;

&lt;p&gt;Spring Boot is the most widely adopted and well-established of the three frameworks. The Pivotal product also has the largest and most active community, and it has been around for over a decade. It provides a wide range of features and is highly configurable, making it a good choice for large and complex applications. However, Spring Boot is more resource-intensive than Quarkus and Micronaut. Spring Boot uses a traditional Just-in-Time (JIT) compilation approach, which can result in longer startup times compared to Quarkus and Micronaut, which use Ahead-of-Time (AOT) compilation. AOT compilation pre-compiles the code at build time, resulting in faster startup times and a smaller memory footprint. Also, runtime dependency injection adds some overhead and complexity to Spring-based workloads.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quarkus
&lt;/h3&gt;

&lt;p&gt;Quarkus is a relatively new framework that aims to provide the same functionality as Spring Boot, but with a smaller footprint and faster startup time. The project, initiated by Red Hat, was created with native compilation for GraalVM in mind. It aims to be an effective platform for serverless, cloud, and Kubernetes environments. Quarkus uses Ahead-of-Time (AOT) compilation to reduce startup time and memory usage. The community strongly appreciates the speed and convenience of development, and more and more projects boast of migrating microservice workloads from Spring Boot to Quarkus. Finally, I can add that the documentation is really good.&lt;/p&gt;

&lt;h3&gt;
  
  
  Micronaut
&lt;/h3&gt;

&lt;p&gt;Micronaut, like Quarkus, is a relatively new framework, but it has been gaining popularity in recent years. It has a very Spring-inspired programming model. It also uses Reactor (instead of Vert.x, which Quarkus uses). So if you are coming from the Spring world, you will find many similar patterns and techniques in Micronaut, e.g. Mono and Flux from Reactor Core. At the same time, Micronaut aims to avoid the downsides of Spring: it minimizes the use of reflection and proxies and doesn't use runtime bytecode generation. The sources I found say that performance is a tiny bit better with Quarkus, but the difference is negligible.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measurements and charts
&lt;/h2&gt;

&lt;p&gt;Let's move on to the main point - the measurements. They were the main reason for writing this article. I wanted to measure what cold starts look like in 2023 for Java and its most popular frameworks. How much does SnapStart improve things? Does SnapStart also affect warm starts? How do resource changes (Lambda memory) affect cold &amp;amp; warm start performance? I hope the charts and tables below will help you answer these questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vanilla Java
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;Non SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;754.9&lt;/td&gt;
            &lt;td&gt;790.8&lt;/td&gt;
            &lt;td&gt;826.9&lt;/td&gt;
            &lt;td&gt;904&lt;/td&gt;
            &lt;td&gt;11.3&lt;/td&gt;
            &lt;td&gt;37&lt;/td&gt;
            &lt;td&gt;228.2&lt;/td&gt;
            &lt;td&gt;275.4&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;566.6&lt;/td&gt;
            &lt;td&gt;599.9&lt;/td&gt;
            &lt;td&gt;666.9&lt;/td&gt;
            &lt;td&gt;676.1&lt;/td&gt;
            &lt;td&gt;1.9&lt;/td&gt;
            &lt;td&gt;15.4&lt;/td&gt;
            &lt;td&gt;107.1&lt;/td&gt;
            &lt;td&gt;328.2&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;549.4&lt;/td&gt;
            &lt;td&gt;474.3&lt;/td&gt;
            &lt;td&gt;502.1&lt;/td&gt;
            &lt;td&gt;529.5&lt;/td&gt;
            &lt;td&gt;1.6&lt;/td&gt;
            &lt;td&gt;8.4&lt;/td&gt;
            &lt;td&gt;50.9&lt;/td&gt;
            &lt;td&gt;97.3&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;426.1&lt;/td&gt;
            &lt;td&gt;445.5&lt;/td&gt;
            &lt;td&gt;466.1&lt;/td&gt;
            &lt;td&gt;489.7&lt;/td&gt;
            &lt;td&gt;1.6&lt;/td&gt;
            &lt;td&gt;3.3&lt;/td&gt;
            &lt;td&gt;20.6&lt;/td&gt;
            &lt;td&gt;25.5&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;301.7&lt;/td&gt;
            &lt;td&gt;327.7&lt;/td&gt;
            &lt;td&gt;415.9&lt;/td&gt;
            &lt;td&gt;450.4&lt;/td&gt;
            &lt;td&gt;1.5&lt;/td&gt;
            &lt;td&gt;2.5&lt;/td&gt;
            &lt;td&gt;13.2&lt;/td&gt;
            &lt;td&gt;21.1&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;705.3&lt;/td&gt;
            &lt;td&gt;773.3&lt;/td&gt;
            &lt;td&gt;817.8&lt;/td&gt;
            &lt;td&gt;896.2&lt;/td&gt;
            &lt;td&gt;17.4&lt;/td&gt;
            &lt;td&gt;52.6&lt;/td&gt;
            &lt;td&gt;268&lt;/td&gt;
            &lt;td&gt;479.7&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;401.6&lt;/td&gt;
            &lt;td&gt;447.9&lt;/td&gt;
            &lt;td&gt;473.2&lt;/td&gt;
            &lt;td&gt;536.6&lt;/td&gt;
            &lt;td&gt;7.7&lt;/td&gt;
            &lt;td&gt;20.3&lt;/td&gt;
            &lt;td&gt;120.2&lt;/td&gt;
            &lt;td&gt;214.3&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;231.9&lt;/td&gt;
            &lt;td&gt;261.9&lt;/td&gt;
            &lt;td&gt;311.7&lt;/td&gt;
            &lt;td&gt;1174.6&lt;/td&gt;
            &lt;td&gt;1.7&lt;/td&gt;
            &lt;td&gt;9.7&lt;/td&gt;
            &lt;td&gt;53.5&lt;/td&gt;
            &lt;td&gt;135.6&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;203.9&lt;/td&gt;
            &lt;td&gt;231.6&lt;/td&gt;
            &lt;td&gt;367.3&lt;/td&gt;
            &lt;td&gt;399.1&lt;/td&gt;
            &lt;td&gt;1.6&lt;/td&gt;
            &lt;td&gt;3.7&lt;/td&gt;
            &lt;td&gt;24.8&lt;/td&gt;
            &lt;td&gt;53.4&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;241.5&lt;/td&gt;
            &lt;td&gt;353.4&lt;/td&gt;
            &lt;td&gt;484.1&lt;/td&gt;
            &lt;td&gt;501.4&lt;/td&gt;
            &lt;td&gt;1.5&lt;/td&gt;
            &lt;td&gt;2.6&lt;/td&gt;
            &lt;td&gt;14&lt;/td&gt;
            &lt;td&gt;25.7&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dBj78PYc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ikfu637xkip7pdrjcwto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dBj78PYc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ikfu637xkip7pdrjcwto.png" alt="Java Cold start median - SnapStart comparison" width="610" height="374"&gt;&lt;/a&gt;&lt;/p&gt;
Java Cold start median



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--f5R3Upap--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yyhjiixn5tuo6wh52ay4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--f5R3Upap--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yyhjiixn5tuo6wh52ay4.png" alt="Java Warm start median - SnapStart comparison" width="610" height="376"&gt;&lt;/a&gt;&lt;/p&gt;
Java Warm start median






&lt;h3&gt;
  
  
  Spring Boot
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;Non SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;100%&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
&lt;td&gt;10.5%&lt;/td&gt;
            &lt;td&gt;5584.1&lt;/td&gt;
            &lt;td&gt;6867.7&lt;/td&gt;
            &lt;td&gt;7119.3&lt;/td&gt;
            &lt;td&gt;7157.4&lt;/td&gt;
            &lt;td&gt;28.3&lt;/td&gt;
            &lt;td&gt;1135.7&lt;/td&gt;
            &lt;td&gt;3582.5&lt;/td&gt;
            &lt;td&gt;3808.8&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;3515.4&lt;/td&gt;
            &lt;td&gt;3647.8&lt;/td&gt;
            &lt;td&gt;3725.2&lt;/td&gt;
            &lt;td&gt;3762.8&lt;/td&gt;
            &lt;td&gt;12.1&lt;/td&gt;
            &lt;td&gt;20.2&lt;/td&gt;
            &lt;td&gt;52.9&lt;/td&gt;
            &lt;td&gt;180.6&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;3396.6&lt;/td&gt;
            &lt;td&gt;3512.3&lt;/td&gt;
            &lt;td&gt;3599.6&lt;/td&gt;
            &lt;td&gt;3599.6&lt;/td&gt;
            &lt;td&gt;3.9&lt;/td&gt;
            &lt;td&gt;9.2&lt;/td&gt;
            &lt;td&gt;18.5&lt;/td&gt;
            &lt;td&gt;94.7&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;2366.4&lt;/td&gt;
            &lt;td&gt;2525.2&lt;/td&gt;
            &lt;td&gt;3127.5&lt;/td&gt;
            &lt;td&gt;3191.1&lt;/td&gt;
            &lt;td&gt;3.4&lt;/td&gt;
            &lt;td&gt;5&lt;/td&gt;
            &lt;td&gt;10.6&lt;/td&gt;
            &lt;td&gt;33.4&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;100%&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;60.3%&lt;/td&gt;
            &lt;td&gt;3920&lt;/td&gt;
            &lt;td&gt;5027.8&lt;/td&gt;
            &lt;td&gt;5149.8&lt;/td&gt;
            &lt;td&gt;5173.7&lt;/td&gt;
            &lt;td&gt;2399.7&lt;/td&gt;
            &lt;td&gt;3717.8&lt;/td&gt;
            &lt;td&gt;3931.8&lt;/td&gt;
            &lt;td&gt;4141.4&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;515.3&lt;/td&gt;
            &lt;td&gt;554.2&lt;/td&gt;
            &lt;td&gt;598.7&lt;/td&gt;
            &lt;td&gt;611.4&lt;/td&gt;
            &lt;td&gt;5.6&lt;/td&gt;
            &lt;td&gt;18.6&lt;/td&gt;
            &lt;td&gt;37.1&lt;/td&gt;
            &lt;td&gt;54.3&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;347.3&lt;/td&gt;
            &lt;td&gt;381.1&lt;/td&gt;
            &lt;td&gt;451.6&lt;/td&gt;
            &lt;td&gt;1270&lt;/td&gt;
            &lt;td&gt;3.8&lt;/td&gt;
            &lt;td&gt;9.1&lt;/td&gt;
            &lt;td&gt;17.1&lt;/td&gt;
            &lt;td&gt;32.3&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;350.4&lt;/td&gt;
            &lt;td&gt;417.3&lt;/td&gt;
            &lt;td&gt;604.7&lt;/td&gt;
            &lt;td&gt;641&lt;/td&gt;
            &lt;td&gt;3.6&lt;/td&gt;
            &lt;td&gt;5.7&lt;/td&gt;
            &lt;td&gt;16.9&lt;/td&gt;
            &lt;td&gt;65.1&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7O_R_8US--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c27g9rtgepob7888ep16.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7O_R_8US--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c27g9rtgepob7888ep16.png" alt="Spring Boot Cold start median - SnapStart comparison" width="765" height="469"&gt;&lt;/a&gt;&lt;/p&gt;
Spring Boot Cold start median



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mCWwEU34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvo5gu9gm3lp8ikx5t2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mCWwEU34--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvo5gu9gm3lp8ikx5t2y.png" alt="Spring Boot Warm start median - SnapStart comparison" width="764" height="469"&gt;&lt;/a&gt;&lt;/p&gt;
Spring Boot Warm start median






&lt;h3&gt;
  
  
  Quarkus
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;Non SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;100%&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;3452.7&lt;/td&gt;
            &lt;td&gt;3543.6&lt;/td&gt;
            &lt;td&gt;3732.7&lt;/td&gt;
            &lt;td&gt;3757.3&lt;/td&gt;
            &lt;td&gt;50.5&lt;/td&gt;
            &lt;td&gt;73.1&lt;/td&gt;
            &lt;td&gt;213.8&lt;/td&gt;
            &lt;td&gt;317.2&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;2738.2&lt;/td&gt;
            &lt;td&gt;2818.7&lt;/td&gt;
            &lt;td&gt;2890&lt;/td&gt;
            &lt;td&gt;2899.4&lt;/td&gt;
            &lt;td&gt;16.5&lt;/td&gt;
            &lt;td&gt;34&lt;/td&gt;
            &lt;td&gt;93.1&lt;/td&gt;
            &lt;td&gt;189.5&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;2305.7&lt;/td&gt;
            &lt;td&gt;2387.7&lt;/td&gt;
            &lt;td&gt;2512.6&lt;/td&gt;
            &lt;td&gt;4079.8&lt;/td&gt;
            &lt;td&gt;5.8&lt;/td&gt;
            &lt;td&gt;11.4&lt;/td&gt;
            &lt;td&gt;14.5&lt;/td&gt;
            &lt;td&gt;70.2&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;1676.2&lt;/td&gt;
            &lt;td&gt;1823&lt;/td&gt;
            &lt;td&gt;2028.8&lt;/td&gt;
            &lt;td&gt;2046.3&lt;/td&gt;
            &lt;td&gt;4.4&lt;/td&gt;
            &lt;td&gt;7.7&lt;/td&gt;
            &lt;td&gt;18.4&lt;/td&gt;
            &lt;td&gt;42.6&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;100%&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;1918.5&lt;/td&gt;
            &lt;td&gt;1977.8&lt;/td&gt;
            &lt;td&gt;2034.1&lt;/td&gt;
            &lt;td&gt;2063.4&lt;/td&gt;
            &lt;td&gt;48.8&lt;/td&gt;
            &lt;td&gt;63.5&lt;/td&gt;
            &lt;td&gt;168.2&lt;/td&gt;
            &lt;td&gt;264.1&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;1059.1&lt;/td&gt;
            &lt;td&gt;1115.8&lt;/td&gt;
            &lt;td&gt;1144.8&lt;/td&gt;
            &lt;td&gt;1148.2&lt;/td&gt;
            &lt;td&gt;16&lt;/td&gt;
            &lt;td&gt;33.7&lt;/td&gt;
            &lt;td&gt;94.6&lt;/td&gt;
            &lt;td&gt;172.5&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;583.38&lt;/td&gt;
            &lt;td&gt;622&lt;/td&gt;
            &lt;td&gt;690.4&lt;/td&gt;
            &lt;td&gt;711.2&lt;/td&gt;
            &lt;td&gt;5.5&lt;/td&gt;
            &lt;td&gt;13.6&lt;/td&gt;
            &lt;td&gt;37.9&lt;/td&gt;
            &lt;td&gt;64.3&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;455.7&lt;/td&gt;
            &lt;td&gt;498.6&lt;/td&gt;
            &lt;td&gt;556&lt;/td&gt;
            &lt;td&gt;566.1&lt;/td&gt;
            &lt;td&gt;4.3&lt;/td&gt;
            &lt;td&gt;7.4&lt;/td&gt;
            &lt;td&gt;20.2&lt;/td&gt;
            &lt;td&gt;57.2&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--__dHlIHX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zamzdl61smrqqxtshrtm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--__dHlIHX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zamzdl61smrqqxtshrtm.png" alt="Quarkus Cold start median - SnapStart comparison" width="767" height="472"&gt;&lt;/a&gt;&lt;/p&gt;
Quarkus Cold start median



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3gn8w-c_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uexirpqus9a4s7p2m4uc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3gn8w-c_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uexirpqus9a4s7p2m4uc.png" alt="Quarkus Warm start median - SnapStart comparison" width="764" height="472"&gt;&lt;/a&gt;&lt;/p&gt;
Quarkus Warm start median






&lt;h3&gt;
  
  
  Micronaut
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;Non SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;100%&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;3758.9&lt;/td&gt;
            &lt;td&gt;3912.2&lt;/td&gt;
            &lt;td&gt;4362.3&lt;/td&gt;
            &lt;td&gt;4262.7&lt;/td&gt;
            &lt;td&gt;29.1&lt;/td&gt;
            &lt;td&gt;46.8&lt;/td&gt;
            &lt;td&gt;174.1&lt;/td&gt;
            &lt;td&gt;315.7&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;3391.1&lt;/td&gt;
            &lt;td&gt;3626&lt;/td&gt;
            &lt;td&gt;3916.1&lt;/td&gt;
            &lt;td&gt;3941.4&lt;/td&gt;
            &lt;td&gt;9.3&lt;/td&gt;
            &lt;td&gt;18.9&lt;/td&gt;
            &lt;td&gt;55&lt;/td&gt;
            &lt;td&gt;151.1&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;3146.2&lt;/td&gt;
            &lt;td&gt;3357.5&lt;/td&gt;
            &lt;td&gt;3680.2&lt;/td&gt;
            &lt;td&gt;3723.9&lt;/td&gt;
            &lt;td&gt;3.6&lt;/td&gt;
            &lt;td&gt;9.6&lt;/td&gt;
            &lt;td&gt;25.7&lt;/td&gt;
            &lt;td&gt;82&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;2517.6&lt;/td&gt;
            &lt;td&gt;2628.2&lt;/td&gt;
            &lt;td&gt;2738.1&lt;/td&gt;
            &lt;td&gt;2881.9&lt;/td&gt;
            &lt;td&gt;3.2&lt;/td&gt;
            &lt;td&gt;4.6&lt;/td&gt;
            &lt;td&gt;12.7&lt;/td&gt;
            &lt;td&gt;45.9&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
        &lt;tr&gt;
            &lt;th colspan="2"&gt;SnapStart&lt;/th&gt;
            &lt;th colspan="4"&gt;Cold Start (ms)&lt;/th&gt;
            &lt;th colspan="4"&gt;Warm Start (ms)&lt;/th&gt;           
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt; memory (MB)&lt;/th&gt;
            &lt;th&gt;error rate&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
            &lt;th&gt;p50&lt;/th&gt;
            &lt;th&gt;p90&lt;/th&gt;
            &lt;th&gt;p99&lt;/th&gt;
            &lt;th&gt;max&lt;/th&gt;
        &lt;/tr&gt;        
        &lt;tr&gt;
            &lt;th&gt;128&lt;/th&gt;
            &lt;td&gt;100%&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
            &lt;td&gt;N/A&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;256&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;1725.4&lt;/td&gt;
            &lt;td&gt;1814.6&lt;/td&gt;
            &lt;td&gt;2257.8&lt;/td&gt;
            &lt;td&gt;2289.8&lt;/td&gt;
            &lt;td&gt;29.1&lt;/td&gt;
            &lt;td&gt;44.2&lt;/td&gt;
            &lt;td&gt;175.2&lt;/td&gt;
            &lt;td&gt;231.8&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;512&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;677&lt;/td&gt;
            &lt;td&gt;729.7&lt;/td&gt;
            &lt;td&gt;798.4&lt;/td&gt;
            &lt;td&gt;809.1&lt;/td&gt;
            &lt;td&gt;11.4&lt;/td&gt;
            &lt;td&gt;20.5&lt;/td&gt;
            &lt;td&gt;65.8&lt;/td&gt;
            &lt;td&gt;91&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;1024&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;468.6&lt;/td&gt;
            &lt;td&gt;518.9&lt;/td&gt;
            &lt;td&gt;626.3&lt;/td&gt;
            &lt;td&gt;1373.1&lt;/td&gt;
            &lt;td&gt;3.7&lt;/td&gt;
            &lt;td&gt;8.6&lt;/td&gt;
            &lt;td&gt;25.5&lt;/td&gt;
            &lt;td&gt;97.7&lt;/td&gt;
        &lt;/tr&gt;
        &lt;tr&gt;
            &lt;th&gt;4096&lt;/th&gt;
            &lt;td&gt;0%&lt;/td&gt;
            &lt;td&gt;388.8&lt;/td&gt;
            &lt;td&gt;439.2&lt;/td&gt;
            &lt;td&gt;562.1&lt;/td&gt;
            &lt;td&gt;594.3&lt;/td&gt;
            &lt;td&gt;3.1&lt;/td&gt;
            &lt;td&gt;4.9&lt;/td&gt;
            &lt;td&gt;16.7&lt;/td&gt;
            &lt;td&gt;67.1&lt;/td&gt;
        &lt;/tr&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T6OIx5tS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2njcsg82eqr3rawl80m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T6OIx5tS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2njcsg82eqr3rawl80m.png" alt="Micronaut Cold start median - SnapStart comparison" width="800" height="485"&gt;&lt;/a&gt;&lt;/p&gt;
Micronaut Cold start median



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---eJtMN8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3aubj6i5tztyqa1l2ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---eJtMN8k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3aubj6i5tztyqa1l2ew.png" alt="Micronaut Warm start median - SnapStart comparison" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;
Micronaut Warm start median






&lt;p&gt;&lt;strong&gt;Spring Boot vs Quarkus vs Micronaut vs Java - cold start charts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2NQCwaLz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oclht5xxkr0wtx24m1wx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2NQCwaLz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oclht5xxkr0wtx24m1wx.png" alt="Spring Boot, Quarkus, Micronaut, Java - cold start without SnapStart median" width="660" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
Spring Boot, Quarkus, Micronaut, Java - cold start without SnapStart median



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4SejXZKw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9kghl0rfsrpvs160hpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4SejXZKw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j9kghl0rfsrpvs160hpi.png" alt="Spring Boot, Quarkus, Micronaut, Java - cold start with SnapStart median" width="662" height="406"&gt;&lt;/a&gt;&lt;/p&gt;
Spring Boot, Quarkus, Micronaut, Java - cold start with SnapStart enabled median






&lt;h3&gt;
  
  
  General conclusions and observations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;only the pure Java Lambda can run with the 128 MB configuration and handle traffic. The frameworks fail with &lt;code&gt;java.lang.OutOfMemoryError&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Spring Boot also fails a portion of requests with the 256 MB configuration, which skews the chart presentation for 256 MB&lt;/li&gt;
&lt;li&gt;the function package is largest for Spring Boot (13.7 MB) and smallest for Micronaut (11.8 MB), apart from pure Java, whose package is around 1 MB&lt;/li&gt;
&lt;li&gt;the biggest performance difference can be observed when upgrading memory from 256 MB to 512 MB. This applies to all frameworks, for both cold and warm starts&lt;/li&gt;
&lt;li&gt;enabling SnapStart brings the greatest benefit to Spring Boot - cold starts almost 10x shorter in some configurations. For Quarkus the improvement averages around 4x, and for Micronaut roughly 6x&lt;/li&gt;
&lt;li&gt;with SnapStart enabled, the cold start of every framework comes close to that of pure Java, which is a fantastic result&lt;/li&gt;
&lt;li&gt;looking at the median chart, you can see that Quarkus had the shortest cold starts without SnapStart. On the other hand, with SnapStart enabled it performs the worst&lt;/li&gt;
&lt;li&gt;SnapStart doesn't significantly affect warm starts. It's hard to say whether it has any effect at all&lt;/li&gt;
&lt;/ul&gt;
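For context, SnapStart is switched on in the function definition rather than in application code. A minimal AWS SAM fragment could look like the sketch below; the function name, handler, and runtime here are illustrative, not taken from the benchmark project.

```yaml
# Hypothetical AWS SAM fragment enabling SnapStart - names are illustrative.
Resources:
  BenchmarkFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.Handler::handleRequest  # illustrative handler
      Runtime: java11
      MemorySize: 512
      AutoPublishAlias: live        # SnapStart applies to published versions/aliases
      SnapStart:
        ApplyOn: PublishedVersions  # snapshot is taken when a version is published
```

Note that the snapshot is taken at version publish time, which is why an alias or published version is needed to actually benefit from it.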

</description>
      <category>serverless</category>
      <category>java</category>
      <category>quarkus</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Spring Boot logging with Loki, Promtail, and Grafana (Loki stack)</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Fri, 06 Jan 2023 11:08:08 +0000</pubDate>
      <link>https://dev.to/luafanti/spring-boot-logging-with-loki-promtail-and-grafana-loki-stack-aep</link>
      <guid>https://dev.to/luafanti/spring-boot-logging-with-loki-promtail-and-grafana-loki-stack-aep</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/luafanti/spring-boot-monitoring-with-prometheus-operator-40g1"&gt;the previous article&lt;/a&gt;, I presented how to setup a monitoring stack using Prometheus Operator and integrate it with a sample Spring Boot app. This post will be analogous to the previous one but will be about another important topic - logs.&lt;/p&gt;

&lt;p&gt;We will use a Spring Boot application in the demo. However, you will be able to configure any other app by following this article. The only thing you need to ensure is that your app is configured to produce logs in JSON format.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure Spring Boot to produce JSON logs
&lt;/h2&gt;

&lt;p&gt;This is a &lt;a href="https://github.com/luafanti/spring-boot-debug-app" rel="noopener noreferrer"&gt;GitHub link&lt;/a&gt; to my demo app. It’s a simple Spring Boot web app used for debugging various things. There are many ways to configure JSON logging in Spring Boot. I decided to use &lt;a href="https://logback.qos.ch/" rel="noopener noreferrer"&gt;Logback&lt;/a&gt; because it is easy to configure and one of the most widely used logging libraries in the Java community. To enable JSON logging, we need to add the dependencies below.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

implementation&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"ch.qos.logback.contrib:logback-json-classic:0.1.5"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
implementation&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"ch.qos.logback.contrib:logback-jackson:0.1.5"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
implementation&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"org.codehaus.janino:janino:3.1.9"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="http://janino-compiler.github.io/janino" rel="noopener noreferrer"&gt;Janino&lt;/a&gt; dependency additionally adds support for conditional processing in configuration file &lt;code&gt;logback.xml&lt;/code&gt;. Thanks to this, we can make our configuration parameterized. Use standard logs output when running the application locally, but enable JSON logging only when the application is running in the Kubernetes Pod by injecting the proper env variable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/main/src/main/resources/logback.xml" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can find my Logback configuration file. This is all that you have to configure in Spring Boot app to enable JSON logging. When you run applications with env &lt;code&gt;JSON_LOGS_ENABLED=true&lt;/code&gt; logs should be printed in JSON format. More information about &lt;a href="https://logback.qos.ch/manual/layouts.html" rel="noopener noreferrer"&gt;Logback Layouts&lt;/a&gt; - component responsible for formatting log messages. &lt;/p&gt;

&lt;h2&gt;
  
  
  Grafana Loki stack
&lt;/h2&gt;

&lt;p&gt;Loki Stack consists of 3 main components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loki - log aggregation system responsible for storing the logs and processing queries.&lt;/li&gt;
&lt;li&gt;Promtail - lightweight agent responsible for gathering logs and pushing them to Loki. You can compare it to Fluentbit or Filebeat.&lt;/li&gt;
&lt;li&gt;Grafana - visualization layer responsible for querying and displaying the logs on dashboards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv995vl59y0g72toyp3ub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv995vl59y0g72toyp3ub.png" alt="Loki stack - Promtail &amp;amp; Loki &amp;amp; Grafana"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Grafana Labs in its Helm repository provides a chart that installs the &lt;a href="https://github.com/grafana/helm-charts/tree/main/charts/loki-stack" rel="noopener noreferrer"&gt;Loki stack&lt;/a&gt;, optionally together with other complementary tools like Logstash or Prometheus.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick installation with helmfile
&lt;/h3&gt;

&lt;p&gt;If you haven't used &lt;a href="https://github.com/helmfile/helmfile" rel="noopener noreferrer"&gt;helmfile&lt;/a&gt; yet, I strongly encourage you to check out this tool. In my &lt;a href="https://dev.to/luafanti/spring-boot-monitoring-with-prometheus-operator-40g1#quick-installation-with-helmfile"&gt;previous article&lt;/a&gt; about monitoring, I described why helmfile is worth using. Here I'll just leave you the &lt;a href="https://gist.github.com/luafanti/df3116022157cabd516ccd26cb8f7565" rel="noopener noreferrer"&gt;installation Gist&lt;/a&gt;.&lt;/p&gt;
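In short, a helmfile is a declarative list of Helm releases. A minimal sketch for this stack could look as follows; the repository alias, namespace, and value file path mirror what is used in this article, but treat the fragment as illustrative rather than a copy of the repo.

```yaml
# Illustrative helmfile.yaml sketch - see the linked repository for the real one.
repositories:
  - name: grafana-labs
    url: https://grafana.github.io/helm-charts

releases:
  - name: loki-stack
    namespace: logging
    chart: grafana-labs/loki-stack
    version: 2.8.9
    values:
      - vars/loki-stack.yaml   # overrides for Promtail pipeline, Grafana, etc.
```

`helmfile apply` then diffs every release against the cluster and installs or upgrades only what changed.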

&lt;p&gt;Firstly, clone my &lt;a href="https://github.com/luafanti/grafana-loki-stack-helmfile" rel="noopener noreferrer"&gt;repo with the Loki stack helmfile&lt;/a&gt;, and check how little configuration is needed to install everything. This is because the Loki stack installation comes with reasonably safe defaults wherever possible, so we only have to override a few crucial values.&lt;br&gt;
To install it, we need to execute a single command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helmfile apply &lt;span class="nt"&gt;-i&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After a short while, you should see a message that you have successfully installed three releases.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

UPDATED RELEASES:
NAME                 CHART                             VERSION
loki-stack           grafana-labs/loki-stack             2.8.9
grafana-dashboards   local-charts/grafana-dashboards     1.0.0
demo                 luafanti/spring-debug-app           1.0.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Explore logs in Grafana and understand Promtail scraping
&lt;/h2&gt;

&lt;p&gt;Establish a tunnel to Grafana and check whether the preinstalled dashboards can show log data. If you want to know how my local Chart &lt;code&gt;grafana-dashboards&lt;/code&gt; adds dashboards to Grafana, I refer you to the &lt;a href="https://dev.to/luafanti/spring-boot-monitoring-with-prometheus-operator-40g1#where-are-grafana-dashboards-installed"&gt;paragraph in my previous article.&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# establish port to Grafana Service&lt;/span&gt;
kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; logging svc/loki-stack-grafana 3000:80

&lt;span class="c"&gt;# get Grafana Credentials from Secrets&lt;/span&gt;
kubectl get secrets &lt;span class="nt"&gt;-n&lt;/span&gt; logging loki-stack-grafana &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{index .data "admin-password" | base64decode}}'&lt;/span&gt;
kubectl get secrets &lt;span class="nt"&gt;-n&lt;/span&gt; logging loki-stack-grafana &lt;span class="nt"&gt;--template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{{index .data "admin-user" | base64decode}}'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should be able to see logs like the ones below in the custom dashboard. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d7n0hfh1qsseflwagu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d7n0hfh1qsseflwagu2.png" alt="Grafana logging dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to custom variables that use labels, we can create various filters for the dashboard. You can look up my configuration of variables and extend it analogously for your own needs. At the top, I marked the filter with the pods detected in the selected namespace. In the lower part, you can see a preview of all labels associated with a single log line. Most labels are meta information that Promtail adds while scraping targets. &lt;a href="https://github.com/luafanti/grafana-loki-stack-helmfile/blob/83e2cf889776b481bcd518f78abc8d8acd1d07e6/vars/loki-stack.yaml#L46" rel="noopener noreferrer"&gt;This part&lt;/a&gt; of the Promtail configuration provides it. In this section, I also marked a few labels that don't come out of the box, e.g. &lt;code&gt;level&lt;/code&gt;, &lt;code&gt;class&lt;/code&gt;, &lt;code&gt;thread&lt;/code&gt;. We added these labels using the &lt;a href="https://grafana.com/docs/loki/latest/clients/promtail/stages/json/" rel="noopener noreferrer"&gt;Promtail json stage&lt;/a&gt;. You need to know that Promtail processes scraped logs in a pipeline, which comprises a set of stages. The &lt;code&gt;json&lt;/code&gt; stage is a parsing stage that reads the log line as JSON and accepts &lt;a href="http://jmespath.org/" rel="noopener noreferrer"&gt;JMESPath&lt;/a&gt; expressions to extract data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-config&lt;/span&gt;
  &lt;span class="na"&gt;pipeline_stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;expressions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;timestamp&lt;/span&gt;
          &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;level&lt;/span&gt;
          &lt;span class="na"&gt;thread&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;thread&lt;/span&gt;
          &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;logger&lt;/span&gt;
          &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;message&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;context&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;level&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;thread&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RFC3339&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;timestamp&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;message&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One very important thing❗ As you can see in the config, I added a &lt;code&gt;docker&lt;/code&gt; stage above the &lt;code&gt;json&lt;/code&gt; stage. &lt;a href="https://grafana.com/docs/loki/latest/clients/promtail/stages/docker/" rel="noopener noreferrer"&gt;This stage&lt;/a&gt; can properly read logs from Kubernetes nodes that use the &lt;code&gt;docker&lt;/code&gt; &lt;a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="noopener noreferrer"&gt;container runtime&lt;/a&gt;. If the nodes in your cluster use a different container runtime, e.g. &lt;code&gt;containerd&lt;/code&gt; (quite popular in managed Kubernetes clusters, e.g. EKS, AKS), you have to replace this stage with the &lt;code&gt;cri&lt;/code&gt; &lt;a href="https://grafana.com/docs/loki/latest/clients/promtail/stages/cri/" rel="noopener noreferrer"&gt;stage&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-config&lt;/span&gt;
  &lt;span class="na"&gt;pipeline_stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;### rest of config&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;BTW, the &lt;code&gt;docker&lt;/code&gt; container runtime has already been &lt;a href="https://kubernetes.io/blog/2020/12/02/dockershim-faq/" rel="noopener noreferrer"&gt;deprecated&lt;/a&gt;. Its successor is &lt;code&gt;containerd&lt;/code&gt;, which was also designed by Docker. Containerd is simplified compared to its predecessor - it offers a minimal set of functionality for managing images and executing containers on a node.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dealing with multi-line stack traces of Java
&lt;/h2&gt;

&lt;p&gt;If you have ever worked with Java application logs, you know that stack traces are painful - not only because they are often misunderstood, but also because they are multi-line. When a multi-line event is written to a log output, Promtail, like many other scrapers, will treat each row as its own entry and send them separately to Loki. I have good news for you: the prepared configuration solves this problem, thanks to the Spring Boot JSON log output and the appropriate Promtail configuration. You can verify it easily by following these steps.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# establish tunnel to Spring Boot Service&lt;/span&gt;
kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; sandbox svc/demo-spring-debug-app 8080:8080

&lt;span class="c"&gt;# call Spring Boot debug endpoint to produce exception log&lt;/span&gt;
curl http://localhost:8080/logs/exception


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Go back to Grafana and verify that stack traces are printed correctly. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8480rwogsxfzuar6m617.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8480rwogsxfzuar6m617.png" alt="Grafana multi-line stack traces logging"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing words
&lt;/h2&gt;

&lt;p&gt;In this and &lt;a href="https://dev.to/luafanti/spring-boot-monitoring-with-prometheus-operator-40g1"&gt;the previous article&lt;/a&gt;, we covered two very important aspects of application observability - monitoring and logging. The only thing missing for the full trinity is tracing. Tracing is especially important in microservice or nanoservice (serverless) architectures. In this area, Grafana Labs also has an interesting solution that is worth checking out - &lt;a href="https://grafana.com/oss/tempo/" rel="noopener noreferrer"&gt;Tempo&lt;/a&gt;. In the next part of this series, I will try to introduce this tool in a similar way.&lt;/p&gt;

</description>
      <category>logging</category>
      <category>loki</category>
      <category>grafana</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Spring Boot monitoring with Prometheus Operator</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Fri, 30 Dec 2022 10:19:29 +0000</pubDate>
      <link>https://dev.to/luafanti/spring-boot-monitoring-with-prometheus-operator-40g1</link>
      <guid>https://dev.to/luafanti/spring-boot-monitoring-with-prometheus-operator-40g1</guid>
      <description>&lt;p&gt;In this article, we will install a Prometheus Operator that will automatically detect targets for monitoring. If you have used Prometheus before, either without the Operator or outside of Kubernetes, you will see how the Operator and its CRDs can make Prometheus flexible and how many things can happen magically leverages Kubernetes capabilities. &lt;/p&gt;

&lt;p&gt;We will use a Spring Boot application in the demo. However, you will be able to configure any other app by following this article. If your stack isn’t Spring Boot, just skip the first paragraph.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prepare Spring Boot to expose Prometheus metrics
&lt;/h2&gt;

&lt;p&gt;My demo app (&lt;a href="https://github.com/luafanti/spring-boot-debug-app" rel="noopener noreferrer"&gt;GitHub link&lt;/a&gt;) uses Spring Boot version 3, or more precisely the latest release from 2022, i.e. &lt;strong&gt;3.0.1&lt;/strong&gt;. The core monitoring component in Spring Boot is the &lt;a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html" rel="noopener noreferrer"&gt;Actuator&lt;/a&gt;. If you remember the migration of Spring Boot from version 1 to 2, you’ll probably recall that the update brought a lot of breaking changes to the Actuator. Fortunately, no such changes were made in version 3, so you can also apply the following configurations to Spring Boot version 2.x.x.&lt;/p&gt;

&lt;p&gt;To expose metrics consumable by Prometheus, you need to add two dependencies. The first one enables the Actuator features, the second one is the &lt;a href="https://micrometer.io/docs/registry/prometheus" rel="noopener noreferrer"&gt;Prometheus exporter&lt;/a&gt; provided by Micrometer.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;implementation(&lt;/span&gt;&lt;span class="s2"&gt;"org.springframework.boot:spring-boot-starter-actuator"&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;runtimeOnly(&lt;/span&gt;&lt;span class="s2"&gt;"io.micrometer:micrometer-registry-prometheus"&lt;/span&gt;&lt;span class="err"&gt;)&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;All you have to do to enable the default metrics is provide the configuration below. As you can see, I expose the entire Actuator on a separate port. It is good practice to separate the business layer from the technical endpoints at the port level.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;management&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8081&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;exposure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;include&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;health,info,metrics,prometheus"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;From now on, Spring Boot metrics in Prometheus format should be visible at &lt;code&gt;http://localhost:8081/actuator/prometheus&lt;/code&gt;.&lt;/p&gt;
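&lt;p&gt;If you hit that endpoint (e.g. with &lt;code&gt;curl&lt;/code&gt;), you should get plaintext in the Prometheus exposition format. An illustrative fragment - the actual metric names and values depend on your application:&lt;/p&gt;

```
# HELP jvm_memory_used_bytes The amount of used memory
# TYPE jvm_memory_used_bytes gauge
jvm_memory_used_bytes{area="heap",id="G1 Eden Space"} 5.24288E7
# HELP http_server_requests_seconds Duration of HTTP server request handling
# TYPE http_server_requests_seconds summary
http_server_requests_seconds_count{method="GET",status="200",uri="/actuator/health"} 3.0
```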

&lt;h2&gt;
  
  
  Prometheus Operator
&lt;/h2&gt;

&lt;p&gt;Kubernetes operators are applications that automate installation and configuration (&lt;strong&gt;Day-1 tasks&lt;/strong&gt;) as well as scaling, upgrades, backups, recovery, etc. (&lt;strong&gt;Day-2 tasks&lt;/strong&gt;) for stateful applications. We can say that operators replace part of the manual administration work. Under the hood, operators run a &lt;strong&gt;reconciliation loop&lt;/strong&gt; (watching for changes in the application state) and use &lt;strong&gt;CRDs&lt;/strong&gt; to extend the Kubernetes API. Generally speaking, an operator is the operational knowledge of specific software captured in &lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers" rel="noopener noreferrer"&gt;custom controller&lt;/a&gt; code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/prometheus-operator/prometheus-operator" rel="noopener noreferrer"&gt;Prometheus Operator&lt;/a&gt; is an independent project from the Prometheus project. I know, it can lead to confusion. In the official &lt;a href="https://github.com/prometheus-operator/prometheus-operator#prometheus-operator-vs-kube-prometheus-vs-community-helm-chart" rel="noopener noreferrer"&gt;README&lt;/a&gt; you can find short comparison. Basically, Prometheus Operator does what an operator should do - provides Kubernetes native deployment and management of Prometheus and related monitoring components like Grafana or Alert Manager. &lt;/p&gt;

&lt;h3&gt;
  
  
  Quick installation with helmfile
&lt;/h3&gt;

&lt;p&gt;If you haven't used &lt;a href="https://github.com/helmfile/helmfile" rel="noopener noreferrer"&gt;helmfile&lt;/a&gt; yet, I strongly encourage you to check out this tool. It provides a lot of improvements for working with Helm charts, but you don't need to adopt all of them at once. You can easily switch your Helm releases to helmfile and immediately gain one killer feature - an interactive diff that works like Terraform plan. &lt;a href="https://gist.github.com/luafanti/df3116022157cabd516ccd26cb8f7565" rel="noopener noreferrer"&gt;Installation Gist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Firstly, clone my &lt;a href="https://github.com/luafanti/prometheus-operator-helmfile" rel="noopener noreferrer"&gt;GitHub repo with Prometheus Operator helmfile&lt;/a&gt;, and check how little configuration is needed to install the whole stack. This is because the Prometheus Operator installation comes with reasonably safe defaults wherever possible, so we only have to override some crucial values.&lt;br&gt;
To install it, we need to execute a single command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helmfile apply &lt;span class="nt"&gt;-i&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;-i&lt;/code&gt; flag enables interactive mode: helmfile will ask for confirmation before attempting to modify cluster state. With the first installation you will probably see a very long diff, so it won't be very useful yet. The power of this feature becomes apparent once you start making small changes to your releases - the same as with Terraform.  &lt;/p&gt;
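&lt;p&gt;For orientation, a minimal &lt;code&gt;helmfile.yaml&lt;/code&gt; for a stack like this could look as follows - an illustrative sketch, not the exact file from the repo (the values file path is an assumption):&lt;/p&gt;

```yaml
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: kube-prometheus-stack
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
    version: 43.2.0
    values:
      - vars/kube-prometheus-stack.yaml
```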

&lt;p&gt;After a short while, you should see a message that you have successfully installed three releases.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

UPDATED RELEASES:
NAME                    CHART                                        VERSION
kube-prometheus-stack   prometheus-community/kube-prometheus-stack    43.2.0
grafana-dashboards      local-charts/grafana-dashboards                1.0.0
demo                    luafanti/spring-debug-app                      1.0.0


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Establish a tunnel to Grafana and check whether the preinstalled dashboards show some data.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring svc/kube-prometheus-stack-grafana 3000:80


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should be able to see 3 dashboard directories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bbjwz7l9acl926kclxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bbjwz7l9acl926kclxg.png" alt="Grafana predefined dashboards"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first one - &lt;code&gt;General&lt;/code&gt; - is preinstalled together with the Prometheus Operator; the remaining ones come from the &lt;a href="https://github.com/luafanti/prometheus-operator-helmfile/tree/main/local-charts/grafana-dashboards/dashboards" rel="noopener noreferrer"&gt;local helm chart&lt;/a&gt;. This is the path where you can add any Grafana dashboard as a JSON file and install it along with the whole stack. If you want to add your own dashboards, I recommend first importing/creating the dashboard in the Grafana UI, then exporting it as a JSON file, and then adding it to the project. Thanks to this you will avoid problems with missing datasources.&lt;/p&gt;

&lt;p&gt;I could end this post here. We managed to install what we wanted so the goal was achieved &lt;strong&gt;🎉 🎯&lt;/strong&gt;. However, let me briefly explain the most interesting things that happen underneath.&lt;/p&gt;

&lt;h3&gt;
  
  
  How do metrics flow from the Spring Boot application to Grafana?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8hw3drgxe0r4c7cy00l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk8hw3drgxe0r4c7cy00l.png" alt="Metrics flow from Spring Boot via Prometheus to Grafana"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to the Spring Boot Actuator project, exposing operational information becomes trivial. As you can see above, all metrics are exposed on a separate port, 8081. Thanks to this, we have a dedicated gateway that we can open only to Prometheus. The &lt;a href="https://micrometer.io/docs/registry/prometheus" rel="noopener noreferrer"&gt;Prometheus exporter&lt;/a&gt; by Micrometer extends Actuator with a dedicated endpoint, &lt;code&gt;/actuator/prometheus&lt;/code&gt;, which publishes application metrics in Prometheus format. We'll configure Prometheus to poll (&lt;strong&gt;scrape&lt;/strong&gt;) this endpoint to fetch metrics and store them in its database. Note that in the Spring Boot app I added an &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/main/src/main/kotlin/com/luafanti/debug/config/MonitoringConfig.kt" rel="noopener noreferrer"&gt;additional label configuration&lt;/a&gt;. This adds the &lt;code&gt;application=spring-boot-demo&lt;/code&gt; label to every single metric. The label is used in the preinstalled Grafana dashboard as one of the filter variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo97b4uudwp8j4tkc8t8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo97b4uudwp8j4tkc8t8g.png" alt="Grafana variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You need to know that Prometheus stores all data as time series. Every time series is uniquely identified by its &lt;strong&gt;metric name&lt;/strong&gt; and optional &lt;strong&gt;labels&lt;/strong&gt;. Labels enable a dimensional data model: any combination of labels for the same metric name identifies a particular dimension of that metric. Because Prometheus stores metrics this way, tools such as Grafana have an advanced ability to filter results and present them across various dimensions.&lt;/p&gt;
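&lt;p&gt;For example, in PromQL the same metric name can be narrowed down by labels, and each label combination is a separate time series. The metric and label names below assume the Micrometer setup described earlier:&lt;/p&gt;

```
# all HTTP request-count series for the demo app
http_server_requests_seconds_count{application="spring-boot-demo"}

# one particular dimension: successful GET requests to a single URI
http_server_requests_seconds_count{application="spring-boot-demo",method="GET",status="200",uri="/actuator/health"}
```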

&lt;h3&gt;
  
  
  How does the Prometheus Operator discover endpoints to scrape?
&lt;/h3&gt;

&lt;p&gt;This is a fundamental question that should be bothering us. If you've worked with Prometheus before, you probably know that it requires configuring all endpoints for scraping. In a Kubernetes environment, where pods appear and disappear quite often, it is impossible to maintain such a static config. This is where one of the most powerful parts of the Prometheus Operator comes into play - the &lt;strong&gt;ServiceMonitor&lt;/strong&gt;. The ServiceMonitor is one of the Prometheus Operator CRDs. It defines a set of targets to be monitored by Prometheus, and the Operator automatically generates the scrape configuration based on that definition. Below you can see the configuration responsible for defining the ServiceMonitor for the Spring Boot app.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;additionalServiceMonitors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-prometheus-stack-spring-boot&lt;/span&gt;
      &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;prometheus-monitoring&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;true'&lt;/span&gt;
      &lt;span class="na"&gt;namespaceSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;matchNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sandbox&lt;/span&gt;
      &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;management&lt;/span&gt;
          &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/actuator/prometheus&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It uses label selectors to define which Services to monitor, which namespaces to look in, and the port on which the metrics are exposed. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# check all installed ServiceMonitors&lt;/span&gt;
kubectl get servicemonitors.monitoring.coreos.com


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Besides our explicitly defined ServiceMonitor, the default installation of the Prometheus Operator creates several others. These are ServiceMonitors used to monitor the Kubernetes cluster itself, as well as the Prometheus and Grafana instances.&lt;/p&gt;

&lt;p&gt;You can also view all targets defined by ServiceMonitor in the Prometheus UI.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; monitoring svc/kube-prometheus-stack-prometheus 9090:9090
chrome http://localhost:9090/targets


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facchrux3irts10tyaywm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Facchrux3irts10tyaywm.png" alt="Prometheus targets"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One very important thing!&lt;/strong&gt; Targets appear only when Prometheus finds a Service that has the appropriate labels - like &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/ca6697742cfcf156c42b47131951e6ad7d3979eb/infra/helm/templates/service.yaml#L8" rel="noopener noreferrer"&gt;here&lt;/a&gt; for my Spring Boot chart. If your Service doesn’t have the &lt;a href="https://github.com/luafanti/prometheus-operator-helmfile/blob/65eaabfe0c9f43487de23016934c38fcca95669f/vars/kube-prometheus-stack.yaml#L8" rel="noopener noreferrer"&gt;matching labels&lt;/a&gt;, you will not see the target marked as unavailable/down - it simply will not appear at all.&lt;/p&gt;
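&lt;p&gt;For illustration, a Service that Prometheus will discover could be labeled like this - a minimal sketch where the label key and port name match the ServiceMonitor definition shown earlier, and the remaining names are placeholders:&lt;/p&gt;

```yaml
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo
  namespace: sandbox
  labels:
    prometheus-monitoring: 'true'   # must match the ServiceMonitor matchLabels
spec:
  selector:
    app: spring-boot-demo
  ports:
    - name: management              # referenced by the ServiceMonitor endpoint port
      port: 8081
      targetPort: 8081
```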

&lt;h3&gt;
  
  
  Where are Grafana dashboards installed?
&lt;/h3&gt;

&lt;p&gt;When Grafana starts, it updates/inserts all dashboards available under the configured path. Dashboards are provided under this path with the help of a sidecar container. The sidecar watches for new dashboards defined as ConfigMaps and adds them dynamically without restarting the pod. Below you can see the relevant configuration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

grafana:
  sidecar:
    dashboards:
      enabled: &lt;span class="nb"&gt;true
      &lt;/span&gt;label: grafana_dashboard
      folder: /tmp/dashboards
      provider:
        foldersFromFilesStructure: &lt;span class="nb"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is only the first part of the setup. We still need to provide the definitions of our predefined dashboards as ConfigMaps. For this I created a &lt;a href="https://github.com/luafanti/prometheus-operator-helmfile/tree/main/local-charts/grafana-dashboards" rel="noopener noreferrer"&gt;local helm chart&lt;/a&gt; &lt;code&gt;grafana-dashboards&lt;/code&gt;. As you can see, the only object this chart creates is the ConfigMap with dashboard definitions, which the Grafana sidecar reads and extracts under the configured path.&lt;/p&gt;
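&lt;p&gt;Such a dashboard ConfigMap could look like this - a minimal sketch; the &lt;code&gt;grafana_dashboard&lt;/code&gt; label key comes from the sidecar configuration above, while the names and dashboard body are placeholders:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards-spring-boot
  labels:
    grafana_dashboard: "1"      # the sidecar watches for ConfigMaps carrying this label
data:
  spring-boot.json: |
    { "title": "Spring Boot", "panels": [] }
```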

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# get all ConfigMaps with dashboard definitions&lt;/span&gt;
kubectl get cm | &lt;span class="nb"&gt;grep &lt;/span&gt;grafana-dashboards

&lt;span class="c"&gt;# check if ConfigMaps are properly injected under configured path. You should see two dirs with predefined dashboards.&lt;/span&gt;
kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; kube-prometheus-stack-grafana-5f4976649d-w7q56 &lt;span class="nt"&gt;-c&lt;/span&gt; grafana &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;ls&lt;/span&gt; /tmp/dashboards


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>prometheus</category>
      <category>prometheusoperator</category>
      <category>grafana</category>
      <category>springboot</category>
    </item>
    <item>
      <title>Switching between multiple versions of various tools</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Thu, 15 Dec 2022 11:48:55 +0000</pubDate>
      <link>https://dev.to/luafanti/switching-between-multiple-versions-of-various-tools-3g1c</link>
      <guid>https://dev.to/luafanti/switching-between-multiple-versions-of-various-tools-3g1c</guid>
      <description>&lt;p&gt;I'm sure you've worked on different projects that used different versions of tools despite having the same technology stack. Whether you work as a frontend dev, backend dev, or DevOps engineer, this problem can happen anywhere. Switching between versions of various tools can be tricky and painful. In this article, I decided to put together tools that help reduce this pain.&lt;br&gt;
Whenever I needed to switch between versions, I googled how to do it. I'll treat this article as a private cheat sheet. If you use the following tools on a daily basis, I encourage you to save this post and treat it the same way.&lt;/p&gt;




&lt;h3&gt;
  
  
  Switching Terraform version - &lt;a href="https://github.com/tfutils/tfenv"&gt;tfenv&lt;/a&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Mac OS&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;tfenv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Basic commands
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# list all installed terraform versions&lt;/span&gt;
tfenv list

&lt;span class="c"&gt;# list all available terraform versions for installation&lt;/span&gt;
tfenv list-remote

&lt;span class="c"&gt;# install selected terraform version&lt;/span&gt;
tfenv &lt;span class="nb"&gt;install &lt;/span&gt;1.3.6

&lt;span class="c"&gt;# switch to installed terraform version&lt;/span&gt;
tfenv use 1.3.6

&lt;span class="c"&gt;# print currently set terraform version&lt;/span&gt;
tfenv version-name

&lt;span class="c"&gt;# uninstall selected terraform version&lt;/span&gt;
tfenv uninstall 1.0.11
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
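&lt;p&gt;tfenv can also pin a version per project with a &lt;code&gt;.terraform-version&lt;/code&gt; file committed to the repo - tfenv picks it up automatically, so the whole team gets the same version without running &lt;code&gt;tfenv use&lt;/code&gt; by hand. A minimal sketch:&lt;/p&gt;

```shell
# pin this project to Terraform 1.3.6; tfenv reads the file automatically
echo "1.3.6" > .terraform-version
cat .terraform-version
```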






&lt;h3&gt;
  
  
  Switching Java, Maven, and Gradle version  - &lt;a href="https://github.com/sdkman/sdkman-cli"&gt;sdkman&lt;/a&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://get.sdkman.io | bash
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bash_profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Basic commands
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# list all installed and installable versions&lt;/span&gt;
sdk list java
sdk list maven
sdk list gradle

&lt;span class="c"&gt;# install selected version&lt;/span&gt;
sdk &lt;span class="nb"&gt;install &lt;/span&gt;java 17.0.5-zulu
sdk &lt;span class="nb"&gt;install &lt;/span&gt;maven 3.8.6
sdk &lt;span class="nb"&gt;install &lt;/span&gt;gradle 7.6

&lt;span class="c"&gt;# switch to installed version&lt;/span&gt;
sdk use java 17.0.5-zulu
sdk use maven 3.8.6
sdk use gradle 7.6

&lt;span class="c"&gt;# print currently set version&lt;/span&gt;
sdk current java
sdk current maven
sdk current gradle

&lt;span class="c"&gt;# uninstall selected version&lt;/span&gt;
sdk uninstall java 8.0.352-zulu
sdk uninstall maven 3.6.0
sdk uninstall gradle 7.4

&lt;span class="c"&gt;# list all tools whose versions sdkman can manage&lt;/span&gt;
sdk list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
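&lt;p&gt;sdkman offers the same kind of per-project pinning through an &lt;code&gt;.sdkmanrc&lt;/code&gt; file - you can generate one with &lt;code&gt;sdk env init&lt;/code&gt; and apply it with &lt;code&gt;sdk env&lt;/code&gt;. A minimal sketch:&lt;/p&gt;

```shell
# pin the project's Java version; `sdk env` applies everything listed here
echo "java=17.0.5-zulu" > .sdkmanrc
cat .sdkmanrc
```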






&lt;h3&gt;
  
  
  Switching Node &amp;amp; npm version - &lt;a href="https://github.com/nvm-sh/nvm"&gt;nvm&lt;/a&gt;
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Mac OS&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;nvm
&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/.nvm

&lt;span class="c"&gt;# support for Oh My ZSH&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"export NVM_DIR=~/.nvm&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;source &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;(brew --prefix nvm)/nvm.sh"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.zshrc
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.zshrc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Basic commands
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# list all installed node versions&lt;/span&gt;
nvm &lt;span class="nb"&gt;ls&lt;/span&gt;

&lt;span class="c"&gt;# list all available node versions for installation&lt;/span&gt;
nvm ls-remote

&lt;span class="c"&gt;# install selected node version and switch to them&lt;/span&gt;
nvm &lt;span class="nb"&gt;install &lt;/span&gt;v19.2.0

&lt;span class="c"&gt;# install latest LTS node version&lt;/span&gt;
nvm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--lts&lt;/span&gt;

&lt;span class="c"&gt;# switch to installed node version&lt;/span&gt;
nvm use v19.2.0

&lt;span class="c"&gt;# print currently set node version&lt;/span&gt;
nvm current

&lt;span class="c"&gt;# uninstall selected node version&lt;/span&gt;
nvm uninstall v10.15.3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
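&lt;p&gt;nvm supports per-project pinning as well via an &lt;code&gt;.nvmrc&lt;/code&gt; file - running &lt;code&gt;nvm use&lt;/code&gt; with no argument reads it. A minimal sketch:&lt;/p&gt;

```shell
# pin the project's node version; `nvm use` (no args) will read it
echo "v19.2.0" > .nvmrc
cat .nvmrc
```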






&lt;p&gt;I will expand this article when I discover other tools of this type. If you use any other version management tools, please let me know in the comments - I'd be happy to take a look.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>jdk</category>
      <category>node</category>
    </item>
    <item>
      <title>Subjective comparison of Security Testing products. Sonatype vs JFrog vs Snyk</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Mon, 12 Dec 2022 15:21:17 +0000</pubDate>
      <link>https://dev.to/luafanti/subjective-comparison-of-security-testing-products-sonatype-vs-jfrog-vs-snyk-7d1</link>
      <guid>https://dev.to/luafanti/subjective-comparison-of-security-testing-products-sonatype-vs-jfrog-vs-snyk-7d1</guid>
      <description>&lt;p&gt;For many years now, it has been impossible to imagine building solutions without relying on open source. In fact, every project I've worked on has benefited more or less from community development. This trend doesn’t apply only to product companies and start-ups. Large financial institutions and other critical sectors are also reaping the benefits of open source. The State of Open Source report by OpenLogic &amp;amp; Open Source Initiative, among others, confirms this statement.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vtDDBKeW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zlcc8e5nbtmsu23dal4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vtDDBKeW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6zlcc8e5nbtmsu23dal4.png" alt="Open source usage in companies" width="519" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The 2022 State of Open Source Report Open Source Usage, Market Trends, &amp;amp; Analysis by&lt;/em&gt; OpenLogic &amp;amp; Open Source Initiative&lt;/p&gt;

&lt;p&gt;Such extensive use of open source can lead to some problems. Anyone who has worked with &lt;code&gt;npm&lt;/code&gt; or &lt;code&gt;maven&lt;/code&gt; based applications knows this. Developers, including myself, are often tempted to rely on external libraries and tools. This makes the list of dependencies grow and grow, and it isn't easy to keep track of. This raises the issue of trust in these dependencies. At some point, our project/product will need to generate reports on the vulnerabilities and licenses of its dependencies.&lt;/p&gt;

&lt;p&gt;As software engineers, we should care about the quality and security of our solutions. For this reason, we should address this topic as early as possible in the Software development lifecycle (SDLC). &lt;/p&gt;

&lt;p&gt;I have recently been researching and evaluating the most popular commercial products for open source security scanning - &lt;strong&gt;Sonatype, JFrog, Snyk&lt;/strong&gt;. I have decided to bring all my outcomes together in this article. The comparison will be subjective and will refer to aspects that were crucial in my use case.&lt;/p&gt;

&lt;p&gt;The technology stack of my project - a key aspect for further comparison:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Java/Kotlin microservices&lt;/li&gt;
&lt;li&gt;Docker images as artifacts&lt;/li&gt;
&lt;li&gt;GitHub as code repositories &amp;amp; CI/CD&lt;/li&gt;
&lt;li&gt;Kubernetes&lt;/li&gt;
&lt;li&gt;IaC with Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under this &lt;a href="https://miro.com/app/board/uXjVP8v3Ehg=/"&gt;Miro Link&lt;/a&gt; you will find the same table colored and better formatted than in limited Markdown. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Sonatype&lt;/th&gt;
&lt;th&gt;JFrog&lt;/th&gt;
&lt;th&gt;Snyk&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SaaS&lt;/td&gt;
&lt;td&gt;No SaaS option. Planned for next years.&lt;/td&gt;
&lt;td&gt;Available.&lt;/td&gt;
&lt;td&gt;Available.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Self-hosted&lt;/td&gt;
&lt;td&gt;Available.&lt;/td&gt;
&lt;td&gt;Available.&lt;/td&gt;
&lt;td&gt;Not available.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Free plan&lt;/td&gt;
&lt;td&gt;Not available.&lt;/td&gt;
&lt;td&gt;Available and sufficient for small projects but only for SaaS.&lt;/td&gt;
&lt;td&gt;Sufficient for PoC and private home projects.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pricing&lt;/td&gt;
&lt;td&gt;Very high even for medium-sized teams due to minimum number of licenses required.&lt;/td&gt;
&lt;td&gt;Reasonable for medium-sized teams. High cost with full package and more licenses.&lt;/td&gt;
&lt;td&gt;Reasonable on any scale.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Licensing&lt;/td&gt;
&lt;td&gt;Very complicated. A large number of tools and licenses creates a confusing ecosystem. It's hard to find clear information in the documentation. Without contacting the sales team, it is practically impossible to estimate the final price. It is not possible to test anything without a trial license, and to get one you have to arrange a series of meetings with sales (3 in my case) to start with a PoC - it costs a lot of time.&lt;/td&gt;
&lt;td&gt;More complicated than Snyk due to several installation options (SaaS, self-hosted) and a list of additional sub-products. Possibility to try a paid version after contacting the sales team.&lt;/td&gt;
&lt;td&gt;The most transparent pricing. Possibility to test a paid plan for free without providing a credit card or contacting the sales team.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker images support&lt;/td&gt;
&lt;td&gt;Not directly supported. In order to scan a Docker image, it must first be saved as a tar archive.&lt;/td&gt;
&lt;td&gt;Supported.&lt;/td&gt;
&lt;td&gt;Supported.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CI/CD integrations&lt;/td&gt;
&lt;td&gt;Not great support. A lot of samples and instructions for old-fashioned Jenkins in the official docs. For GitHub, only an unmaintained community action. However, the list of supported CI/CDs is growing, so I hope this will change in the future.&lt;/td&gt;
&lt;td&gt;Limited list of integrations. As for GitHub, you can use the official GitHub Action with the built-in JFrog CLI.&lt;/td&gt;
&lt;td&gt;Support for most modern CI/CDs, including an official GitHub Action and AWS CodeBuild.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;IDE integrations&lt;/td&gt;
&lt;td&gt;Provided for IntelliJ, VSC, and Eclipse, with a few shortcomings like worse dependency filtering for Gradle projects.&lt;/td&gt;
&lt;td&gt;Provided for VSC, most JetBrains IDEs, and other niche players. For IntelliJ it has some nice features.&lt;/td&gt;
&lt;td&gt;Provided for VSC, VS, JetBrains, and Eclipse. Very convenient and clear in IntelliJ. Also supports static code analysis.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CLI support&lt;/td&gt;
&lt;td&gt;Available with a few shortcomings. You always have to pass credentials to each command. The output returns a link to a rich report in IQ Server but can’t generate a well-formatted summary in the CLI output.&lt;/td&gt;
&lt;td&gt;Very powerful JFrog CLI. Great, well-formatted scan summary in the console output. Poor CLI documentation on the website.&lt;/td&gt;
&lt;td&gt;Powerful CLI. Authentication with a Snyk account. The CLI scan can export results in SARIF format!&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vulnerabilities information&lt;/td&gt;
&lt;td&gt;Rich and well-structured information about vulnerabilities. Useful data about risk and the attack vector. Info on how to determine whether you are vulnerable. Recommendations on how to fix or work around issues. Flagship component called Version Graph.&lt;/td&gt;
&lt;td&gt;Has all the important information about risk, attack vectors, and advice on how to deal with a vulnerability. The information is clear.&lt;/td&gt;
&lt;td&gt;Probably the least informative of the three tools. However, everything that matters most is there. When it comes to the presentation layer, there aren’t too many bells and whistles here. It may be less readable due to the small font used.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open source license scanning&lt;/td&gt;
&lt;td&gt;Extensive license management options. Blacklisting of individual licenses, manual verification, etc.&lt;/td&gt;
&lt;td&gt;Just like Sonatype, a wide range of license management options.&lt;/td&gt;
&lt;td&gt;A very similar solution to the other tested tools. It seems poorer and less complex, but therefore easier to use.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;File reports&lt;/td&gt;
&lt;td&gt;Possibility to generate a PDF report. Too bad you can't export to HTML. I couldn’t find an option to generate an ad-hoc report for more than one artifact/repo.&lt;/td&gt;
&lt;td&gt;Possibility to generate a PDF, JSON, or CSV report. No HTML option. You can include more than one project/artifact in a report and apply advanced filters.&lt;/td&gt;
&lt;td&gt;In the Snyk web UI you can only export as CSV. The CLI can produce JSON or SARIF scanning results.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Notifications &amp;amp; alerting&lt;/td&gt;
&lt;td&gt;Limited number of built-in notification integrations - only Email and JIRA. Fortunately, it is possible to configure a Webhook.&lt;/td&gt;
&lt;td&gt;Only Email &amp;amp; Webhook notifications.&lt;/td&gt;
&lt;td&gt;Possible to set up Email alerting, built-in Slack notifications, a custom Webhook, or even JIRA integration.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Static code analysis&lt;/td&gt;
&lt;td&gt;Not supported.&lt;/td&gt;
&lt;td&gt;Not supported.&lt;/td&gt;
&lt;td&gt;Supported via Snyk Code. Kotlin support is still in beta. With Snyk IaC you can also scan Kubernetes and Terraform configuration files.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web browser UX&lt;/td&gt;
&lt;td&gt;Mixed feelings. On the one hand, the new UI looks clean and is easy on the eyes; on the other, it often took me a long time to find things. I think the main tabs could be better named and grouped. The old UI can still be enabled somewhere in the settings.&lt;/td&gt;
&lt;td&gt;My first impression was good, but I tested JFrog first. Perhaps if I had the opportunity to test it again, I would have a more reliable opinion.&lt;/td&gt;
&lt;td&gt;Well thought-out and intuitive. My only objection is the small, hard-to-read fonts.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Documentation&lt;/td&gt;
&lt;td&gt;A poor impression. The documentation contains a lot of old, deprecated-looking content, and Googling for something often lands you on a marketing page rather than a proper documentation page.&lt;/td&gt;
&lt;td&gt;As with Sonatype, the documentation looks poor. It's hard to find anything navigating through their pages: many articles, but poorly organized.&lt;/td&gt;
&lt;td&gt;A great example of how documentation should look; truly one of the best I've dealt with. Well structured, guiding you step by step from the most important things down to the details, with lots of graphics and what developers like most: code samples.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;It is easy to see in the &lt;a href="https://miro.com/app/board/uXjVP8v3Ehg=/"&gt;colored table&lt;/a&gt; that &lt;strong&gt;Snyk&lt;/strong&gt; won me over. IMHO, Snyk fits modern projects based on microservices, Docker, and CI/CD tools such as GitHub Actions. &lt;strong&gt;Sonatype&lt;/strong&gt; turned out to be the least suitable for my project. However, I wouldn't completely reject this tool; I think its advantages will show in projects that use Jenkins or private package repositories.&lt;/p&gt;

&lt;p&gt;In the end, however, we didn't choose any of them. Instead, we decided to build our own security scanning pipeline based on open source tools such as &lt;code&gt;Trivy&lt;/code&gt; and &lt;code&gt;Syft&lt;/code&gt;.&lt;/p&gt;
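&lt;p&gt;To give a rough idea of what such a pipeline step can look like, here is a minimal sketch. The image name is a placeholder, and the exact flags may differ between Trivy/Syft versions:&lt;/p&gt;

```shell
# scan a container image for HIGH/CRITICAL vulnerabilities with Trivy
trivy image --severity HIGH,CRITICAL myapp:latest

# generate an SBOM for the same image with Syft (SPDX JSON output)
syft myapp:latest -o spdx-json
```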

&lt;p&gt;As I mentioned earlier, the above comparison is my personal assessment. My conclusions are subjective, as I did this research to find the right tool for my project; for a different tech stack, the conclusions might differ. I made the comparison in August 2022, so if you are reading this much later, know that some things may be out of date.&lt;/p&gt;

</description>
      <category>snyk</category>
      <category>jfrog</category>
      <category>sonatype</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Injecting secrets from Vault into Helm charts with ArgoCD</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Wed, 07 Dec 2022 06:34:06 +0000</pubDate>
      <link>https://dev.to/luafanti/injecting-secrets-from-vault-into-helm-charts-with-argocd-49k</link>
      <guid>https://dev.to/luafanti/injecting-secrets-from-vault-into-helm-charts-with-argocd-49k</guid>
<description>&lt;p&gt;Managing secrets in Kubernetes isn't a trivial topic. As usual with Kubernetes, there are many ways to achieve the desired goal, and choosing the right one for our case is often a problem. In this article, I will show you one of the ways to use ArgoCD, with the help of the &lt;a href="https://argocd-vault-plugin.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Vault Plugin&lt;/a&gt;, to inject secrets into Helm Charts. As you can guess, this may be one of the best approaches if you are already using ArgoCD and Vault in your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Basic knowledge of Kubernetes and Helm is assumed here. For ArgoCD and Vault, I will guide you step by step in this article.&lt;br&gt;
We will use the CLI wherever possible, so I recommend installing all of the following:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# Helm&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;helm

&lt;span class="c"&gt;# Vault&lt;/span&gt;
brew tap hashicorp/tap
brew &lt;span class="nb"&gt;install &lt;/span&gt;hashicorp/tap/vault

&lt;span class="c"&gt;#ArgoCD&lt;/span&gt;
brew &lt;span class="nb"&gt;install &lt;/span&gt;argocd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;For structure, we will create two namespaces. The first, technical one will hold the Vault and ArgoCD instances; in the second, we will install the target Helm charts with injected secrets.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# namespace for Vault &amp;amp; ArgoCD&lt;/span&gt;
kubectl create ns toolbox

&lt;span class="c"&gt;# namespace for resoruces installed by ArgoCD&lt;/span&gt;
kubectl create ns sandbox


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I also encourage you to install &lt;a href="https://github.com/ahmetb/kubectx" rel="noopener noreferrer"&gt;kubectx + kubens&lt;/a&gt; to navigate Kubernetes easily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vault installation
&lt;/h2&gt;

&lt;p&gt;To begin, select the &lt;code&gt;toolbox&lt;/code&gt; namespace.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubens toolbox


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To install Vault we will use the official &lt;a href="https://github.com/hashicorp/vault-helm" rel="noopener noreferrer"&gt;Helm chart&lt;/a&gt; provided by HashiCorp. For simplicity, we will install it in dev mode. In dev mode, Vault doesn't need to be initialized or unsealed, but remember that this is only for development or experimentation. Never, ever run dev mode in production.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

helm &lt;span class="nb"&gt;install &lt;/span&gt;vault hashicorp/vault &lt;span class="nt"&gt;--set&lt;/span&gt; &lt;span class="s2"&gt;"server.dev.enabled=true"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Vault can be configured via the HTTP API, UI, or CLI. To operate Vault from the local CLI, establish port forwarding to the &lt;code&gt;vault-0&lt;/code&gt; Pod and set the Vault server address for the previously installed Vault CLI.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# port forwarding in separate terminal window&lt;/span&gt;
kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; toolbox vault-0 8200

&lt;span class="c"&gt;# login into Vault. Use 'root' token to authenticate into Vault&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VAULT_ADDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://127.0.0.1:8200
vault login 



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;I also encourage you to explore Vault's browser UI. Use the same &lt;code&gt;root&lt;/code&gt; token generated in dev mode.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vault setup
&lt;/h2&gt;

&lt;p&gt;Vault uses &lt;a href="https://developer.hashicorp.com/vault/docs/secrets" rel="noopener noreferrer"&gt;Secrets Engines&lt;/a&gt; to store, generate, or encrypt data. The basic Secret Engine for storing static secrets is &lt;a href="https://developer.hashicorp.com/vault/docs/secrets/kv/kv-v2" rel="noopener noreferrer"&gt;Key-Value&lt;/a&gt; engine. Let’s create one sample secret that we’ll inject later into Helm Charts.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# enable kv-v2 engine in Vault&lt;/span&gt;
vault secrets &lt;span class="nb"&gt;enable &lt;/span&gt;kv-v2

&lt;span class="c"&gt;# create kv-v2 secret with two keys&lt;/span&gt;
vault kv put kv-v2/demo &lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"secret_user"&lt;/span&gt; &lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"secret_password"&lt;/span&gt;

&lt;span class="c"&gt;# create policy to enable reading above secret&lt;/span&gt;
vault policy write demo - &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
path "kv-v2/data/demo" {
  capabilities = ["read"]
}
&lt;/span&gt;&lt;span class="no"&gt;EOF


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we need to create a role that will authenticate ArgoCD to Vault. We have already covered Vault's Secrets Engines. &lt;a href="https://developer.hashicorp.com/vault/docs/auth" rel="noopener noreferrer"&gt;Auth methods&lt;/a&gt; are another type of Vault component, used for assigning an identity and a set of policies to a user or app. Since we are running on Kubernetes, we will use the &lt;a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes" rel="noopener noreferrer"&gt;Kubernetes Auth Method&lt;/a&gt; to configure access to Vault. Let's configure this auth method.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# enable Kubernetes Auth Method&lt;/span&gt;
vault auth &lt;span class="nb"&gt;enable &lt;/span&gt;kubernetes

&lt;span class="c"&gt;# get Kubernetes host address&lt;/span&gt;
&lt;span class="nv"&gt;K8S_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt; kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;env&lt;/span&gt; | &lt;span class="nb"&gt;grep &lt;/span&gt;KUBERNETES_PORT_443_TCP_ADDR| &lt;span class="nb"&gt;cut&lt;/span&gt; &lt;span class="nt"&gt;-f2&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="s1"&gt;'='&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;:443"&lt;/span&gt;

&lt;span class="c"&gt;# get Service Account token from Vault Pod&lt;/span&gt;
&lt;span class="nv"&gt;SA_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;cat&lt;/span&gt; /var/run/secrets/kubernetes.io/serviceaccount/token&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# get Service Account CA certificate from Vault Pod&lt;/span&gt;
&lt;span class="nv"&gt;SA_CERT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nb"&gt;cat&lt;/span&gt; /var/run/secrets/kubernetes.io/serviceaccount/ca.crt&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# configure Kubernetes Auth Method&lt;/span&gt;
vault write auth/kubernetes/config &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;token_reviewer_jwt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$SA_TOKEN&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;kubernetes_host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$K8S_HOST&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nv"&gt;kubernetes_ca_cert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$SA_CERT&lt;/span&gt;

&lt;span class="c"&gt;# create authenticate Role for ArgoCD&lt;/span&gt;
vault write auth/kubernetes/role/argocd &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;bound_service_account_names&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;argocd-repo-server &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;bound_service_account_namespaces&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;toolbox &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;policies&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;demo &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nv"&gt;ttl&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;48h


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's all for now in Vault. Once you have created all the components, you can try to find them in the browser interface at &lt;a href="http://localhost:8200/" rel="noopener noreferrer"&gt;http://localhost:8200/&lt;/a&gt;.&lt;/p&gt;
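&lt;p&gt;You can also verify the setup from the CLI. Assuming the port forwarding and &lt;code&gt;vault login&lt;/code&gt; from the previous step are still active, these read-only commands should succeed:&lt;/p&gt;

```shell
# read back the sample secret
vault kv get kv-v2/demo

# inspect the policy and the Kubernetes auth role we created
vault policy read demo
vault read auth/kubernetes/role/argocd
```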

&lt;h2&gt;
  
  
  ArgoCD &amp;amp; Vault Plugin Installation
&lt;/h2&gt;

&lt;p&gt;Time for the main actor of this article: the &lt;a href="https://github.com/argoproj-labs/argocd-vault-plugin" rel="noopener noreferrer"&gt;Argo CD Vault Plugin&lt;/a&gt;. It will be responsible for injecting secrets from Vault into Helm Charts. Besides Helm Charts, the plugin can also handle secret injection into plain Kubernetes manifests or &lt;code&gt;Kustomize&lt;/code&gt; templates; here we will focus only on Helm Charts. Different sources require different installation steps, which you can find in the plugin documentation.&lt;/p&gt;

&lt;p&gt;What makes plugin documentation less clear is that it can be installed in two ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installation via &lt;code&gt;argocd-cm&lt;/code&gt; ConfigMap (old option, deprecated from version &lt;code&gt;2.6.0&lt;/code&gt; of ArgoCD)&lt;/li&gt;
&lt;li&gt;Installation via a sidecar container (new option, supported from version &lt;code&gt;2.4.0&lt;/code&gt; of ArgoCD)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since the old option will not be supported in future releases, I will install the ArgoCD Vault Plugin using a sidecar container. To properly install and configure ArgoCD, we need to follow a few steps:&lt;/p&gt;

&lt;p&gt;First of all, make sure you are still in the &lt;code&gt;toolbox&lt;/code&gt; namespace, where we want to place Vault, ArgoCD, and everything related to the Vault plugin.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubens toolbox


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Create a Kubernetes &lt;code&gt;Secret&lt;/code&gt; with the authentication configuration that the Vault plugin will use.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-vault-plugin-credentials&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;AVP_AUTH_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k8s"&lt;/span&gt;
  &lt;span class="na"&gt;AVP_K8S_ROLE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;argocd"&lt;/span&gt;
  &lt;span class="na"&gt;AVP_TYPE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vault"&lt;/span&gt;
  &lt;span class="na"&gt;VAULT_ADDR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://vault.toolbox:8200"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Make sure you set the proper Vault address and role name.&lt;/p&gt;
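&lt;p&gt;Assuming you saved the manifest above as &lt;code&gt;avp-credentials.yaml&lt;/code&gt; (the filename is arbitrary), apply it to the &lt;code&gt;toolbox&lt;/code&gt; namespace:&lt;/p&gt;

```shell
# create the Secret that the Vault plugin sidecar will read its configuration from
kubectl apply -n toolbox -f avp-credentials.yaml
```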

&lt;ol&gt;
&lt;li&gt;Create a Kubernetes &lt;code&gt;ConfigMap&lt;/code&gt; with the Vault plugin configuration that will be mounted in the sidecar container and override ArgoCD's default processing of Helm Charts. Look carefully at this configuration file. Under the &lt;code&gt;init&lt;/code&gt; command, you can see that we add the Bitnami Helm repo and execute &lt;code&gt;helm dependency build&lt;/code&gt;. This is required if the Charts you install use dependency charts; you can customize this step or remove it if your Charts have no dependencies.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cmp-plugin&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;plugin.yaml&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;apiVersion: argoproj.io/v1alpha1&lt;/span&gt;
    &lt;span class="s"&gt;kind: ConfigManagementPlugin&lt;/span&gt;
    &lt;span class="s"&gt;metadata:&lt;/span&gt;
      &lt;span class="s"&gt;name: argocd-vault-plugin-helm&lt;/span&gt;
    &lt;span class="s"&gt;spec:&lt;/span&gt;
      &lt;span class="s"&gt;allowConcurrency: true&lt;/span&gt;
      &lt;span class="s"&gt;discover:&lt;/span&gt;
        &lt;span class="s"&gt;find:&lt;/span&gt;
          &lt;span class="s"&gt;command:&lt;/span&gt;
            &lt;span class="s"&gt;- sh&lt;/span&gt;
            &lt;span class="s"&gt;- "-c"&lt;/span&gt;
            &lt;span class="s"&gt;- "find . -name 'Chart.yaml' &amp;amp;&amp;amp; find . -name 'values.yaml'"&lt;/span&gt;
      &lt;span class="s"&gt;init:&lt;/span&gt;
       &lt;span class="s"&gt;command:&lt;/span&gt;
          &lt;span class="s"&gt;- bash&lt;/span&gt;
          &lt;span class="s"&gt;- "-c"&lt;/span&gt;
          &lt;span class="s"&gt;- |&lt;/span&gt;
            &lt;span class="s"&gt;helm repo add bitnami https://charts.bitnami.com/bitnami&lt;/span&gt;
            &lt;span class="s"&gt;helm dependency build&lt;/span&gt;
      &lt;span class="s"&gt;generate:&lt;/span&gt;
        &lt;span class="s"&gt;command:&lt;/span&gt;
          &lt;span class="s"&gt;- bash&lt;/span&gt;
          &lt;span class="s"&gt;- "-c"&lt;/span&gt;
          &lt;span class="s"&gt;- |&lt;/span&gt;
            &lt;span class="s"&gt;helm template $ARGOCD_APP_NAME -n $ARGOCD_APP_NAMESPACE -f &amp;lt;(echo "$ARGOCD_ENV_HELM_VALUES") . |&lt;/span&gt;
            &lt;span class="s"&gt;argocd-vault-plugin generate -s toolbox:argocd-vault-plugin-credentials -&lt;/span&gt;
      &lt;span class="s"&gt;lockRepo: false&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Finally, we have to install ArgoCD from the official &lt;a href="https://github.com/argoproj/argo-helm" rel="noopener noreferrer"&gt;Helm Chart&lt;/a&gt;, but with extra configuration that applies the modifications required to install the Vault plugin as a sidecar container.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;repoServer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rbac&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;get&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;list&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;watch&lt;/span&gt;
      &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;secrets&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;configmaps&lt;/span&gt;
  &lt;span class="na"&gt;initContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;download-tools&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry.access.redhat.com/ubi8&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AVP_VERSION&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.11.0&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;sh&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;-c&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
          &lt;span class="s"&gt;curl -L https://github.com/argoproj-labs/argocd-vault-plugin/releases/download/v$(AVP_VERSION)/argocd-vault-plugin_$(AVP_VERSION)_linux_amd64 -o argocd-vault-plugin &amp;amp;&amp;amp;&lt;/span&gt;
          &lt;span class="s"&gt;chmod +x argocd-vault-plugin &amp;amp;&amp;amp;&lt;/span&gt;
          &lt;span class="s"&gt;mv argocd-vault-plugin /custom-tools/&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/custom-tools&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;

  &lt;span class="na"&gt;extraContainers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;avp-helm&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;/var/run/argocd/argocd-cmp-server&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/argoproj/argocd:v2.4.8&lt;/span&gt;
      &lt;span class="na"&gt;securityContext&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;runAsNonRoot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;runAsUser&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;999&lt;/span&gt;
      &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/run/argocd&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;var-files&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/home/argocd/cmp-server/plugins&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;plugins&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tmp-dir&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/home/argocd/cmp-server/config&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cmp-plugin&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;
          &lt;span class="na"&gt;subPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-vault-plugin&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/usr/local/bin/argocd-vault-plugin&lt;/span&gt;

  &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;configMap&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cmp-plugin&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cmp-plugin&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;custom-tools&lt;/span&gt;
      &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;tmp-dir&lt;/span&gt;
      &lt;span class="na"&gt;emptyDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;

&lt;span class="c1"&gt;# If you face issue with ArgoCD CRDs installation, then uncomment below section to disable it&lt;/span&gt;
&lt;span class="c1"&gt;#crds:&lt;/span&gt;
&lt;span class="c1"&gt;#  install: false&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Save these Helm values as &lt;code&gt;argocd-helm-values.yaml&lt;/code&gt; and execute the commands below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# once againe make sure to use proper namespace&lt;/span&gt;
kubens toolbox

&lt;span class="c"&gt;# install ArgoCD with provided vaules&lt;/span&gt;
helm repo add argo https://argoproj.github.io/argo-helm
helm &lt;span class="nb"&gt;install &lt;/span&gt;argocd argo/argo-cd &lt;span class="nt"&gt;-n&lt;/span&gt; toolbox &lt;span class="nt"&gt;-f&lt;/span&gt; argocd-helm-values.yaml


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can find all of the above configuration in a dedicated &lt;a href="https://github.com/luafanti/arogcd-vault-plugin-with-helm" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If all went well, you should see a similar list of Pods in the &lt;code&gt;toolbox&lt;/code&gt; namespace. Note that &lt;code&gt;argocd-repo-server&lt;/code&gt; has the sidecar container &lt;code&gt;avp-helm&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38e8mqc21tw4f64rcgvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38e8mqc21tw4f64rcgvu.png" alt="pod-list-in-lens"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install your resources with secrets injection
&lt;/h2&gt;

&lt;p&gt;It is time for a final check of our setup and the installation of our Helm Charts.&lt;/p&gt;

&lt;p&gt;First, let's try to authenticate against ArgoCD. To obtain the &lt;code&gt;admin&lt;/code&gt; user password, execute the command below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; toolbox get secret argocd-initial-admin-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As with Vault, we will work with ArgoCD partly via the CLI and partly via the web UI.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# port forwarding in separate terminal window&lt;/span&gt;
kubectl port-forward svc/argocd-server 8080:80

&lt;span class="c"&gt;# authorize ArgoCD CLI&lt;/span&gt;
argocd login localhost:8080 &lt;span class="nt"&gt;--username&lt;/span&gt; admin &lt;span class="nt"&gt;--password&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get secret argocd-initial-admin-secret &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.data.password}"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As our demo Chart, we will use my debug Spring Boot application from this &lt;a href="https://github.com/luafanti/spring-boot-debug-app" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;. It's a simple web server that exposes a few debugging endpoints. The application has Helm templates and an ArgoCD Application definition under the &lt;code&gt;/infra&lt;/code&gt; directory. To deploy this stack to Kubernetes with Argo, we need to apply the ArgoCD &lt;code&gt;Application&lt;/code&gt; CRD. Below is the full code sample, which you can also explore &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/main/infra/argocd/argocd-application-with-vault-secrets.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sandbox&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;infra/helm&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/luafanti/spring-boot-debug-app&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
    &lt;span class="na"&gt;plugin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HELM_VALUES&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;serviceAccount:&lt;/span&gt;
              &lt;span class="s"&gt;create: true&lt;/span&gt;
            &lt;span class="s"&gt;image:&lt;/span&gt;
              &lt;span class="s"&gt;repository: luafanti/spring-boot-debug-app&lt;/span&gt;
              &lt;span class="s"&gt;tag: main&lt;/span&gt;
              &lt;span class="s"&gt;pullPolicy: IfNotPresent&lt;/span&gt;
            &lt;span class="s"&gt;replicaCount: 1&lt;/span&gt;
            &lt;span class="s"&gt;resources:&lt;/span&gt;
              &lt;span class="s"&gt;memoryRequest: 256Mi&lt;/span&gt;
              &lt;span class="s"&gt;memoryLimit: 512Mi&lt;/span&gt;
              &lt;span class="s"&gt;cpuRequest: 500m&lt;/span&gt;
              &lt;span class="s"&gt;cpuLimit: 1 &lt;/span&gt;
            &lt;span class="s"&gt;probes:&lt;/span&gt;
              &lt;span class="s"&gt;liveness:&lt;/span&gt;
                &lt;span class="s"&gt;initialDelaySeconds: 15&lt;/span&gt;
                &lt;span class="s"&gt;path: /actuator/health/liveness&lt;/span&gt;
                &lt;span class="s"&gt;failureThreshold: 3&lt;/span&gt;
                &lt;span class="s"&gt;successThreshold: 1&lt;/span&gt;
                &lt;span class="s"&gt;timeoutSeconds: 3&lt;/span&gt;
                &lt;span class="s"&gt;periodSeconds: 5&lt;/span&gt;
              &lt;span class="s"&gt;readiness:&lt;/span&gt;
                &lt;span class="s"&gt;initialDelaySeconds: 15&lt;/span&gt;
                &lt;span class="s"&gt;path: /actuator/health/readiness&lt;/span&gt;
                &lt;span class="s"&gt;failureThreshold: 3&lt;/span&gt;
                &lt;span class="s"&gt;successThreshold: 1&lt;/span&gt;
                &lt;span class="s"&gt;timeoutSeconds: 3&lt;/span&gt;
                &lt;span class="s"&gt;periodSeconds: 5&lt;/span&gt;
            &lt;span class="s"&gt;ports:&lt;/span&gt;
              &lt;span class="s"&gt;http:&lt;/span&gt;
                &lt;span class="s"&gt;name: http&lt;/span&gt;
                &lt;span class="s"&gt;value: 8080&lt;/span&gt;
              &lt;span class="s"&gt;management:&lt;/span&gt;
                &lt;span class="s"&gt;name: management&lt;/span&gt;
                &lt;span class="s"&gt;value: 8081&lt;/span&gt;
            &lt;span class="s"&gt;envs:&lt;/span&gt;
              &lt;span class="s"&gt;- name: VAULT_SECRET_USER&lt;/span&gt;
                &lt;span class="s"&gt;value: &amp;lt;path:kv-v2/data/demo#user&amp;gt;&lt;/span&gt;
              &lt;span class="s"&gt;- name: VAULT_SECRET_PASSWORD&lt;/span&gt;
                &lt;span class="s"&gt;value: &amp;lt;path:kv-v2/data/demo#password&amp;gt;&lt;/span&gt;
            &lt;span class="s"&gt;log:&lt;/span&gt;
              &lt;span class="s"&gt;level:&lt;/span&gt;
                &lt;span class="s"&gt;spring: "info"&lt;/span&gt;
                &lt;span class="s"&gt;service: "info"&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In lines &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/fa3f747b2e9f5cd47676d210fc2a79d01b74b2b5/infra/argocd/argocd-application-with-vault-secrets.yaml#L54" rel="noopener noreferrer"&gt;54&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/fa3f747b2e9f5cd47676d210fc2a79d01b74b2b5/infra/argocd/argocd-application-with-vault-secrets.yaml#L56" rel="noopener noreferrer"&gt;56&lt;/a&gt; you can see placeholders with the pattern &lt;code&gt;&amp;lt;path:vault_secret_path#secret_key&amp;gt;&lt;/code&gt;, where the Vault Plugin will inject the actual values from the Vault secret.&lt;br&gt;
I also encourage you to compare this definition file with a definition that uses neither secret injection nor the Vault Plugin, available &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/main/infra/argocd/argocd-application.yaml" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You should notice that the &lt;code&gt;source&lt;/code&gt; property differs when we use secret injection. When we want to leverage the Vault Plugin, we need to define our Argo &lt;code&gt;Application&lt;/code&gt; with the source &lt;a href="https://github.com/luafanti/spring-boot-debug-app/blob/main/infra/argocd/argocd-application-with-vault-secrets.yaml#L14" rel="noopener noreferrer"&gt;plugin&lt;/a&gt; and pass the Helm values via the &lt;code&gt;HELM_VALUES&lt;/code&gt; env variable.&lt;/p&gt;
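
&lt;p&gt;To build an intuition for what happens at render time, here is a purely illustrative shell sketch (the secret value &lt;code&gt;secret_user&lt;/code&gt; is made up) that mimics the substitution the plugin performs on a rendered manifest:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# Illustrative only: emulate the substitution argocd-vault-plugin
# performs after Helm renders the manifests.
template='value: &amp;lt;path:kv-v2/data/demo#user&amp;gt;'

# Pretend this value was fetched from Vault at path kv-v2/data/demo, key "user"
secret_from_vault='secret_user'

printf '%s\n' "$template" | sed "s|&amp;lt;path:kv-v2/data/demo#user&amp;gt;|${secret_from_vault}|"
# prints: value: secret_user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;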

&lt;p&gt;Let’s install this Argo &lt;code&gt;Application&lt;/code&gt; and sync it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# make sure you are in namespace where Argo has benn installed&lt;/span&gt;
kubens toolbox

&lt;span class="c"&gt;# once you download soruce from GIT repo&lt;/span&gt;
kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; infra/argocd/argocd-application-with-vault-secrets.yaml

&lt;span class="c"&gt;# List ArgoCD applications&lt;/span&gt;
argocd app list

&lt;span class="c"&gt;# Sync application&lt;/span&gt;
argocd app &lt;span class="nb"&gt;sync &lt;/span&gt;toolbox/demo


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once synchronization is finished, you should see a beautiful, all-green screen in the ArgoCD UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuannx74gs4iswwuc8ygs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuannx74gs4iswwuc8ygs.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Verify that the injection works.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# use port other than 8080 as the tunnel to Argo already uses this port&lt;/span&gt;
kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; sandbox svc/demo-spring-debug-app 8090:8080

&lt;span class="c"&gt;# check injected envs 'VAULT_SECRET_PASSWORD' 'VAULT_SECRET_USER' in debug app &lt;/span&gt;
chrome http://localhost:8090/envs


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One of the greatest things about the plugin is that if the value changes in Vault, ArgoCD will notice these changes and display &lt;code&gt;OutOfSync&lt;/code&gt; status. Let's prove it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

&lt;span class="c"&gt;# update secrets in Vault&lt;/span&gt;
vault kv put kv-v2/demo &lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"secret_user_new"&lt;/span&gt; &lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"secret_password_new"&lt;/span&gt;

&lt;span class="c"&gt;# refresh application as well with target manifests cache&lt;/span&gt;
argocd app get toolbox/demo &lt;span class="nt"&gt;--hard-refresh&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After the &lt;code&gt;Hard refresh&lt;/code&gt; you should see that your Argo Application is back in &lt;code&gt;OutOfSync&lt;/code&gt; status, which is expected after a Vault secret update. Thanks to this mechanism, you don't have to worry about losing control over keeping your secrets up to date.&lt;/p&gt;
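
&lt;p&gt;To close the loop after a secret rotation, you would sync the application again so the fresh Vault values are rendered into the manifests and rolled out. A short sketch, assuming the same &lt;code&gt;toolbox/demo&lt;/code&gt; names used above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# re-render manifests (with the new Vault values) and apply them
argocd app sync toolbox/demo

# wait until the rollout finishes and the app is healthy again
argocd app wait toolbox/demo --health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;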

&lt;h2&gt;
  
  
  Troubleshooting &amp;amp; possible problems
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;make sure your Vault secrets don’t disappear from Vault. In this guide, we use Vault in &lt;code&gt;dev mode&lt;/code&gt;, so secrets are stored in memory. After a cluster reboot, all Vault objects will disappear.&lt;/li&gt;
&lt;li&gt;if you would like to use different namespace names for Vault/ArgoCD etc. make sure you adjust your configuration files properly, especially &lt;a href="https://github.com/luafanti/arogcd-vault-plugin-with-helm/blob/main/argocd-installation/argocd-vault-plugin-cmp.yaml#L32" rel="noopener noreferrer"&gt;HERE&lt;/a&gt; &amp;amp; &lt;a href="https://github.com/luafanti/arogcd-vault-plugin-with-helm/blob/840e8e96ec483739668237166b57cb83d73895dd/argocd-installation/argocd-vault-plugin-credentials.yaml#L10" rel="noopener noreferrer"&gt;HERE&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sometimes, if you install ArgoCD multiple times in your cluster, you can face errors related to CRDs. You can uncomment &lt;a href="https://github.com/luafanti/arogcd-vault-plugin-with-helm/blob/840e8e96ec483739668237166b57cb83d73895dd/argocd-installation/argocd-helm-values.yaml#L57" rel="noopener noreferrer"&gt;this section&lt;/a&gt; to resolve it.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>vault</category>
      <category>kubernetes</category>
      <category>argocd</category>
      <category>helm</category>
    </item>
    <item>
      <title>Vault Auto-unseal using Transit Secret Engine on Kubernetes</title>
      <dc:creator>Artur Bartosik</dc:creator>
      <pubDate>Fri, 02 Dec 2022 08:35:56 +0000</pubDate>
      <link>https://dev.to/luafanti/vault-auto-unseal-using-transit-secret-engine-on-kubernetes-13k8</link>
      <guid>https://dev.to/luafanti/vault-auto-unseal-using-transit-secret-engine-on-kubernetes-13k8</guid>
      <description>&lt;h2&gt;
  
  
  Theoretical introduction
&lt;/h2&gt;

&lt;p&gt;To make the Vault operational once it has been installed, we need to perform two actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initialize Vault&lt;/li&gt;
&lt;li&gt;Unseal Vault&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unsealing has to happen every time Vault starts. This is because Vault starts in a sealed state, in which it can't read its storage because it doesn't know how to decrypt it.&lt;/p&gt;

&lt;p&gt;Initialization happens once, when the server is started with a new backend. During initialization, Vault generates a bunch of keys:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;unseal keys&lt;/li&gt;
&lt;li&gt;encryption keys&lt;/li&gt;
&lt;li&gt;root token&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you can easily guess, the &lt;code&gt;unseal keys&lt;/code&gt; generated during initialization are used for unsealing the Vault.&lt;/p&gt;

&lt;p&gt;We can distinguish three options for unsealing the Vault:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual unsealing&lt;/li&gt;
&lt;li&gt;Auto-unseal&lt;/li&gt;
&lt;li&gt;Transit Unseal - de facto one of the Auto-unseal options&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Manual unsealing
&lt;/h3&gt;

&lt;p&gt;Manual unsealing is the simplest option and doesn't require any additional configuration. Vault generates a root key (not to be confused with the root token) and uses the &lt;a href="https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing" rel="noopener noreferrer"&gt;Shamir's Secret Sharing&lt;/a&gt; algorithm to split the key into shares. During initialization, we can determine how many key shares will be needed to unseal the Vault. This is a cloud-agnostic and very flexible option, but it can become painful when you have many Vault clusters, many keys, and many key holders.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusm8gaqrjxian15r5o30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusm8gaqrjxian15r5o30.png" alt="Vault Manual unsealing diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Auto-unseal
&lt;/h3&gt;

&lt;p&gt;Auto-unseal reduces operational complexity and makes management less painful. In this approach, we delegate the responsibility of securing the unseal key from users to a trusted device or service. A Vault with Auto-unseal takes care of unsealing itself, so we no longer need to worry about it as long as the service we have configured is available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2zucllabny3mdkdqkgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm2zucllabny3mdkdqkgo.png" alt="Vault Auto-unseal diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The services and devices supported for Auto-unseal can be found in the &lt;a href="https://developer.hashicorp.com/vault/docs/configuration/seal" rel="noopener noreferrer"&gt;official docs&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Transit Auto-unseal
&lt;/h3&gt;

&lt;p&gt;We said that this is one of the Auto-unseal options. What makes this variant different is that we don’t rely on an external service, but on an external Vault itself. This way, we can designate one central Vault that will be responsible for Auto-unsealing other Vault instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtk3v6hapyex1ccy51qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtk3v6hapyex1ccy51qp.png" alt="Vault Transit Auto-unseal diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Transit Auto-unseal setup
&lt;/h2&gt;

&lt;p&gt;We are going to isolate our Vaults on the level of namespaces, so let's start with their creation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# namespace for Vault central&lt;/span&gt;
kubectl create ns vault

&lt;span class="c"&gt;# namespace for Vault with Transit Auto-unseal&lt;/span&gt;
kubectl create ns vault-a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s start with the installation of the central Vault. Save the Helm chart values below to install Vault in HA mode with default manual unsealing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;affinity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;ha&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
    &lt;span class="na"&gt;raft&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# change namespace to Vault central&lt;/span&gt;
kns vault

helm repo add hashicorp https://helm.releases.hashicorp.com
helm &lt;span class="nb"&gt;install &lt;/span&gt;vault hashicorp/vault &lt;span class="nt"&gt;-f&lt;/span&gt; vault-central-helm-values.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the initialization below, we split the root key into 4 shares (unseal keys). We can also set how many keys are required to reconstruct the root key, which is then used to decrypt the Vault's encryption key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator init &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-key-shares&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-key-threshold&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; vault-central-keys.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this is applied, Vault generates unseal keys encoded in base64 and hex, plus a root token responsible for authentication against Vault. We will use it later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unseal_keys_b64"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4Wm5BYsNal+zMbsb3ewNbi6zLtKIOXz3L+NFX7jw0/3T"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;miasg31FmPJqx9LrnPaVEuG639fvjAqZF3gp4ZlKw+wK"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;EyVw9nQH/T+3zsa4HbPJ2s15l6B5MizMKQlKqs9taFzX"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zc7eU9MEvy9AaV4FPSQe7Jla2LcqSjS8KNPFDlQs0Rcg"&lt;/span&gt;
  &lt;span class="pi"&gt;],&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unseal_keys_hex"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;e169b9058b0d6a5fb331bb1bddec0d6e2eb32ed288397cf72fe3455fb8f0d3fdd3"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;9a26ac837d4598f26ac7d2eb9cf69512e1badfd7ef8c0a99177829e1994ac3ec0a"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;132570f67407fd3fb7cec6b81db3c9dacd7997a079322ccc29094aaacf6d685cd7"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cdcede53d304bf2f40695e053d241eec995ad8b72a4a34bc28d3c50e542cd11720"&lt;/span&gt;
  &lt;span class="pi"&gt;],&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unseal_shares"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unseal_threshold"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recovery_keys_b64"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[],&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recovery_keys_hex"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[],&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recovery_keys_shares"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recovery_keys_threshold"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt;
  &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;root_token"&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hvs.NbXRWfYNI4PmA860aBlC4onU"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
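
&lt;p&gt;Rather than copy-pasting keys by hand, you can pull them out of &lt;code&gt;vault-central-keys.json&lt;/code&gt; programmatically. Below is a small sketch; the file content is a trimmed stand-in with fake keys, and it shells out to &lt;code&gt;python3&lt;/code&gt; purely to avoid depending on &lt;code&gt;jq&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# Stand-in for the vault-central-keys.json produced by `vault operator init`
cat &gt; /tmp/vault-central-keys.json &lt;&lt;'EOF'
{
  "unseal_keys_b64": ["keyAAAA", "keyBBBB", "keyCCCC", "keyDDDD"],
  "unseal_threshold": 2
}
EOF

# Print the first two keys (the threshold), space separated
python3 -c '
import json
with open("/tmp/vault-central-keys.json") as f:
    doc = json.load(f)
print(*doc["unseal_keys_b64"][:doc["unseal_threshold"]])
'
# prints: keyAAAA keyBBBB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;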



&lt;p&gt;Let’s unseal the first of the two Vault instances in the central cluster. Pass two of the four &lt;code&gt;unseal_keys_b64&lt;/code&gt; to the Vault to unseal it, according to the &lt;code&gt;key-threshold&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator unseal 4Wm5BYsNal+zMbsb3ewNbi6zLtKIOXz3L+NFX7jw0/3T
kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator unseal miasg31FmPJqx9LrnPaVEuG639fvjAqZF3gp4ZlKw+wK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have to do the same with the second instance, but before that, we must join the second Vault to the Raft storage cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-ti&lt;/span&gt; vault-1 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator raft &lt;span class="nb"&gt;join &lt;/span&gt;http://vault-0.vault-internal:8200

kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-1 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator unseal 4Wm5BYsNal+zMbsb3ewNbi6zLtKIOXz3L+NFX7jw0/3T
kubectl &lt;span class="nb"&gt;exec &lt;/span&gt;vault-1 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator unseal miasg31FmPJqx9LrnPaVEuG639fvjAqZF3gp4ZlKw+wK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Time to create the &lt;a href="https://developer.hashicorp.com/vault/docs/secrets/transit" rel="noopener noreferrer"&gt;Transit Secret Engine&lt;/a&gt;. This component will generate the key that we will use to Auto-unseal other Vaults.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# separate window&lt;/span&gt;
kubectl port-forward vault-0 &lt;span class="nt"&gt;-n&lt;/span&gt; vault 8200:8200

&lt;span class="c"&gt;# set Vault address to use locally Vault CLI&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;VAULT_ADDR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://127.0.0.1:8200

&lt;span class="c"&gt;# use 'root_token' generated during Vault initialization&lt;/span&gt;
vault login

&lt;span class="c"&gt;# create transit secret &lt;/span&gt;
vault secrets &lt;span class="nb"&gt;enable &lt;/span&gt;transit
vault write &lt;span class="nt"&gt;-f&lt;/span&gt; transit/keys/autounseal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the Vault policy below; we will attach it to the Auto-unseal token.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;path "transit/encrypt/autounseal" {&lt;/span&gt;
   &lt;span class="s"&gt;capabilities = [ "update" ]&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;path "transit/decrypt/autounseal" {&lt;/span&gt;
   &lt;span class="s"&gt;capabilities = [ "update" ]&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# create policy with the above definition&lt;/span&gt;
vault policy write autounseal autounseal-policy.hcl

&lt;span class="c"&gt;# create token for Auto-unsealing&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;vault token create &lt;span class="nt"&gt;-orphan&lt;/span&gt; &lt;span class="nt"&gt;-policy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;autounseal &lt;span class="nt"&gt;-period&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;24h

Key                  Value
&lt;span class="nt"&gt;---&lt;/span&gt;                  &lt;span class="nt"&gt;-----&lt;/span&gt;
token                hvs.CAESIP_A7TaC9kt4yUeqg5_bJNiOJElb4UbA01xoV9Rk4ei6Gh4KHGh2cy5zVXpaa3A1MG9uOEZrNXN2a3J0TGl0cHU
token_accessor       wkTM4nsF0ehkRvIuBD9cedHC
token_duration       24h
token_renewable      &lt;span class="nb"&gt;true
&lt;/span&gt;token_policies       &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"autounseal"&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
identity_policies    &lt;span class="o"&gt;[]&lt;/span&gt;
policies             &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"autounseal"&lt;/span&gt; &lt;span class="s2"&gt;"default"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, we have created a periodic orphan token that we will use for Auto-unsealing. Orphan means that the token doesn’t have a parent token, so it can’t be revoked together with an ancestor. It is important to note that the transit Auto-unseal token is renewed automatically by default.&lt;/p&gt;
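
&lt;p&gt;If you ever need to confirm that the token is still alive and being renewed, one quick check (a sketch; run it while logged in to the central Vault, and note the token value below is a placeholder for the one you generated) is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# inspect the token's ttl, period, and policies (placeholder token value)
vault token lookup hvs.your-autounseal-token
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;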

&lt;p&gt;Now it’s time to prepare the Helm chart for the second Vault installation. It will be a Vault with the Transit Auto-unseal configuration. Check the Helm values file below: provide the central Vault address and, of course, the token generated in the previous step.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;standalone&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;disable_mlock = true&lt;/span&gt;
      &lt;span class="s"&gt;ui=true&lt;/span&gt;

      &lt;span class="s"&gt;storage "file" {&lt;/span&gt;
        &lt;span class="s"&gt;path = "/vault/data"&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;

      &lt;span class="s"&gt;listener "tcp" {&lt;/span&gt;
        &lt;span class="s"&gt;address     = "127.0.0.1:8200"&lt;/span&gt;
        &lt;span class="s"&gt;tls_disable = "true"&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;

      &lt;span class="s"&gt;seal "transit" {&lt;/span&gt;
        &lt;span class="s"&gt;address = "http://vault.vault:8200"&lt;/span&gt;
        &lt;span class="s"&gt;token = "hvs.CAESIP_A7TaC9kt4yUeqg5_bJNiOJElb4UbA01xoV9Rk4ei6Gh4KHGh2cy5zVXpaa3A1MG9uOEZrNXN2a3J0TGl0cHU"&lt;/span&gt;
        &lt;span class="s"&gt;disable_renewal = "false"&lt;/span&gt;
        &lt;span class="s"&gt;key_name = "autounseal"&lt;/span&gt;
        &lt;span class="s"&gt;mount_path = "transit/"&lt;/span&gt;
        &lt;span class="s"&gt;tls_skip_verify = "true"&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# change namespace to Vault Auto-unseal&lt;/span&gt;
kns vault-a

helm &lt;span class="nb"&gt;install &lt;/span&gt;vault hashicorp/vault &lt;span class="nt"&gt;-f&lt;/span&gt; vault-auto-unseal-helm-values.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last step is to initialize Vault.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; vault-0 &lt;span class="nt"&gt;--&lt;/span&gt; vault operator init

Recovery Key 1: FFMLznSZq9wh/0CJwKLJWKkI9BrK/hjF6ySDYl9a19Ie
Recovery Key 2: qRfrdpkuEcXsF+dFh1Geru8VHkiL/hWUW+vY25twlwT1
Recovery Key 3: dX8sed7Dv8kI8kfFuYWDeQlagoikEVBpV5lZqH4ORnEh
Recovery Key 4: TCCplv+KvZHEOlICQU6eb67hGccufiqcZGkSiGlpQPkx
Recovery Key 5: ictL+c9czgMO+ME8qoTcGpgsvymEcORN7MkrpDE28x4a

Initial Root Token: hvs.6umGyyta9xrjq0q7Cv09Hr8X

Success! Vault is initialized
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;… and check its status to verify that &lt;code&gt;Sealed&lt;/code&gt; is &lt;code&gt;false&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; vault-a &lt;span class="nt"&gt;--&lt;/span&gt; vault status

Key                      Value
&lt;span class="nt"&gt;---&lt;/span&gt;                      &lt;span class="nt"&gt;-----&lt;/span&gt;
Recovery Seal Type       shamir
Initialized              &lt;span class="nb"&gt;true
&lt;/span&gt;Sealed                   &lt;span class="nb"&gt;false
&lt;/span&gt;Total Recovery Shares    5
Threshold                3
Version                  1.12.0
Build Date               2022-10-10T18:14:33Z
Storage Type             file
Cluster Name             vault-cluster-7a11a0ae
Cluster ID               a883d977-e70a-6367-3148-9c7a2c246897
HA Enabled               &lt;span class="nb"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each initialization of a Vault configured for Auto-unseal generates &lt;code&gt;Recovery keys&lt;/code&gt; instead of &lt;code&gt;Unseal Keys&lt;/code&gt;. Recovery keys can’t be used for unsealing the Vault. These keys perform only authorization functions, which allow, for example, generating a new root token.&lt;/p&gt;
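
&lt;p&gt;For instance, if the root token of the auto-unsealed Vault were lost, the recovery keys would let you mint a new one. A rough sketch of that flow (the values below are placeholders; the real commands print an OTP, a nonce, and an encoded token for you):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
# start the generation; note the OTP and nonce it prints
vault operator generate-root -init

# repeat once per recovery-key holder until the threshold is met
vault operator generate-root

# decode the encoded token printed at the end using the OTP
vault operator generate-root -decode=ENCODED_TOKEN -otp=OTP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;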

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;In the DevOps world, we want to automate everything possible and reduce operational complexity wherever possible. Undoubtedly, Auto-unseal fits this goal. However, from a security point of view, it is sometimes good to introduce a manual step with human intervention. The Transit Auto-unseal sample above is exactly a combination of both approaches - a manually unsealed central cluster and related clusters Auto-unsealed by it.&lt;br&gt;
It is also worth noting that our solution is cloud-agnostic. We don’t rely on any external service, so we can set it up on-premise. The downside is the introduction of a very crucial component into the overall deployment - the central Vault cluster. We definitely have to think about how to ensure its high availability and fault tolerance.&lt;/p&gt;

</description>
      <category>vault</category>
      <category>kubernetes</category>
      <category>helm</category>
    </item>
  </channel>
</rss>
