<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Eleni Grosdouli</title>
    <description>The latest articles on DEV Community by Eleni Grosdouli (@egrosdou).</description>
    <link>https://dev.to/egrosdou</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1235790%2F22806b36-381a-4506-b27e-8cf0873f043a.jpeg</url>
      <title>DEV Community: Eleni Grosdouli</title>
      <link>https://dev.to/egrosdou</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/egrosdou"/>
    <language>en</language>
    <item>
      <title>5-Step Approach: ProjectSveltos Event Framework for Kubernetes Deployment with Cilium Gateway API</title>
      <dc:creator>Eleni Grosdouli</dc:creator>
      <pubDate>Mon, 19 Feb 2024 15:39:44 +0000</pubDate>
      <link>https://dev.to/egrosdou/5-step-approach-projectsveltos-event-framework-for-kubernetes-deployment-with-cilium-gateway-api-3fi1</link>
      <guid>https://dev.to/egrosdou/5-step-approach-projectsveltos-event-framework-for-kubernetes-deployment-with-cilium-gateway-api-3fi1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the &lt;a href="https://medium.com/@eleni.grosdouli/argocd-deployment-on-rke2-with-cilium-gateway-api-ab1769cc28a3" rel="noopener noreferrer"&gt;previous post&lt;/a&gt;, we demonstrated how easy it is to deploy ArgoCD with the &lt;a href="https://docs.cilium.io/en/v1.14/network/servicemesh/gateway-api/gateway-api/" rel="noopener noreferrer"&gt;Cilium Gateway API&lt;/a&gt; and move away from the Kubernetes Ingress. In today’s post, we will explore how the &lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;Projectsveltos&lt;/a&gt; &lt;a href="https://projectsveltos.github.io/sveltos/events/addon_event_deployment/" rel="noopener noreferrer"&gt;Event Framework&lt;/a&gt; can be utilised to seamlessly perform the same deployment based on an event.&lt;/p&gt;

&lt;p&gt;The Sveltos Event Framework was designed to automate the deployment of Kubernetes add-ons triggered by specific cluster events. This functionality proves invaluable when managing environments spanning multiple clusters.&lt;/p&gt;

&lt;p&gt;Sveltos supports an event-driven workflow with the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Define what the event is&lt;/li&gt;
&lt;li&gt;Select the clusters to watch for such events&lt;/li&gt;
&lt;li&gt;Define an event trigger: which add-ons/applications to deploy when the event occurs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once we identify the event we want Sveltos to watch for, we can deploy the &lt;strong&gt;EventSource&lt;/strong&gt; and &lt;strong&gt;EventTrigger&lt;/strong&gt; Kubernetes CRDs (Custom Resource Definitions). Both resources are covered in more detail later in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkyh5wt3ufk8fxemf3m2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkyh5wt3ufk8fxemf3m2.png" alt="Sveltos Event Framework&amp;lt;br&amp;gt;
" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along, it is strongly advised to take a look at a previous &lt;a href="https://medium.com/@eleni.grosdouli/5-step-approach-projectsveltos-for-kubernetes-add-on-deployment-and-management-on-rke2-be3ba7acb24f" rel="noopener noreferrer"&gt;post&lt;/a&gt; detailing the deployment of Sveltos on a cluster. To install Sveltos, just follow the instructions outlined in steps 1 and 2 of the mentioned post. Additionally, as we will use the Cilium Gateway API, have a look at the previous &lt;a href="https://medium.com/@eleni.grosdouli/argocd-deployment-on-rke2-with-cilium-gateway-api-ab1769cc28a3" rel="noopener noreferrer"&gt;post&lt;/a&gt; to enable this capability on an RKE2 cluster.&lt;/p&gt;
&lt;h2&gt;
  
  
  Lab Setup
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;— — — — — -+ — — — — — — — — — — + — — — — — — — — — — — — -+
| Cluster Name | Type | Version |
+ — — — — — — -+ — — — — — — — — — — + — — — — — — — — — — -+
| rke2-mgmt01 | Management Cluster | RKE2 v1.26.12+rke2r1 |
| rke2-sles-demo | Managed Cluster | RKE2 v1.26.12+rke2r1|
+ — — — — — -+ — — — — — — — — — — + — — — — — — — — — — — — -+

- — — — — — -+ — — — — -+
| Deployment | Version |
+ — — — — — — -+ — — — — -+
| ArgoCD | v2.9.3 |
| Cilium | v1.14.5 |
| GatewayAPI | v0.7.0 |
+ — — — — — — -+ — — — — -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Step 1: Register and label the RKE2 cluster with Sveltos
&lt;/h2&gt;

&lt;p&gt;Once the RKE2 cluster is in a “Running” state, we will use &lt;a href="https://github.com/projectsveltos/sveltosctl" rel="noopener noreferrer"&gt;sveltosctl&lt;/a&gt; to register it. For the registration, we need three things: a Service Account, a kubeconfig associated with that account, and a namespace. If you are unsure how to create a Service Account and an associated kubeconfig, there is a &lt;a href="https://raw.githubusercontent.com/gianlucam76/scripts/master/get-kubeconfig.sh" rel="noopener noreferrer"&gt;script&lt;/a&gt; publicly available to help you out.&lt;/p&gt;
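
&lt;p&gt;The commands below sketch roughly what such a script automates. This is a hedged illustration only: the account name ‘sveltos’, the ‘default’ namespace, and the cluster-admin binding are example choices, not requirements.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Run against the *managed* cluster
$ kubectl create serviceaccount sveltos -n default
$ kubectl create clusterrolebinding sveltos-admin --clusterrole=cluster-admin --serviceaccount=default:sveltos
# Issue a token for the account (requires Kubernetes v1.24+)
$ kubectl create token sveltos -n default --duration=8760h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The returned token can then be embedded in a kubeconfig file (saved here as rke2-sles-demo.yaml), which Sveltos uses to access the cluster.&lt;/p&gt;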
&lt;h3&gt;
  
  
  Registration
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl register cluster &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;projectsveltos &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rke2-sles-demo &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rke2-sles-demo.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltoscluster &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos
NAME               READY   VERSION
rke2-sles-demo   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.12+rke2r1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Assign Label
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label sveltoscluster rke2-sles-demo &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;staging &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltoscluster &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME             READY   VERSION           LABELS
rke2-sles-demo   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.12+rke2r1   &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;staging,sveltos-agent&lt;span class="o"&gt;=&lt;/span&gt;present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For the demonstration, we will use the label ‘env=staging’ to select the cluster to which we want to deploy the Gateway and HTTPRoute resources.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Pre-Work (management cluster)
&lt;/h2&gt;

&lt;p&gt;Before we move on with the Gateway API implementation, we need to create additional Kubernetes resources for the managed cluster (rke2-sles-demo).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ArgoCD namespace&lt;/li&gt;
&lt;li&gt;ArgoCD TLS Secret&lt;/li&gt;
&lt;li&gt;Cilium IP Pool&lt;/li&gt;
&lt;li&gt;Cilium GatewayClass&lt;/li&gt;
&lt;li&gt;Gateway&lt;/li&gt;
&lt;li&gt;HTTPRoute&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To utilise the power of Sveltos, we will create the above resources in the management cluster as Secret and ConfigMap resources and use a Sveltos ClusterProfile to deploy them to the managed cluster.&lt;/p&gt;

&lt;p&gt;More information on how to create the above resources can be found &lt;a href="https://projectsveltos.github.io/sveltos/addons/raw_yaml/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  ArgoCD Namespace
&lt;/h3&gt;

&lt;p&gt;If the namespace does not exist in the managed cluster, Sveltos will create it!&lt;/p&gt;
&lt;h3&gt;
  
  
  ArgoCD TLS Secret
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tls.crt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;base64 encoded .crt&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
  &lt;span class="na"&gt;tls.key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;base64 encoded .key&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server-tls&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes.io/tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
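
&lt;p&gt;If no certificate is at hand, a self-signed pair can be generated for testing. This is an optional helper; the hostname ‘argocd.example.com’ matches the Gateway listener defined later in this post.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt -subj "/CN=argocd.example.com"
# Base64-encode for the Secret manifest above (-w0 is GNU coreutils; on macOS use "base64 -i")
$ base64 -w0 tls.crt
$ base64 -w0 tls.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;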



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic argocd-server-tls &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;argocd_server_tls.yaml &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;addons.projectsveltos.io/cluster-profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Cilium IP Pool
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cilium.io/v2alpha1"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiliumLoadBalancerIPPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rke2-pool"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cidrs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.10.10.0/24"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap ipampool &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ipam-pool.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Cilium GatewayClass
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GatewayClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;controllerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;io.cilium/gateway-controller&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap gatewayclass &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;gatewayclass.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Gateway
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
  &lt;span class="na"&gt;listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd.example.com&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-example-com-http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd.example.com&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-example-com-https&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPS&lt;/span&gt;
    &lt;span class="na"&gt;tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;certificateRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server-tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap gateway &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;gateway.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  HTTPRoute
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPRoute&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;hostnames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;argocd.example.com&lt;/span&gt;
  &lt;span class="na"&gt;parentRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;backendRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;matches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PathPrefix&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;parents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap gateway &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;httproute.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret
NAME                TYPE                                       DATA   AGE
argocd-server-tls   addons.projectsveltos.io/cluster-profile   1      20m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cm
NAME               DATA   AGE
gateway            1      35s
gatewayclass       1      20m
httproute          1      3s
ipampool           1      22m
kube-root-ca.crt   1      141m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  ClusterProfile
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.projectsveltos.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterProfile&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium-gateway-api&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=staging&lt;/span&gt;
 &lt;span class="na"&gt;syncMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Continuous&lt;/span&gt;
 &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ipampool&lt;/span&gt;
   &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gatewayclass&lt;/span&gt;
   &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server-tls&lt;/span&gt;
   &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The ClusterProfile definition will match the managed cluster with the label selector set to ‘env=staging’ and deploy the Cilium IP Pool, the GatewayClass and the ArgoCD TLS secret.&lt;/p&gt;
&lt;h3&gt;
  
  
  Apply ClusterProfile
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"clusterprofile.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show addons

+-------------------------------+----------------------------------------+-----------+-------------------+---------+-------------------------------+---------------------------------------------+
|            CLUSTER            |             RESOURCE TYPE              | NAMESPACE |       NAME        | VERSION |             TIME              |                  PROFILES                   |
+-------------------------------+----------------------------------------+-----------+-------------------+---------+-------------------------------+---------------------------------------------+
| projectsveltos/rke2-sles-demo | cilium.io:CiliumLoadBalancerIPPool     |           | rke2-pool         | N/A     | 2024-02-11 17:39:58 +0100 CET | ClusterProfile/cilium-gateway-api           |
| projectsveltos/rke2-sles-demo | gateway.networking.k8s.io:GatewayClass |           | cilium            | N/A     | 2024-02-11 17:39:58 +0100 CET | ClusterProfile/cilium-gateway-api           |
| projectsveltos/rke2-sles-demo | :Secret                                | argocd    | argocd-server-tls | N/A     | 2024-02-11 17:39:58 +0100 CET | ClusterProfile/cilium-gateway-api           |
+-------------------------------+----------------------------------------+-----------+-------------------+---------+-------------------------------+---------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Step 3: Define the Event Framework (management cluster)
&lt;/h2&gt;

&lt;p&gt;As mentioned at the beginning of the post, we will use the &lt;strong&gt;EventSource&lt;/strong&gt; to define which events Sveltos will watch for and the &lt;strong&gt;EventTrigger&lt;/strong&gt; resource to perform actions based on an event.&lt;/p&gt;
&lt;h3&gt;
  
  
  EventSource
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lib.projectsveltos.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EventSource&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-https-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;collectResources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
 &lt;span class="na"&gt;resourceSelectors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
   &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v1"&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Service"&lt;/span&gt;
   &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
   &lt;span class="na"&gt;evaluate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
     &lt;span class="s"&gt;function evaluate()&lt;/span&gt;
       &lt;span class="s"&gt;hs = {}&lt;/span&gt;
       &lt;span class="s"&gt;hs.matching = false&lt;/span&gt;
       &lt;span class="s"&gt;if obj.spec.ports ~= nil then&lt;/span&gt;
         &lt;span class="s"&gt;for _,p in pairs(obj.spec.ports) do&lt;/span&gt;
           &lt;span class="s"&gt;if p.port == 80 or p.port == 443 then&lt;/span&gt;
             &lt;span class="s"&gt;hs.matching = true&lt;/span&gt;
           &lt;span class="s"&gt;end&lt;/span&gt;
         &lt;span class="s"&gt;end&lt;/span&gt;
       &lt;span class="s"&gt;end&lt;/span&gt;
       &lt;span class="s"&gt;return hs&lt;/span&gt;
     &lt;span class="s"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The EventSource uses the &lt;a href="https://www.lua.org/" rel="noopener noreferrer"&gt;Lua&lt;/a&gt; language to match any Service in the ‘argocd’ namespace that exposes port 80 or 443. More examples can be found &lt;a href="https://projectsveltos.github.io/sveltos/events/addon_event_deployment/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  EventTrigger
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;lib.projectsveltos.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EventTrigger&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service-network-policy&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;sourceClusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=staging&lt;/span&gt;
 &lt;span class="na"&gt;eventSourceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http-https-service&lt;/span&gt;
 &lt;span class="na"&gt;oneForEvent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
 &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;httproute&lt;/span&gt;
   &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
 &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway&lt;/span&gt;  
   &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
   &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Apply Resources
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"eventsource.yaml"&lt;/span&gt;
eventsource.lib.projectsveltos.io/http-https-service created

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"eventtrigger.yaml"&lt;/span&gt;
eventtrigger.lib.projectsveltos.io/service-network-policy created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once Sveltos spots a matching event, the HTTPRoute and Gateway Kubernetes add-ons get deployed to the managed clusters matching the label selector ‘env=staging’. In our case, the cluster ‘rke2-sles-demo’ matches the clusterSelector and automatically receives the specified resources.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 4: Deploy ArgoCD Manifests (managed cluster)
&lt;/h2&gt;

&lt;p&gt;Now, it is time to deploy the ArgoCD manifests and wait for Sveltos to &lt;strong&gt;notice the event&lt;/strong&gt; and &lt;strong&gt;trigger&lt;/strong&gt; the &lt;strong&gt;deployment&lt;/strong&gt; of the HTTPRoute and Gateway resources in the ‘argocd’ namespace.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
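
&lt;p&gt;Before moving on, it is worth waiting until the ArgoCD server is actually up. The ‘argocd-server’ Service (and therefore the event) appears almost immediately, but the UI is only reachable once the deployment is ready. A simple way to wait:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;$ kubectl -n argocd rollout status deployment/argocd-server --timeout=300s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;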



&lt;h2&gt;
  
  
  Step 5: Verification and Tests
&lt;/h2&gt;

&lt;p&gt;Now that the ArgoCD manifests are installed, the ‘argocd-server’ service should exist in the ‘argocd’ namespace. That means the Sveltos EventSource should have fired and triggered the deployment of the HTTPRoute and Gateway Kubernetes resources.&lt;/p&gt;
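
&lt;p&gt;Before checking from the management side, a quick sanity check can be performed directly on the managed cluster; the exact output will vary per environment.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# On rke2-sles-demo: the Gateway should receive an address from the Cilium IP pool
$ kubectl get gateway,httproute -n argocd
$ kubectl get svc -n argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;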

&lt;h3&gt;
  
  
  Sveltos Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show addons
+-------------------------------+----------------------------------------+-----------+-------------------+---------+-------------------------------+---------------------------------------------+
|            CLUSTER            |             RESOURCE TYPE              | NAMESPACE |       NAME        | VERSION |             TIME              |                  PROFILES                   |
+-------------------------------+----------------------------------------+-----------+-------------------+---------+-------------------------------+---------------------------------------------+
| projectsveltos/rke2-sles-demo | cilium.io:CiliumLoadBalancerIPPool     |           | rke2-pool         | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/cilium-gateway-api           |
| projectsveltos/rke2-sles-demo | gateway.networking.k8s.io:GatewayClass |           | cilium            | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/cilium-gateway-api           |
| projectsveltos/rke2-sles-demo | :Secret                                | argocd    | argocd-server-tls | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/cilium-gateway-api           |
| projectsveltos/rke2-sles-demo | gateway.networking.k8s.io:HTTPRoute    | argocd    | argocd            | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/sveltos-1lbzxnp4cs96gpo38g8b |
| projectsveltos/rke2-sles-demo | gateway.networking.k8s.io:Gateway      | argocd    | argocd            | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/sveltos-1lbzxnp4cs96gpo38g8b |
| projectsveltos/rke2-sles-demo | gateway.networking.k8s.io:HTTPRoute    | argocd    | argocd            | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/sveltos-i0nmkq3ebsvpis8ubd3s |
| projectsveltos/rke2-sles-demo | gateway.networking.k8s.io:Gateway      | argocd    | argocd            | N/A     | 2024-02-11 19:55:52 +0100 CET | ClusterProfile/sveltos-i0nmkq3ebsvpis8ubd3s |
+-------------------------------+----------------------------------------+-----------+-------------------+---------+-------------------------------+---------------------------------------------+

&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show usage
+----------------+--------------------+------------------------------+-------------------------------+
| RESOURCE KIND  | RESOURCE NAMESPACE |        RESOURCE NAME         |           CLUSTERS            |
+----------------+--------------------+------------------------------+-------------------------------+
| ClusterProfile |                    | cilium-gateway-api           | projectsveltos/rke2-sles-demo |
| ClusterProfile |                    | sveltos-fvc3vr84yqusfzwpobcl | projectsveltos/rke2-sles-demo |
| ConfigMap      | default            | gateway                      | projectsveltos/rke2-sles-demo |
| ConfigMap      | default            | ipampool                     | projectsveltos/rke2-sles-demo |
| ConfigMap      | default            | gatewayclass                 | projectsveltos/rke2-sles-demo |
| ConfigMap      | default            | httproute                    | projectsveltos/rke2-sles-demo |
+----------------+--------------------+------------------------------+-------------------------------+

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Managed Cluster Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get gateway,httproute &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                                       CLASS    ADDRESS        PROGRAMMED   AGE
gateway.gateway.networking.k8s.io/argocd   cilium   10.10.10.147   True         78s

NAME                                         HOSTNAMES                AGE
httproute.gateway.networking.k8s.io/argocd   &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"argocd.example.com"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;   78s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test ArgoCD Accessibility
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-ki&lt;/span&gt; https://argocd.example.com
HTTP/1.1 200 OK
accept-ranges: bytes
content-length: 788
content-security-policy: frame-ancestors &lt;span class="s1"&gt;'self'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
content-type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
vary: Accept-Encoding
x-frame-options: sameorigin
x-xss-protection: 1
&lt;span class="nb"&gt;date&lt;/span&gt;: Sun, 11 Feb 2024 18:59:57 GMT
x-envoy-upstream-service-time: 0
server: envoy

&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;doctype html&amp;gt;&amp;lt;html &lt;span class="nv"&gt;lang&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;meta &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UTF-8"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;title&amp;gt;Argo CD&amp;lt;/title&amp;gt;&amp;lt;base &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;meta &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"viewport"&lt;/span&gt; &lt;span class="nv"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"width=device-width,initial-scale=1"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"icon"&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"image/png"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"assets/favicon/favicon-32x32.png"&lt;/span&gt; &lt;span class="nv"&gt;sizes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"32x32"&lt;/span&gt;/&amp;gt;&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"icon"&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"image/png"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span 
class="s2"&gt;"assets/favicon/favicon-16x16.png"&lt;/span&gt; &lt;span class="nv"&gt;sizes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"16x16"&lt;/span&gt;/&amp;gt;&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"assets/fonts.css"&lt;/span&gt; &lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;script &lt;span class="nv"&gt;defer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defer"&lt;/span&gt; &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main.9ecae91d8fd1deedf944.js"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/script&amp;gt;&amp;lt;/head&amp;gt;&amp;lt;body&amp;gt;&amp;lt;noscript&amp;gt;&amp;lt;p&amp;gt;Your browser does not support JavaScript. Please &lt;span class="nb"&gt;enable &lt;/span&gt;JavaScript to view the site. Alternatively, Argo CD can be used with the &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://argoproj.github.io/argo-cd/cli_installation/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Argo CD CLI&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/noscript&amp;gt;&amp;lt;div &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"app"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/div&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;script &lt;span class="nv"&gt;defer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defer"&lt;/span&gt; &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"extensions.js"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/script&amp;gt;&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As we use a self-signed certificate, the &lt;code&gt;argocd-cmd-params-cm&lt;/code&gt; ConfigMap in the argocd namespace should include the setting &lt;code&gt;server.insecure: "true"&lt;/code&gt;.&lt;/p&gt;
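&lt;p&gt;For reference, a minimal sketch of the relevant ConfigMap entry (after applying it, restart the &lt;code&gt;argocd-server&lt;/code&gt; deployment so the flag takes effect):&lt;/p&gt;

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # terminate TLS at the Gateway; serve ArgoCD itself over plain HTTP
  server.insecure: "true"
```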

&lt;p&gt;As expected, Sveltos took care of the complete lifecycle of the different Kubernetes deployments in a simple and straightforward manner! The example might seem simplistic, but think about it at a bigger scale with hundreds of clusters across a multi-cloud setup. Imagine the flexibility of setting Sveltos labels on a cluster, defining the event source that triggers an action, and having a Kubernetes deployment land on a cluster automatically. In the next post, we will demonstrate the power of events in a multi-cluster environment.&lt;/p&gt;

&lt;p&gt;👏 Support this project&lt;br&gt;
Every contribution counts! If you enjoyed this article, check out the Projectsveltos &lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;. You can &lt;a href="https://github.com/projectsveltos/addon-controller" rel="noopener noreferrer"&gt;star 🌟 the project&lt;/a&gt; if you find it helpful.&lt;/p&gt;

&lt;p&gt;The GitHub repo is a great resource for getting started with the project. It contains the code, documentation, and many more examples.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>tutorial</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>ArgoCD Deployment on RKE2 with Cilium Gateway API</title>
      <dc:creator>Eleni Grosdouli</dc:creator>
      <pubDate>Mon, 19 Feb 2024 15:24:34 +0000</pubDate>
      <link>https://dev.to/egrosdou/argocd-deployment-on-rke2-with-cilium-gateway-api-412n</link>
      <guid>https://dev.to/egrosdou/argocd-deployment-on-rke2-with-cilium-gateway-api-412n</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;It has already been a couple of years since the Kubernetes &lt;a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noopener noreferrer"&gt;Ingress&lt;/a&gt; was defined as a “frozen” feature while further development will be added to the &lt;a href="https://gateway-api.sigs.k8s.io/" rel="noopener noreferrer"&gt;Gateway API&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After initial exposure to the Cilium Gateway API &lt;a href="https://docs.cilium.io/en/v1.14/network/servicemesh/gateway-api/gateway-api/" rel="noopener noreferrer"&gt;docs&lt;/a&gt; and the interactive &lt;a href="https://isovalent.com/labs/gateway-api/" rel="noopener noreferrer"&gt;lab&lt;/a&gt; session, it sounded promising to move the ArgoCD deployment from the Kubernetes Ingress to the Cilium Gateway API. The purpose of the blog post is to illustrate how easy it is to move the ArgoCD installation to the Cilium Gateway API. For this demonstration, the Gateway and the HTTPRoute have been created in the argocd namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iok9h56kedbdlzhcn5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iok9h56kedbdlzhcn5a.png" alt="Cilium Gateway API" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- — — — — — — + — — — — — — — — — -+ — — — — — — — — —+
| Cluster Name | Type | Version |
+ — — — — — — — + — — — — — — — — — -+ — — — — — — — — +
| rke2-test01| Test Management Cluster| RKE2 v1.26.12+rke2r1 |
+ — — — — — — — + — — — — — — — — — -+ — — — — — — — — +

- — — — — — -+ — — — — -+
| Deployment | Version |
+ — — — — — — -+ — — — — -+
| ArgoCD | v2.9.3 |
| Cilium | v1.14.5 |
| GatewayAPI | v0.7.0 |
+ — — — — — — -+ — — — — -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Deploy RKE2 Cluster with Cilium CNI
&lt;/h2&gt;

&lt;p&gt;Before diving in, it is a good idea to check out the RKE2 official &lt;a href="https://docs.rke2.io/install/network_options" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; on Kubernetes Networking and the Cilium &lt;a href="https://docs.cilium.io/en/v1.14/installation/k8s-install-rke/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Also, review the prerequisites for the &lt;a href="https://docs.cilium.io/en/v1.14/network/servicemesh/gateway-api/gateway-api/" rel="noopener noreferrer"&gt;Gateway API deployment&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  RKE2 Pre-work
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat /etc/rancher/rke2/config.yaml&lt;/span&gt;
&lt;span class="na"&gt;write-kubeconfig-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0644&lt;/span&gt;
&lt;span class="na"&gt;tls-san&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;Master Node hostname&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="nv"&gt;Your Token&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;cni&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
&lt;span class="na"&gt;disable-kube-proxy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="na"&gt;etcd-expose-metrics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;helm.cattle.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HelmChartConfig&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rke2-cilium&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kube-system&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;valuesContent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;image:&lt;/span&gt;
      &lt;span class="s"&gt;tag: v1.14.5&lt;/span&gt;
    &lt;span class="s"&gt;kubeProxyReplacement: strict&lt;/span&gt;
    &lt;span class="s"&gt;k8sServiceHost: 127.0.0.1&lt;/span&gt;
    &lt;span class="s"&gt;k8sServicePort: 6443&lt;/span&gt;
    &lt;span class="s"&gt;operator:&lt;/span&gt;
      &lt;span class="s"&gt;replicas: 1&lt;/span&gt;
    &lt;span class="s"&gt;gatewayAPI:&lt;/span&gt;
      &lt;span class="s"&gt;enabled: true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;According to the Cilium documentation, to enable the Gateway API we need at least the &lt;strong&gt;1.14.5&lt;/strong&gt; Cilium Helm chart with kube-proxy replacement enabled (the &lt;code&gt;kubeProxyReplacement&lt;/code&gt; value set to &lt;code&gt;strict&lt;/code&gt; here, or &lt;code&gt;true&lt;/code&gt; in newer chart versions) and &lt;code&gt;gatewayAPI&lt;/code&gt; set to &lt;code&gt;enabled: true&lt;/code&gt; inside the Helm chart definition.&lt;/p&gt;

&lt;p&gt;Once the remaining steps for the RKE2 installation are complete, we will have a two-node RKE2 cluster with Cilium as the CNI.&lt;/p&gt;

&lt;p&gt;Since Kubernetes v1.26.x does not ship with the Gateway API CRDs, we need to deploy them manually and let the Cilium containers restart until everything reaches a “Running” state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; According to the RKE2 documentation, RKE2 v1.26.12 does not officially support Cilium v1.14.5. The latest supported version is v1.14.4. However, during the demo setup, we did not encounter any issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
NAME            STATUS   ROLES                       AGE    VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                              KERNEL-VERSION              CONTAINER-RUNTIME
rke2-master01   Ready    control-plane,etcd,master   111m   v1.26.12+rke2r1                  &amp;lt;none&amp;gt;        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.53-default   containerd://1.7.11-k3s2
rke2-worker01   Ready    &amp;lt;none&amp;gt;                      98m    v1.26.12+rke2r1                  &amp;lt;none&amp;gt;        SUSE Linux Enterprise Server 15 SP5   5.14.21-150500.53-default   containerd://1.7.11-k3s2

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; cilium
cilium-k9vhf                                            1/1     Running     0              111m
cilium-lc7jn                                            1/1     Running     0              99m
cilium-operator-548958b5bf-nc95q                        1/1     Running     5 &lt;span class="o"&gt;(&lt;/span&gt;108m ago&lt;span class="o"&gt;)&lt;/span&gt;   111m
helm-install-rke2-cilium-tp5l6                          0/1     Completed   0              112m

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system get daemonset cilium &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.spec.template.spec.containers[0].image}"&lt;/span&gt;
rancher/mirrored-cilium-cilium:v1.14.5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Install ArgoCD
&lt;/h2&gt;

&lt;p&gt;We will follow the official “Getting Started” guide found &lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noopener noreferrer"&gt;here&lt;/a&gt;, and use the manifest installation option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace argocd
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above creates the argocd Kubernetes namespace and deploys the latest &lt;strong&gt;stable&lt;/strong&gt; manifest. If you would like to install a specific version, have a look &lt;a href="https://github.com/argoproj/argo-cd/releases" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
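&lt;p&gt;A sketch of pinning a specific release instead of &lt;code&gt;stable&lt;/code&gt; (v2.9.3 matches the lab table above; the &lt;code&gt;kubectl&lt;/code&gt; line is commented out so the snippet is safe to run without a cluster):&lt;/p&gt;

```shell
# Pick a tag from the ArgoCD releases page; v2.9.3 matches the lab setup above.
ARGOCD_VERSION="v2.9.3"
MANIFEST_URL="https://raw.githubusercontent.com/argoproj/argo-cd/${ARGOCD_VERSION}/manifests/install.yaml"
# Against a live cluster, uncomment:
# kubectl apply -n argocd -f "${MANIFEST_URL}"
echo "${MANIFEST_URL}"
```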

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods,svc &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          82m
pod/argocd-applicationset-controller-6b67b96c9f-7szsr   1/1     Running   0          82m
pod/argocd-dex-server-c9d4d46b5-mdf67                   1/1     Running   0          82m
pod/argocd-notifications-controller-6975bff68d-ltbkc    1/1     Running   0          82m
pod/argocd-redis-7d8d46cc7f-2br7f                       1/1     Running   0          82m
pod/argocd-repo-server-59f5479b7-dfg9x                  1/1     Running   0          82m
pod/argocd-server-547bf65466-68554                      1/1     Running   0          58m

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP    PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/argocd-applicationset-controller          ClusterIP      10.43.98.162    &amp;lt;none&amp;gt;         7000/TCP,8080/TCP            82m
service/argocd-dex-server                         ClusterIP      10.43.71.44     &amp;lt;none&amp;gt;         5556/TCP,5557/TCP,5558/TCP   82m
service/argocd-metrics                            ClusterIP      10.43.162.177   &amp;lt;none&amp;gt;         8082/TCP                     82m
service/argocd-notifications-controller-metrics   ClusterIP      10.43.55.157    &amp;lt;none&amp;gt;         9001/TCP                     82m
service/argocd-redis                              ClusterIP      10.43.62.79     &amp;lt;none&amp;gt;         6379/TCP                     82m
service/argocd-repo-server                        ClusterIP      10.43.224.205   &amp;lt;none&amp;gt;         8081/TCP,8084/TCP            82m
service/argocd-server                             ClusterIP      10.43.166.25    &amp;lt;none&amp;gt;         80/TCP,443/TCP               82m
service/argocd-server-metrics                     ClusterIP      10.43.165.222   &amp;lt;none&amp;gt;         8083/TCP                     82m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Pre-Work
&lt;/h2&gt;

&lt;p&gt;Before we move on with the Gateway API implementation, we need to create additional Kubernetes resources.&lt;/p&gt;

&lt;h3&gt;
  
  
ArgoCD TLS Secret
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret tls argocd-server-tls &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;--key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;argocd-key.pem &lt;span class="nt"&gt;--cert&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;argocd.example.com.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above assumes that we have already created a private/public key pair with a utility such as OpenSSL. Also, keep in mind that the TLS secret name should be &lt;code&gt;argocd-server-tls&lt;/code&gt;, as it is referenced at a later point.&lt;/p&gt;
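&lt;p&gt;For a lab, a throwaway self-signed pair can be generated with OpenSSL (1.1.1 or newer for &lt;code&gt;-addext&lt;/code&gt;); any properly CA-signed certificate works the same way:&lt;/p&gt;

```shell
# Self-signed certificate and key for argocd.example.com (lab use only)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout argocd-key.pem -out argocd.example.com.pem \
  -subj "/CN=argocd.example.com" \
  -addext "subjectAltName=DNS:argocd.example.com"
```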

&lt;h3&gt;
  
  
  Cilium IP Pool
&lt;/h3&gt;

&lt;p&gt;In our lab environment, we do not have a tool to hand out &lt;code&gt;LoadBalancer&lt;/code&gt; IP addresses. Therefore, we will use the Cilium &lt;a href="https://docs.cilium.io/en/v1.14/network/lb-ipam/" rel="noopener noreferrer"&gt;LoadBalancer IP Address Management&lt;/a&gt; (LB IPAM).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat ipam-pool.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cilium.io/v2alpha1"&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CiliumLoadBalancerIPPool&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rke2-pool"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cidrs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cidr&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;10.10.10.0/24"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"ipam-pool.yaml"&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ippool
NAME        DISABLED   CONFLICTING   IPS AVAILABLE   AGE
rke2-pool   &lt;span class="nb"&gt;false      &lt;/span&gt;False         253             79m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cilium GatewayClass
&lt;/h3&gt;

&lt;p&gt;If the &lt;code&gt;GatewayClass&lt;/code&gt; resource is not present in the cluster, we have to create one for Cilium. The resource will be used in a later step when deploying the &lt;code&gt;Gateway&lt;/code&gt;. The &lt;code&gt;GatewayClass&lt;/code&gt; is a template that lets infrastructure providers offer different types of Gateways.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat gatewayclass.yaml&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GatewayClass&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;controllerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;io.cilium/gateway-controller&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"gatewayclass.yaml"&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get gatewayclass
NAME     CONTROLLER                     ACCEPTED   AGE
cilium   io.cilium/gateway-controller   True       104m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Create Gateway and HTTPRoute Resources
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Gateway
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;Gateway&lt;/code&gt; is an instance of the &lt;code&gt;GatewayClass&lt;/code&gt; created above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat argocd_gateway.yaml&lt;/span&gt;

&lt;span class="s"&gt;1 ---&lt;/span&gt;
&lt;span class="na"&gt;2 apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;3 kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Gateway&lt;/span&gt;
&lt;span class="na"&gt;4 metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;5   name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;6   namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;7 spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;8   gatewayClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cilium&lt;/span&gt;
&lt;span class="na"&gt;9   listeners&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;10   - hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd.example.com&lt;/span&gt;
&lt;span class="na"&gt;11     name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-example-com-http&lt;/span&gt;
&lt;span class="na"&gt;12     port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="na"&gt;13     protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTP&lt;/span&gt;
&lt;span class="na"&gt;14   - hostname&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd.example.com&lt;/span&gt;
&lt;span class="na"&gt;15     name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-example-com-https&lt;/span&gt;
&lt;span class="na"&gt;16     port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;443&lt;/span&gt;
&lt;span class="na"&gt;17     protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPS&lt;/span&gt;
&lt;span class="na"&gt;18  tls&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;19    certificateRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;20    - kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;21      name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server-tls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Line 3:&lt;/strong&gt; We set the resource kind to &lt;code&gt;Gateway&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 6:&lt;/strong&gt; We set the namespace to &lt;code&gt;argocd&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 8:&lt;/strong&gt; We use the name of the &lt;code&gt;GatewayClass&lt;/code&gt; created in the previous step&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 9:&lt;/strong&gt; We define the listeners for the ArgoCD server&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 21:&lt;/strong&gt; We define the TLS secret name created in the previous step&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In the definition above, we use &lt;code&gt;.example.com&lt;/code&gt; as the domain; however, the value should be replaced with a valid domain name.&lt;/p&gt;
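&lt;p&gt;For a quick local test without public DNS, the hostname can be mapped in &lt;code&gt;/etc/hosts&lt;/code&gt;. As a sketch, assuming the Gateway LoadBalancer address used later in this example (replace the IP with your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Map the test hostname to the Gateway LoadBalancer IP (example value)
$ echo "10.10.10.173 argocd.example.com" | sudo tee -a /etc/hosts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;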

&lt;h3&gt;
  
  
  HTTP Route
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;HTTPRoute&lt;/code&gt; is used to route incoming HTTP requests to backends, for example based on a &lt;code&gt;PathPrefix&lt;/code&gt; match.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;$ cat argocd_http_route.yaml&lt;/span&gt;

&lt;span class="s"&gt;1 ---&lt;/span&gt;
&lt;span class="na"&gt;2 apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.networking.k8s.io/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;3 kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HTTPRoute&lt;/span&gt;
&lt;span class="na"&gt;4 metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;5   creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
&lt;span class="na"&gt;6   name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;7   namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;8 spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;9   hostnames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="s"&gt;10   - argocd.example.com&lt;/span&gt;
&lt;span class="na"&gt;11   parentRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;12   - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;13   rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;14   - backendRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;15     - name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd-server&lt;/span&gt;
&lt;span class="na"&gt;16       port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;span class="na"&gt;17     matches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;18     - path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;19         type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PathPrefix&lt;/span&gt;
&lt;span class="na"&gt;20         value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/&lt;/span&gt;
&lt;span class="na"&gt;21 status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="na"&gt;22   parents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Line 10:&lt;/strong&gt; We set the hostname under which the ArgoCD server will be exposed&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 15:&lt;/strong&gt; We define the name of the ArgoCD server service&lt;/p&gt;

&lt;h3&gt;
  
  
  Apply the Kubernetes Resources
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; argocd_gateway.yaml,argocd_http_route.yaml


&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get gateway,httproute &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                                       CLASS    ADDRESS        PROGRAMMED   AGE
gateway.gateway.networking.k8s.io/argocd   cilium   10.10.10.173   True         9s

NAME                                         HOSTNAMES                AGE
httproute.gateway.networking.k8s.io/argocd   &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"argocd.example.com"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;   9s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Test Time
&lt;/h2&gt;

&lt;p&gt;We want to see if everything works as expected and whether we are able to access the ArgoCD server through the Cilium Gateway API. Let’s perform a curl request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-kv&lt;/span&gt; https://argocd.example.com
&lt;span class="k"&gt;*&lt;/span&gt;   Trying 10.10.10.173:443...
&lt;span class="k"&gt;*&lt;/span&gt; Connected to argocd.example.com &lt;span class="o"&gt;(&lt;/span&gt;10.10.10.173&lt;span class="o"&gt;)&lt;/span&gt; port 443 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="c"&gt;#0)&lt;/span&gt;
&lt;span class="k"&gt;*&lt;/span&gt; ALPN: offers h2,http/1.1
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;OUT&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Client hello &lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Server hello &lt;span class="o"&gt;(&lt;/span&gt;2&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Encrypted Extensions &lt;span class="o"&gt;(&lt;/span&gt;8&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Certificate &lt;span class="o"&gt;(&lt;/span&gt;11&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, CERT verify &lt;span class="o"&gt;(&lt;/span&gt;15&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Finished &lt;span class="o"&gt;(&lt;/span&gt;20&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;OUT&lt;span class="o"&gt;)&lt;/span&gt;, TLS change cipher, Change cipher spec &lt;span class="o"&gt;(&lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;OUT&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Finished &lt;span class="o"&gt;(&lt;/span&gt;20&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; SSL connection using TLSv1.3 / TLS_CHACHA20_POLY1305_SHA256
&lt;span class="k"&gt;*&lt;/span&gt; ALPN: server did not agree on a protocol. Uses default.
&lt;span class="k"&gt;*&lt;/span&gt; Server certificate:
&lt;span class="k"&gt;*&lt;/span&gt;  subject: &lt;span class="nv"&gt;O&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mkcert development certificate&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;OU&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root@server
&lt;span class="k"&gt;*&lt;/span&gt;  start &lt;span class="nb"&gt;date&lt;/span&gt;: Feb  2 07:11:49 2024 GMT
&lt;span class="k"&gt;*&lt;/span&gt;  expire &lt;span class="nb"&gt;date&lt;/span&gt;: May  2 07:11:49 2026 GMT
&lt;span class="k"&gt;*&lt;/span&gt;  issuer: &lt;span class="nv"&gt;O&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mkcert development CA&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;OU&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;root@server&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;CN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mkcert root@server
&lt;span class="k"&gt;*&lt;/span&gt;  SSL certificate verify result: unable to get &lt;span class="nb"&gt;local &lt;/span&gt;issuer certificate &lt;span class="o"&gt;(&lt;/span&gt;20&lt;span class="o"&gt;)&lt;/span&gt;, continuing anyway.
&lt;span class="k"&gt;*&lt;/span&gt; using HTTP/1.x
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; GET / HTTP/1.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Host: argocd.example.com
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; User-Agent: curl/8.0.1
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Accept: &lt;span class="k"&gt;*&lt;/span&gt;/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Newsession Ticket &lt;span class="o"&gt;(&lt;/span&gt;4&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; TLSv1.3 &lt;span class="o"&gt;(&lt;/span&gt;IN&lt;span class="o"&gt;)&lt;/span&gt;, TLS handshake, Newsession Ticket &lt;span class="o"&gt;(&lt;/span&gt;4&lt;span class="o"&gt;)&lt;/span&gt;:
&lt;span class="k"&gt;*&lt;/span&gt; old SSL session ID is stale, removing
&amp;lt; HTTP/1.1 307 Temporary Redirect
&amp;lt; content-type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
&amp;lt; location: https://argocd.example.com/
&amp;lt; &lt;span class="nb"&gt;date&lt;/span&gt;: Fri, 02 Feb 2024 07:58:21 GMT
&amp;lt; content-length: 63
&amp;lt; x-envoy-upstream-service-time: 0
&amp;lt; server: envoy
&amp;lt; 
&amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://argocd.example.com/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Temporary Redirect&amp;lt;/a&amp;gt;.

&lt;span class="k"&gt;*&lt;/span&gt; Connection &lt;span class="c"&gt;#0 to host argocd.example.com left intact&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the above, it is visible that we are experiencing a well-known issue with 307 redirects. To resolve this, we need to disable TLS on the ArgoCD API server. This involves modifying the &lt;code&gt;argocd-cmd-params-cm&lt;/code&gt; ConfigMap in the &lt;code&gt;argocd&lt;/code&gt; namespace and setting &lt;code&gt;server.insecure: "true"&lt;/code&gt;. More information can be found &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/server-commands/additional-configuration-method/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
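&lt;p&gt;As a minimal sketch, the ConfigMap change can be applied with a single &lt;code&gt;kubectl patch&lt;/code&gt; command (the same edit can of course be made with &lt;code&gt;kubectl edit&lt;/code&gt; instead):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Set server.insecure to "true" in the argocd-cmd-params-cm ConfigMap
$ kubectl patch configmap argocd-cmd-params-cm -n argocd --type merge -p '{"data":{"server.insecure":"true"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;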

&lt;p&gt;Once the changes are performed, we need to restart the &lt;code&gt;argocd-server&lt;/code&gt; deployment for them to take effect.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout restart deploy argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl rollout status deploy argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd

Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"argocd-server"&lt;/span&gt; rollout to finish: 1 old replicas are pending termination...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;deployment &lt;span class="s2"&gt;"argocd-server"&lt;/span&gt; rollout to finish: 1 old replicas are pending termination...
deployment &lt;span class="s2"&gt;"argocd-server"&lt;/span&gt; successfully rolled out
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us try once again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;curl &lt;span class="nt"&gt;-ki&lt;/span&gt; https://argocd.example.com
HTTP/1.1 200 OK
accept-ranges: bytes
content-length: 788
content-security-policy: frame-ancestors &lt;span class="s1"&gt;'self'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
content-type: text/html&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;utf-8
vary: Accept-Encoding
x-frame-options: sameorigin
x-xss-protection: 1
&lt;span class="nb"&gt;date&lt;/span&gt;: Fri, 02 Feb 2024 13:03:45 GMT
x-envoy-upstream-service-time: 0
server: envoy

&amp;lt;&lt;span class="o"&gt;!&lt;/span&gt;doctype html&amp;gt;&amp;lt;html &lt;span class="nv"&gt;lang&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;&lt;span class="nb"&gt;head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;meta &lt;span class="nv"&gt;charset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"UTF-8"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;title&amp;gt;Argo CD&amp;lt;/title&amp;gt;&amp;lt;base &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;meta &lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"viewport"&lt;/span&gt; &lt;span class="nv"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"width=device-width,initial-scale=1"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"icon"&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"image/png"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"assets/favicon/favicon-32x32.png"&lt;/span&gt; &lt;span class="nv"&gt;sizes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"32x32"&lt;/span&gt;/&amp;gt;&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"icon"&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"image/png"&lt;/span&gt; &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span 
class="s2"&gt;"assets/favicon/favicon-16x16.png"&lt;/span&gt; &lt;span class="nv"&gt;sizes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"16x16"&lt;/span&gt;/&amp;gt;&amp;lt;&lt;span class="nb"&gt;link &lt;/span&gt;&lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"assets/fonts.css"&lt;/span&gt; &lt;span class="nv"&gt;rel&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"stylesheet"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;script &lt;span class="nv"&gt;defer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defer"&lt;/span&gt; &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"main.f14bff1ed334a13aa8c2.js"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/script&amp;gt;&amp;lt;/head&amp;gt;&amp;lt;body&amp;gt;&amp;lt;noscript&amp;gt;&amp;lt;p&amp;gt;Your browser does not support JavaScript. Please &lt;span class="nb"&gt;enable &lt;/span&gt;JavaScript to view the site. Alternatively, Argo CD can be used with the &amp;lt;a &lt;span class="nv"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://argoproj.github.io/argo-cd/cli_installation/"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;Argo CD CLI&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/noscript&amp;gt;&amp;lt;div &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"app"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/div&amp;gt;&amp;lt;/body&amp;gt;&amp;lt;script &lt;span class="nv"&gt;defer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defer"&lt;/span&gt; &lt;span class="nv"&gt;src&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"extensions.js"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;lt;/script&amp;gt;&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great, we received a 200 OK status message!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The &lt;code&gt;-v&lt;/code&gt; short option in the curl request stands for &lt;code&gt;--verbose&lt;/code&gt;, the &lt;code&gt;-k&lt;/code&gt; short option stands for &lt;code&gt;--insecure&lt;/code&gt;, and the &lt;code&gt;-i&lt;/code&gt; short option stands for &lt;code&gt;--include&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The next steps will be to test the above deployment with the &lt;strong&gt;latest Cilium version&lt;/strong&gt; and &lt;strong&gt;Gateway API v1.0.0&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Cilium Gateway API Lab: &lt;a href="https://isovalent.com/labs/gateway-api/" rel="noopener noreferrer"&gt;https://isovalent.com/labs/gateway-api/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cilium Advanced Gateway API Lab: &lt;a href="https://isovalent.com/labs/advanced-gateway-api-use-cases/" rel="noopener noreferrer"&gt;https://isovalent.com/labs/advanced-gateway-api-use-cases/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Migrate from Ingress to Gateway: &lt;a href="https://docs.cilium.io/en/v1.14/network/servicemesh/ingress-to-gateway/ingress-to-gateway/" rel="noopener noreferrer"&gt;https://docs.cilium.io/en/v1.14/network/servicemesh/ingress-to-gateway/ingress-to-gateway/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RKE2 Installation Methods: &lt;a href="https://docs.rke2.io/install/methods" rel="noopener noreferrer"&gt;https://docs.rke2.io/install/methods&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>tutorial</category>
      <category>argocd</category>
    </item>
    <item>
      <title>5-Step Approach: Dry Run Kubernetes Resources with ProjectSveltos</title>
      <dc:creator>Eleni Grosdouli</dc:creator>
      <pubDate>Sat, 20 Jan 2024 18:26:22 +0000</pubDate>
      <link>https://dev.to/egrosdou/5-step-approach-dry-run-kubernetes-resources-with-projectsveltos-5hhh</link>
      <guid>https://dev.to/egrosdou/5-step-approach-dry-run-kubernetes-resources-with-projectsveltos-5hhh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Whether you are a DevOps engineer or a Kubernetes administrator, we have all been in a position where we had to update a Kubernetes deployment to a newer version in a Production environment. Even if the deployment was tested multiple times in the Test/Staging environment, there was always a fear that something might break or go wrong during the rollout.&lt;/p&gt;

&lt;p&gt;Fear no more! Today, we will explore the &lt;a href="https://projectsveltos.github.io/sveltos/features/dryrun/" rel="noopener noreferrer"&gt;Dry Run&lt;/a&gt; capability of &lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;ProjectSveltos&lt;/a&gt; and how it can help engineers and Kubernetes administrators update deployments in Production environments with more confidence. Kubectl offers a "dry run" functionality, which allows users to simulate the execution of the commands they want to apply. Sveltos takes it one step further: you can launch a simulation of &lt;strong&gt;all the operations&lt;/strong&gt; you would normally execute in a live run. The best part: no actual changes will be made to the matching clusters!&lt;/p&gt;
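&lt;p&gt;For comparison, kubectl's built-in dry run can be invoked as shown below. The manifest name is only a placeholder for illustration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Validate a manifest against the live API server without persisting anything;
# use --dry-run=client for a purely local check
$ kubectl apply -f deployment.yaml --dry-run=server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;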

&lt;p&gt;For today's demonstration, we will update &lt;a href="https://kyverno.io/docs/" rel="noopener noreferrer"&gt;Kyverno&lt;/a&gt; on an &lt;a href="https://docs.rke2.io/" rel="noopener noreferrer"&gt;RKE2&lt;/a&gt; cluster. If you did not have the chance to read the previous post about Projectsveltos, have a look before continuing with this one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf6geqd46n2lgynad8vu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faf6geqd46n2lgynad8vu.jpg" alt="ArgoCD - Sveltos - Kyverno" width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;For this demonstration, I have already installed ArgoCD, deployed Sveltos to cluster04 and created an RKE2 cluster. For the first two, follow step 1 and step 2 from the previous &lt;a href="https://medium.com/@eleni.grosdouli/5-step-approach-projectsveltos-for-kubernetes-add-on-deployment-and-management-on-rke2-be3ba7acb24f" rel="noopener noreferrer"&gt;post&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;- - - - - -+ - - - - - - - - - - + - - - - - - - - - - -+
| Cluster Name | Type | Version |
+ - - - - - - -+ - - - - - - - - - - + - - - - - - - - -+
| cluster04 | Management Cluster  | RKE2 v1.26.11+rke2r1|
| cluster11 | Managed CAPI Cluster| RKE2 v1.26.11+rke2r1|
+ - - - - - -+ - - - - - - - - - - + - - - - - - - - - -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Register the RKE2 Cluster and Add Label
&lt;/h2&gt;

&lt;p&gt;We will use the &lt;a href="https://github.com/projectsveltos/sveltosctl" rel="noopener noreferrer"&gt;sveltosctl&lt;/a&gt; to register cluster11 with Sveltos. For the registration, we need three things: a service account, a kubeconfig associated with that account and a namespace. If you are unsure how to create a Service Account and an associated kubeconfig, there is a &lt;a href="https://raw.githubusercontent.com/gianlucam76/scripts/master/get-kubeconfig.sh" rel="noopener noreferrer"&gt;script&lt;/a&gt; publicly available to help you out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Registration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl register cluster &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;projectsveltos &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster11 &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster11.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltosclusters &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos
NAME        READY   VERSION
cluster11   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.11+rke2r1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Cluster Labelling and Verification
&lt;/h3&gt;

&lt;p&gt;To deploy and manage Kubernetes add-ons with the help of Sveltos, the concepts of ClusterProfile and cluster labelling come into play. &lt;a href="https://github.com/projectsveltos/sveltos-manager/blob/main/api/v1alpha1/clusterprofile_types.go" rel="noopener noreferrer"&gt;ClusterProfile&lt;/a&gt; is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.&lt;/p&gt;

&lt;p&gt;For this demonstration, we will set the unique label "env=prod".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label sveltosclusters cluster11 &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltoscluster &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME        READY   VERSION           LABELS
cluster11   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.11+rke2r1   &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;prod,sveltos-agent&lt;span class="o"&gt;=&lt;/span&gt;present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Create Kyverno ClusterProfile
&lt;/h2&gt;

&lt;p&gt;The Kyverno Helm chart (v3.0.9) and a Kyverno policy that disallows deployments using the "latest" tag will be deployed to cluster11. The Helm chart and the Kyverno policy are defined in the same ClusterProfile. To push the Kyverno policy to cluster11, we have to save the policy as a ConfigMap and reference it in the ClusterProfile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The configuration is done on &lt;strong&gt;cluster04&lt;/strong&gt; as it is our Sveltos management cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kyverno Policy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget https://raw.githubusercontent.com/kyverno/policies/main/best-practices/disallow-latest-tag/disallow-latest-tag.yaml

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap disallow-latest-tag &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;disallow-latest-tag.yaml

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get cm
NAME                  DATA   AGE
disallow-latest-tag   1      4s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  ClusterProfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
  &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.projectsveltos.io/v1alpha1&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterProfile&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno-disallow-latest-tag&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=prod&lt;/span&gt;
    &lt;span class="na"&gt;helmCharts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repositoryURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kyverno.github.io/kyverno/&lt;/span&gt;
      &lt;span class="na"&gt;repositoryName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno&lt;/span&gt;
      &lt;span class="na"&gt;chartName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno/kyverno&lt;/span&gt;
      &lt;span class="na"&gt;chartVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v3.0.9&lt;/span&gt;
      &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno-latest&lt;/span&gt;
      &lt;span class="na"&gt;releaseNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno&lt;/span&gt;
      &lt;span class="na"&gt;helmChartAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;Install&lt;/span&gt;
    &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disallow-latest-tag&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"clusterprofile_kyverno_disallow_latest.yaml"&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show addons
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
|         CLUSTER          |      RESOURCE TYPE       | NAMESPACE |        NAME         | VERSION |             TIME              |      CLUSTER PROFILES       |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
| projectsveltos/cluster11 | helm chart               | kyverno   | kyverno-latest      | 3.0.9   | 2024-01-20 17:20:21 +0000 UTC | kyverno-disallow-latest-tag |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy |           | disallow-latest-tag | N/A     | 2024-01-20 17:20:21 +0000 UTC | kyverno-disallow-latest-tag |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification - Cluster11
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; kyverno
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/kyverno-admission-controller-65f76b4f47-gfrwp    1/1     Running   0          75s
pod/kyverno-background-controller-66d498dd5c-xlkcs   1/1     Running   0          75s
pod/kyverno-cleanup-controller-8689db777f-6nd72      1/1     Running   0          75s
pod/kyverno-reports-controller-84fd865d49-gcb2f      1/1     Running   0          75s

NAME                                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;    AGE
service/kyverno-background-controller-metrics   ClusterIP   10.43.63.55     &amp;lt;none&amp;gt;        8000/TCP   75s
service/kyverno-cleanup-controller              ClusterIP   10.43.170.79    &amp;lt;none&amp;gt;        443/TCP    75s
service/kyverno-cleanup-controller-metrics      ClusterIP   10.43.228.47    &amp;lt;none&amp;gt;        8000/TCP   75s
service/kyverno-latest-svc                      ClusterIP   10.43.80.134    &amp;lt;none&amp;gt;        443/TCP    75s
service/kyverno-latest-svc-metrics              ClusterIP   10.43.184.235   &amp;lt;none&amp;gt;        8000/TCP   75s
service/kyverno-reports-controller-metrics      ClusterIP   10.43.41.181    &amp;lt;none&amp;gt;        8000/TCP   75s

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kyverno-admission-controller    1/1     1            1           75s
deployment.apps/kyverno-background-controller   1/1     1            1           75s
deployment.apps/kyverno-cleanup-controller      1/1     1            1           75s
deployment.apps/kyverno-reports-controller      1/1     1            1           75s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/kyverno-admission-controller-65f76b4f47    1         1         1       75s
replicaset.apps/kyverno-background-controller-66d498dd5c   1         1         1       75s
replicaset.apps/kyverno-cleanup-controller-8689db777f      1         1         1       75s
replicaset.apps/kyverno-reports-controller-84fd865d49      1         1         1       75s

NAME                                                      SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/kyverno-cleanup-admission-reports           &lt;span class="k"&gt;*&lt;/span&gt;/10 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;   False     0        &amp;lt;none&amp;gt;          75s
cronjob.batch/kyverno-cleanup-cluster-admission-reports   &lt;span class="k"&gt;*&lt;/span&gt;/10 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;   False     0        &amp;lt;none&amp;gt;          75s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Update Kyverno to v3.1.4
&lt;/h2&gt;

&lt;p&gt;Imagine you want to update the Kyverno deployment to the latest available version. We will update the ClusterProfile above and enable the Dry Run capability to preview the changes before they are applied.&lt;/p&gt;

&lt;h3&gt;
  
  
  Update ClusterProfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
  &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.projectsveltos.io/v1alpha1&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterProfile&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno-disallow-latest-tag&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;syncMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DryRun&lt;/span&gt;
    &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=prod&lt;/span&gt;
    &lt;span class="na"&gt;helmCharts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repositoryURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kyverno.github.io/kyverno/&lt;/span&gt;
      &lt;span class="na"&gt;repositoryName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno&lt;/span&gt;
      &lt;span class="na"&gt;chartName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno/kyverno&lt;/span&gt;
      &lt;span class="na"&gt;chartVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v3.1.4&lt;/span&gt;
      &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno-latest&lt;/span&gt;
      &lt;span class="na"&gt;releaseNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno&lt;/span&gt;
      &lt;span class="na"&gt;helmChartAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;Install&lt;/span&gt;
    &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disallow-latest-tag&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;In the above definition, we added &lt;code&gt;syncMode: DryRun&lt;/code&gt; to activate the Sveltos Dry Run mode
&lt;/li&gt;
&lt;/ul&gt;
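&lt;p&gt;As a quick sanity check before applying, it can help to confirm that &lt;code&gt;syncMode&lt;/code&gt; is the only field you are changing. A minimal sketch with stripped-down, hypothetical profile files (the real ClusterProfiles contain the full spec shown above):&lt;/p&gt;

```shell
# Hypothetical minimal copies of the profile before and after the edit;
# in practice you would diff the full ClusterProfile manifests.
cat > /tmp/profile_continuous.yaml <<'EOF'
syncMode: Continuous
clusterSelector: env=prod
EOF
cat > /tmp/profile_dryrun.yaml <<'EOF'
syncMode: DryRun
clusterSelector: env=prod
EOF
# diff exits non-zero when the files differ; `|| true` keeps the script going
diff /tmp/profile_continuous.yaml /tmp/profile_dryrun.yaml || true
```

&lt;p&gt;The diff output should show only the &lt;code&gt;syncMode&lt;/code&gt; line changing from &lt;code&gt;Continuous&lt;/code&gt; to &lt;code&gt;DryRun&lt;/code&gt;.&lt;/p&gt;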

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"clusterprofile_kyverno_disallow_latest.yaml"&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show dryrun
+--------------------------+--------------------------+-----------+---------------------+-----------+--------------------------------+-----------------------------+
|         CLUSTER          |      RESOURCE TYPE       | NAMESPACE |        NAME         |  ACTION   |            MESSAGE             |       CLUSTER PROFILE       |
+--------------------------+--------------------------+-----------+---------------------+-----------+--------------------------------+-----------------------------+
| projectsveltos/cluster11 | helm release             | kyverno   | kyverno-latest      | Upgrade   | Current version: &lt;span class="s2"&gt;"3.0.9"&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;      | kyverno-disallow-latest-tag |
|                          |                          |           |                     |           | Would move to version:         |                             |
|                          |                          |           |                     |           | &lt;span class="s2"&gt;"v3.1.4"&lt;/span&gt;                       |                             |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy |           | disallow-latest-tag | No Action | Object already deployed.       | kyverno-disallow-latest-tag |
|                          |                          |           |                     |           | And policy referenced by       |                             |
|                          |                          |           |                     |           | ClusterProfile has not changed |                             |
|                          |                          |           |                     |           | since last deployment.         |                             |
+--------------------------+--------------------------+-----------+---------------------+-----------+--------------------------------+-----------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the output above, we can observe that only the Kyverno Helm release would be upgraded from v3.0.9 to v3.1.4, while the already applied Kyverno policy would remain untouched.&lt;/p&gt;

&lt;p&gt;Of course, this is a simplistic example to demonstrate the Dry Run capability. However, imagine a very large deployment with multiple components and dependencies; this is where the Sveltos Dry Run feature really comes in handy.&lt;/p&gt;
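&lt;p&gt;For larger setups, one could even gate a pipeline on the dry-run result. A hedged sketch, simulating the &lt;code&gt;sveltosctl show dryrun&lt;/code&gt; table with a saved copy (the file name and the CI idea are illustrative, not a Sveltos feature):&lt;/p&gt;

```shell
# Simulated rows of the `sveltosctl show dryrun` table; a real pipeline
# would capture the command's output instead.
cat > /tmp/dryrun_rows.txt <<'EOF'
| projectsveltos/cluster11 | helm release             | kyverno | kyverno-latest      | Upgrade   |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy |         | disallow-latest-tag | No Action |
EOF
# Print every resource whose ACTION column (field 6) is not "No Action"
awk -F'|' '{action=$6; gsub(/ /,"",action); if (action != "" && action != "NoAction") print "pending:", $5}' /tmp/dryrun_rows.txt
```

&lt;p&gt;Here only the Helm release shows up as a pending change, matching what we saw above.&lt;/p&gt;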

&lt;h2&gt;
  
  
  Step 4: Deploy Kyverno v3.1.4 to Cluster11
&lt;/h2&gt;

&lt;p&gt;Once we are happy with the changes to be performed on the cluster, it is time to deploy them. To do so, we set the &lt;code&gt;syncMode&lt;/code&gt; field back to &lt;code&gt;Continuous&lt;/code&gt;. The updated ClusterProfile looks like the YAML below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
  &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.projectsveltos.io/v1alpha1&lt;/span&gt;
  &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterProfile&lt;/span&gt;
  &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno-disallow-latest-tag&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;syncMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Continuous&lt;/span&gt;
    &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=prod&lt;/span&gt;
    &lt;span class="na"&gt;helmCharts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repositoryURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kyverno.github.io/kyverno/&lt;/span&gt;
      &lt;span class="na"&gt;repositoryName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno&lt;/span&gt;
      &lt;span class="na"&gt;chartName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno/kyverno&lt;/span&gt;
      &lt;span class="na"&gt;chartVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v3.1.4&lt;/span&gt;
      &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno-latest&lt;/span&gt;
      &lt;span class="na"&gt;releaseNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno&lt;/span&gt;
      &lt;span class="na"&gt;helmChartAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;Install&lt;/span&gt;
    &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;disallow-latest-tag&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"clusterprofile_kyverno_disallow_latest.yaml"&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show usage
+----------------+--------------------+-----------------------------+--------------------------+
| RESOURCE KIND  | RESOURCE NAMESPACE |        RESOURCE NAME        |         CLUSTERS         |
+----------------+--------------------+-----------------------------+--------------------------+
| ClusterProfile |                    | kyverno-disallow-latest-tag | projectsveltos/cluster11 |
| ConfigMap      | default            | disallow-latest-tag         | projectsveltos/cluster11 |
+----------------+--------------------+-----------------------------+--------------------------+

&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show addons
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
|         CLUSTER          |      RESOURCE TYPE       | NAMESPACE |        NAME         | VERSION |             TIME              |      CLUSTER PROFILES       |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
| projectsveltos/cluster11 | helm chart               | kyverno   | kyverno-latest      | 3.1.4   | 2024-01-20 17:36:51 +0000 UTC | kyverno-disallow-latest-tag |
| projectsveltos/cluster11 | kyverno.io:ClusterPolicy |           | disallow-latest-tag | N/A     | 2024-01-20 17:36:49 +0000 UTC | kyverno-disallow-latest-tag |
+--------------------------+--------------------------+-----------+---------------------+---------+-------------------------------+-----------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Verify the Kyverno Update
&lt;/h2&gt;

&lt;p&gt;From the Sveltos point of view, we can clearly see that Kyverno v3.1.4 has been deployed to the cluster. Let's confirm this is the case from the cluster11 side.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; kyverno
NAME                                                           READY   STATUS      RESTARTS   AGE
pod/kyverno-admission-controller-69c4c65769-qdg4w              1/1     Running     0          8m10s
pod/kyverno-background-controller-857c7b7b79-ngkcl             1/1     Running     0          8m10s
pod/kyverno-cleanup-admission-reports-28429540-kkrt4           0/1     Completed   0          5m7s
pod/kyverno-cleanup-cluster-admission-reports-28429540-wjz44   0/1     Completed   0          5m7s
pod/kyverno-cleanup-controller-7c9f487ccd-lrz6j                1/1     Running     0          8m10s
pod/kyverno-reports-controller-7bb7db947-f4ff5                 1/1     Running     0          8m10s

NAME                                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;    AGE
service/kyverno-background-controller-metrics   ClusterIP   10.43.63.55     &amp;lt;none&amp;gt;        8000/TCP   24m
service/kyverno-cleanup-controller              ClusterIP   10.43.170.79    &amp;lt;none&amp;gt;        443/TCP    24m
service/kyverno-cleanup-controller-metrics      ClusterIP   10.43.228.47    &amp;lt;none&amp;gt;        8000/TCP   24m
service/kyverno-latest-svc                      ClusterIP   10.43.80.134    &amp;lt;none&amp;gt;        443/TCP    24m
service/kyverno-latest-svc-metrics              ClusterIP   10.43.184.235   &amp;lt;none&amp;gt;        8000/TCP   24m
service/kyverno-reports-controller-metrics      ClusterIP   10.43.41.181    &amp;lt;none&amp;gt;        8000/TCP   24m

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kyverno-admission-controller    1/1     1            1           24m
deployment.apps/kyverno-background-controller   1/1     1            1           24m
deployment.apps/kyverno-cleanup-controller      1/1     1            1           24m
deployment.apps/kyverno-reports-controller      1/1     1            1           24m

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/kyverno-admission-controller-65f76b4f47    0         0         0       24m
replicaset.apps/kyverno-admission-controller-69c4c65769    1         1         1       8m10s
replicaset.apps/kyverno-background-controller-66d498dd5c   0         0         0       24m
replicaset.apps/kyverno-background-controller-857c7b7b79   1         1         1       8m10s
replicaset.apps/kyverno-cleanup-controller-7c9f487ccd      1         1         1       8m10s
replicaset.apps/kyverno-cleanup-controller-8689db777f      0         0         0       24m
replicaset.apps/kyverno-reports-controller-7bb7db947       1         1         1       8m10s
replicaset.apps/kyverno-reports-controller-84fd865d49      0         0         0       24m

NAME                                                      SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/kyverno-cleanup-admission-reports           &lt;span class="k"&gt;*&lt;/span&gt;/10 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;   False     0        5m7s            24m
cronjob.batch/kyverno-cleanup-cluster-admission-reports   &lt;span class="k"&gt;*&lt;/span&gt;/10 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt;   False     0        5m7s            24m

NAME                                                           COMPLETIONS   DURATION   AGE
job.batch/kyverno-cleanup-admission-reports-28429540           1/1           4s         5m7s
job.batch/kyverno-cleanup-cluster-admission-reports-28429540   1/1           4s         5m7s

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get deploy kyverno-admission-controller &lt;span class="nt"&gt;-n&lt;/span&gt; kyverno &lt;span class="nt"&gt;-o&lt;/span&gt; yaml | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; image
        image: ghcr.io/kyverno/kyverno:v1.11.4
        imagePullPolicy: IfNotPresent
        image: ghcr.io/kyverno/kyvernopre:v1.11.4
        imagePullPolicy: IfNotPresent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
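&lt;p&gt;As a side note, the check that the disallow-latest-tag policy enforces in-cluster can also be sketched locally with a simple grep over rendered manifests. This is only an illustration of the idea, not how Kyverno itself implements the policy, and the file path is hypothetical:&lt;/p&gt;

```shell
# Hypothetical rendered manifest; in practice this would be the output of
# `helm template` or `kubectl get ... -o yaml`.
cat > /tmp/deploy.yaml <<'EOF'
containers:
- name: kyverno
  image: ghcr.io/kyverno/kyverno:v1.11.4
EOF
# Flag any container image that uses the mutable :latest tag
if grep -Eq 'image:.*:latest([^a-zA-Z0-9]|$)' /tmp/deploy.yaml; then
  echo "found :latest image tag"
else
  echo "all image tags are pinned"
fi
```

&lt;p&gt;For the manifest above, which pins the image to v1.11.4, the check reports that all image tags are pinned.&lt;/p&gt;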



&lt;p&gt;Updates become easier and less stressful with the Sveltos Dry Run capability. Try it out now!&lt;/p&gt;

&lt;p&gt;👏 Support this project&lt;br&gt;
Every contribution counts! If you enjoyed this article, check out the Projectsveltos GitHub repo. You can star 🌟 the project if you find it helpful.&lt;/p&gt;

&lt;p&gt;The GitHub repo is a great resource for getting started with the project. It contains the code, documentation, and many more examples.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>5-Step Approach: Projectsveltos for Kubernetes Cluster Autoscaler Deployment on Hybrid-Cloud</title>
      <dc:creator>Eleni Grosdouli</dc:creator>
      <pubDate>Fri, 05 Jan 2024 13:24:51 +0000</pubDate>
      <link>https://dev.to/egrosdou/5-step-approach-projectsveltos-for-kubernetes-cluster-autoscaler-deployment-on-a-hybrid-cloud-environment-4kdb</link>
      <guid>https://dev.to/egrosdou/5-step-approach-projectsveltos-for-kubernetes-cluster-autoscaler-deployment-on-a-hybrid-cloud-environment-4kdb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the previous &lt;a href="https://medium.com/@eleni.grosdouli/5-step-approach-projectsveltos-for-kubernetes-add-on-deployment-and-management-on-rke2-be3ba7acb24f" rel="noopener noreferrer"&gt;post&lt;/a&gt;, we talked about &lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;Projectsveltos&lt;/a&gt; and how it can be installed and used to deploy Kubernetes add-ons across different environments, whether on-prem or in the cloud. The example provided was based on the Rancher Kubernetes Engine Government (&lt;a href="https://docs.rke2.io/" rel="noopener noreferrer"&gt;RKE2&lt;/a&gt;) infrastructure in an on-prem environment.&lt;/p&gt;

&lt;p&gt;Today, we will take a slightly different approach and work with a &lt;a href="https://cloud.google.com/learn/what-is-hybrid-cloud" rel="noopener noreferrer"&gt;hybrid cloud&lt;/a&gt; setup. Cluster04 will act as our Sveltos management cluster, and the &lt;a href="https://github.com/kubernetes/autoscaler" rel="noopener noreferrer"&gt;Kubernetes Cluster Autoscaler&lt;/a&gt; will be deployed to a Linode Kubernetes Engine (&lt;a href="https://www.linode.com/docs/api/linode-kubernetes-engine-lke/" rel="noopener noreferrer"&gt;LKE&lt;/a&gt;) cluster with the assistance of Sveltos.&lt;/p&gt;

&lt;h3&gt;
  
  
  Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66jcsk6ds79r4c1j8dn3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66jcsk6ds79r4c1j8dn3.jpg" alt="ArgoCD - Sveltos - Kubernetes Cluster Autoscaler" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;For this demonstration, I have already installed ArgoCD, deployed Sveltos to cluster04 and created an LKE cluster. For the first two, follow step 1 and step 2 from my previous &lt;a href="https://medium.com/@eleni.grosdouli/5-step-approach-projectsveltos-for-kubernetes-add-on-deployment-and-management-on-rke2-be3ba7acb24f" rel="noopener noreferrer"&gt;post&lt;/a&gt;. If you want to learn how to create an LKE cluster, check out the &lt;a href="https://www.linode.com/docs/products/compute/kubernetes/guides/create-cluster/" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; While creating the LKE cluster, the below node pools were created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuyy0qbp9c66it77won7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuyy0qbp9c66it77won7.png" alt="LKE Node Pools" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lab Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- - - - - -+ - - - - - - - - - - + - - - - - - - - - - -+
| Cluster Name | Type | Version |
+ - - - - - - -+ - - - - - - - - - - + - - - - - - - - -+
| cluster04 | Management Cluster | RKE2 v1.26.11+rke2r1 |
| linode-autoscaler | Managed k8s Cluster | k8s v1.27.8 |
+ - - - - - -+ - - - - - - - - - - + - - - - - - - - - -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Register the LKE Cluster with Sveltos
&lt;/h2&gt;

&lt;p&gt;Once the LKE cluster is in a "Running" state, we will use the &lt;a href="https://github.com/projectsveltos/sveltosctl" rel="noopener noreferrer"&gt;sveltosctl&lt;/a&gt; to register it. For the registration, we need three things: a &lt;strong&gt;service account&lt;/strong&gt;, a &lt;strong&gt;kubeconfig&lt;/strong&gt; associated with that account and a &lt;strong&gt;namespace&lt;/strong&gt;. If you are unsure how to create a Service Account and an associated kubeconfig, there is a &lt;a href="https://raw.githubusercontent.com/gianlucam76/scripts/master/get-kubeconfig.sh" rel="noopener noreferrer"&gt;script&lt;/a&gt; publicly available to help you out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Registration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl register cluster &lt;span class="nt"&gt;--namespace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;projectsveltos &lt;span class="nt"&gt;--cluster&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linode-autoscaler &lt;span class="nt"&gt;--kubeconfig&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linode-autoscaler.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltoscluster &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos
NAME                READY   VERSION
cluster01           &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.6+rke2r1
linode-autoscaler   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.27.8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Cluster Labelling
&lt;/h2&gt;

&lt;p&gt;To deploy and manage Kubernetes add-ons with the help of Sveltos, the concepts of the &lt;a href="https://github.com/projectsveltos/sveltos-manager/blob/main/api/v1alpha1/clusterprofile_types.go" rel="noopener noreferrer"&gt;ClusterProfile&lt;/a&gt; and cluster labelling come into play. ClusterProfile is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.&lt;/p&gt;

&lt;p&gt;For this demonstration, we will set the unique label 'env=autoscaler', as we want to differentiate this cluster from the already existing cluster01.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltoscluster &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME                READY   VERSION          LABELS
cluster01           &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.6+rke2r1   &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;,sveltos-agent&lt;span class="o"&gt;=&lt;/span&gt;present
linode-autoscaler   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.27.8          sveltos-agent&lt;span class="o"&gt;=&lt;/span&gt;present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add labels
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl label sveltosclusters linode-autoscaler &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;autoscaler &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get sveltoscluster &lt;span class="nt"&gt;-n&lt;/span&gt; projectsveltos &lt;span class="nt"&gt;--show-labels&lt;/span&gt;
NAME                READY   VERSION          LABELS
cluster01           &lt;span class="nb"&gt;true    &lt;/span&gt;v1.26.6+rke2r1   &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt;,sveltos-agent&lt;span class="o"&gt;=&lt;/span&gt;present
linode-autoscaler   &lt;span class="nb"&gt;true    &lt;/span&gt;v1.27.8          &lt;span class="nb"&gt;env&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;autoscaler,sveltos-agent&lt;span class="o"&gt;=&lt;/span&gt;present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3: Kubernetes Cluster Autoscaler ClusterProfile
&lt;/h2&gt;

&lt;p&gt;The Cluster Autoscaler will be deployed with its Helm chart. Apart from that, we need a Kubernetes Secret to store the cloud-config for the Linode environment.&lt;/p&gt;

&lt;p&gt;The whole deployment process is orchestrated and managed by Sveltos. The Secret containing the required information, alongside the Helm chart, will be deployed through the Sveltos ClusterProfile.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The configuration is done on cluster04. Cluster04 is our Sveltos management cluster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Secret
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-autoscaler-cloud-config&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaler&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;cloud-config&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
    &lt;span class="s"&gt;[global]&lt;/span&gt;
    &lt;span class="s"&gt;linode-token={Your own Linode Token}&lt;/span&gt;
    &lt;span class="s"&gt;lke-cluster-id=147978&lt;/span&gt;
    &lt;span class="s"&gt;defaut-min-size-per-linode-type=1&lt;/span&gt;
    &lt;span class="s"&gt;defaut-max-size-per-linode-type=2&lt;/span&gt;

    &lt;span class="s"&gt;[nodegroup "g6-standard-1"]&lt;/span&gt;
    &lt;span class="s"&gt;min-size=1&lt;/span&gt;
    &lt;span class="s"&gt;max-size=2&lt;/span&gt;

    &lt;span class="s"&gt;[nodegroup "g6-standard-2"]&lt;/span&gt;
    &lt;span class="s"&gt;min-size=1&lt;/span&gt;
    &lt;span class="s"&gt;max-size=2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"linode-token":&lt;/strong&gt; This will get replaced with your own token. To generate a Linode token check the link here.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"lke-cluster-id":&lt;/strong&gt; You can get the ID of the LKE cluster with the following CURL request, curl -H "Authorization: Bearer $TOKEN" &lt;a href="https://api.linode.com/v4/lke/clusters" rel="noopener noreferrer"&gt;https://api.linode.com/v4/lke/clusters&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;"nodegroup":&lt;/strong&gt; As mentioned above, the existing LKE cluster has two node pools. The name of the node pools is defined as the nodegroups in the secret above.&lt;/li&gt;
&lt;/ul&gt;
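&lt;p&gt;To illustrate extracting the cluster ID, here is a small sketch that parses a saved copy of the API response (the JSON below is a hypothetical sample; a live call would use the cURL request from the bullet above):&lt;/p&gt;

```shell
# Hypothetical saved response from GET https://api.linode.com/v4/lke/clusters
cat > /tmp/lke_clusters.json <<'EOF'
{"data":[{"id":147978,"label":"linode-autoscaler","region":"eu-central"}]}
EOF
# Pull out the numeric id field (jq would be cleaner if available)
grep -o '"id":[0-9]*' /tmp/lke_clusters.json | cut -d: -f2
```

&lt;p&gt;For this sample, the command prints 147978, the value used for lke-cluster-id in the Secret above.&lt;/p&gt;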

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The min and max numbers of the nodegroups can be customised based on your needs. As this is a demo environment, I wanted to keep the cost of the deployment as low as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Secret (cluster04)
&lt;/h3&gt;

&lt;p&gt;Before we can reference the Secret in a Sveltos ClusterProfile, we first need to create it locally with the type 'addons.projectsveltos.io/cluster-profile'.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create secret generic cluster-autoscaler-cloud-config &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;secret.yaml &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;addons.projectsveltos.io/cluster-profile

&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get secret
NAME                              TYPE                                       DATA   AGE
cluster-autoscaler-cloud-config   addons.projectsveltos.io/cluster-profile   1      2m25s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create ClusterProfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.projectsveltos.io/v1alpha1&lt;/span&gt;
  &lt;span class="s"&gt;kind&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterProfile&lt;/span&gt;
  &lt;span class="s"&gt;metadata&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-autoscaler&lt;/span&gt;
  &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=autoscaler&lt;/span&gt;
    &lt;span class="na"&gt;syncMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Continuous&lt;/span&gt;
    &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deploymentType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Remote&lt;/span&gt;
      &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cluster-autoscaler-cloud-config&lt;/span&gt;
      &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
    &lt;span class="na"&gt;helmCharts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;chartName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaler/cluster-autoscaler&lt;/span&gt;
      &lt;span class="na"&gt;chartVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v9.34.1&lt;/span&gt;
      &lt;span class="na"&gt;helmChartAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install&lt;/span&gt;
      &lt;span class="na"&gt;releaseName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaler-latest&lt;/span&gt;
      &lt;span class="na"&gt;releaseNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaler&lt;/span&gt;
      &lt;span class="na"&gt;repositoryName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;autoscaler&lt;/span&gt;
      &lt;span class="na"&gt;repositoryURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.github.io/autoscaler&lt;/span&gt;
      &lt;span class="na"&gt;helmChartAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;Install&lt;/span&gt;
      &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;autoDiscovery:&lt;/span&gt;
          &lt;span class="s"&gt;clusterName: linode-autoscaler&lt;/span&gt;
        &lt;span class="s"&gt;cloudProvider: linode&lt;/span&gt;
        &lt;span class="s"&gt;extraVolumeSecrets:&lt;/span&gt;
          &lt;span class="s"&gt;cluster-autoscaler-cloud-config:&lt;/span&gt;
            &lt;span class="s"&gt;mountPath: /config&lt;/span&gt;
            &lt;span class="s"&gt;name: cluster-autoscaler-cloud-config&lt;/span&gt;
        &lt;span class="s"&gt;extraArgs:&lt;/span&gt;
          &lt;span class="s"&gt;logtostderr: true&lt;/span&gt;
          &lt;span class="s"&gt;stderrthreshold: info&lt;/span&gt;
          &lt;span class="s"&gt;v: 2&lt;/span&gt;
          &lt;span class="s"&gt;cloud-config: /config/cloud-config&lt;/span&gt;
        &lt;span class="s"&gt;image:&lt;/span&gt;
          &lt;span class="s"&gt;pullPolicy: IfNotPresent&lt;/span&gt;
          &lt;span class="s"&gt;pullSecrets: []&lt;/span&gt;
          &lt;span class="s"&gt;repository: registry.k8s.io/autoscaling/cluster-autoscaler&lt;/span&gt;
          &lt;span class="s"&gt;tag: v1.28.2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;"policyRefs":&lt;/strong&gt; Will create the secret with the name "cluster-autoscaler-cloud-config" in the cluster with the Sveltos tag set to"env=autoscaler" and in the namespace "autoscaler".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The 'namespace: default' setting refers to the management cluster; it defines where &lt;strong&gt;cluster04&lt;/strong&gt; should look for the secret. In this case, it is the default namespace.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;"helmCharts":&lt;/strong&gt; Install the latest Cluster Autoscaler Helm chart&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;"values":&lt;/strong&gt; Overwrite the default Helm values with the ones required from the Linode cloud provider. More details can be found here.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
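&lt;p&gt;To make the namespace distinction concrete, the "policyRefs" entry from the ClusterProfile above can be read as follows (the comments are explanatory and not part of the Sveltos schema):&lt;/p&gt;

```yaml
policyRefs:
- deploymentType: Remote   # deploy the referenced content in the managed cluster
  kind: Secret
  name: cluster-autoscaler-cloud-config
  namespace: default       # namespace in the *management* cluster where Sveltos finds the Secret
```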

&lt;h3&gt;
  
  
  Apply ClusterProfile (cluster04)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"cluster-autoscaler.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show addons
+----------------------------------+---------------+------------+---------------------------------+---------+-------------------------------+--------------------+
|             CLUSTER              | RESOURCE TYPE | NAMESPACE  |              NAME               | VERSION |             TIME              |  CLUSTER PROFILES  |
+----------------------------------+---------------+------------+---------------------------------+---------+-------------------------------+--------------------+
| projectsveltos/linode-autoscaler | helm chart    | autoscaler | autoscaler-latest               | 9.34.1  | 2024-01-05 12:15:54 +0000 UTC | cluster-autoscaler |
| projectsveltos/linode-autoscaler | :Secret       | autoscaler | cluster-autoscaler-cloud-config | N/A     | 2024-01-05 12:16:03 +0000 UTC | cluster-autoscaler |
+----------------------------------+---------------+------------+---------------------------------+---------+-------------------------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification - linode-autoscaler
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;-n&lt;/span&gt; autoscaler
NAME                                                           READY   STATUS    RESTARTS   AGE
autoscaler-latest-linode-cluster-autoscaler-5fd8bfbb6f-r5zf6   1/1     Running   0          5m25s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Cluster Multi Replica Deployment
&lt;/h2&gt;

&lt;p&gt;We will use Sveltos to create a busybox deployment on the LKE cluster with 500 replicas.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Deployment and Configmap Resource (cluster04)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox-workload&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;RollingUpdate&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
        &lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;sh'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;echo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Demo&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Workload&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;sleep&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;600'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create configmap busybox &lt;span class="nt"&gt;--from-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create ClusterProfile for busybox Deployment (cluster04)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config.projectsveltos.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterProfile&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;clusterSelector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;env=autoscaler&lt;/span&gt;
  &lt;span class="na"&gt;policyRefs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;busybox&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"clusterprofile-busybox-autoscaler.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sveltosctl show addons &lt;span class="nt"&gt;--clusterprofile&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;busybox
+----------------------------------+-----------------+-----------+------------------+---------+-------------------------------+------------------+
|             CLUSTER              |  RESOURCE TYPE  | NAMESPACE |       NAME       | VERSION |             TIME              | CLUSTER PROFILES |
+----------------------------------+-----------------+-----------+------------------+---------+-------------------------------+------------------+
| projectsveltos/linode-autoscaler | apps:Deployment | default   | busybox-workload | N/A     | 2024-01-05 12:25:10 +0000 UTC | busybox          |
+----------------------------------+-----------------+-----------+------------------+---------+-------------------------------+------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As the cluster has only two small nodes, it cannot handle the load of 500 replicas. The autoscaler kicks in and adds the required number of nodes from the available pool.&lt;/p&gt;
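&lt;p&gt;As a rough back-of-the-envelope check (the numbers are illustrative, assuming the default kubelet limit of 110 pods per node), 500 replicas need several nodes' worth of pod capacity, well beyond the two small nodes in the pool:&lt;/p&gt;

```shell
# Illustrative capacity estimate, assuming the kubelet default of 110 pods per node
replicas=500
pods_per_node=110
# Ceiling division: minimum number of nodes needed to schedule all replicas
nodes_needed=$(( (replicas + pods_per_node - 1) / pods_per_node ))
echo "${nodes_needed}"   # prints 5
```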

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubernetes logs autoscaler-latest-linode-cluster-autoscaler-5fd8bfbb6f-r5zf6 &lt;span class="nt"&gt;-n&lt;/span&gt; autoscaler &lt;span class="nt"&gt;-f&lt;/span&gt;

I0103 17:39:49.638775       1 linode_manager.go:84] LKE node group after refresh:
I0103 17:39:49.638804       1 linode_manager.go:86] node group ID g6-standard-1 :&lt;span class="o"&gt;=&lt;/span&gt; min: 1, max: 2, LKEClusterID: 148988, poolOpts: &lt;span class="o"&gt;{&lt;/span&gt;Count:1 Type:g6-standard-1 Disks:[]&lt;span class="o"&gt;}&lt;/span&gt;, associated LKE pools: &lt;span class="o"&gt;{&lt;/span&gt; poolID: 218632, count: 1, &lt;span class="nb"&gt;type&lt;/span&gt;: g6-standard-1, associated linodes: &lt;span class="o"&gt;[&lt;/span&gt;ID: &lt;span class="s2"&gt;"218632-48499cbb0000"&lt;/span&gt;, instanceID: 53677574] &lt;span class="o"&gt;}&lt;/span&gt;
I0103 17:39:49.638826       1 linode_manager.go:86] node group ID g6-standard-2 :&lt;span class="o"&gt;=&lt;/span&gt; min: 1, max: 2, LKEClusterID: 148988, poolOpts: &lt;span class="o"&gt;{&lt;/span&gt;Count:1 Type:g6-standard-2 Disks:[]&lt;span class="o"&gt;}&lt;/span&gt;, associated LKE pools: &lt;span class="o"&gt;{&lt;/span&gt; poolID: 218633, count: 1, &lt;span class="nb"&gt;type&lt;/span&gt;: g6-standard-2, associated linodes: &lt;span class="o"&gt;[&lt;/span&gt;ID: &lt;span class="s2"&gt;"218633-4d4d521d0000"&lt;/span&gt;, instanceID: 53677576] &lt;span class="o"&gt;}&lt;/span&gt;
I0103 17:39:49.639930       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-f558s is unschedulable
I0103 17:39:49.639939       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-rfhqp is unschedulable
I0103 17:39:49.639943       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-wsfcw is unschedulable
I0103 17:39:49.639946       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-25s5c is unschedulable
I0103 17:39:49.639948       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-r7fv4 is unschedulable
I0103 17:39:49.639951       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-nmbxc is unschedulable
I0103 17:39:49.639954       1 klogx.go:87] Pod default/busybox-workload-5d94965f98-dbrh4 is unschedulable
E0103 17:39:49.640116       1 orchestrator.go:450] Couldn&lt;span class="s1"&gt;'t get autoscaling options for ng: g6-standard-1
E0103 17:39:49.640137       1 orchestrator.go:450] Couldn'&lt;/span&gt;t get autoscaling options &lt;span class="k"&gt;for &lt;/span&gt;ng: g6-standard-2
I0103 17:39:49.640587       1 orchestrator.go:185] Best option to resize: g6-standard-1
I0103 17:39:49.641940       1 orchestrator.go:189] Estimated 1 nodes needed &lt;span class="k"&gt;in &lt;/span&gt;g6-standard-1
I0103 17:39:49.641961       1 orchestrator.go:295] Final scale-up plan: &lt;span class="o"&gt;[{&lt;/span&gt;g6-standard-1 1-&amp;gt;2 &lt;span class="o"&gt;(&lt;/span&gt;max: 2&lt;span class="o"&gt;)}]&lt;/span&gt;
I0103 17:39:49.641972       1 executor.go:147] Scale-up: setting group g6-standard-1 size to 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu220a29vktuyjh1iddgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu220a29vktuyjh1iddgn.png" alt="Linode UI - Provisioning Nodes" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Undeploy Busybox
&lt;/h2&gt;

&lt;p&gt;Finally, once the tests are complete, it is time to remove the busybox deployment from the cluster and allow a good 10 minutes for the autoscaler to scale down the unneeded nodes. For the undeploy process, we only need to delete the busybox ClusterProfile.&lt;/p&gt;

&lt;h3&gt;
  
  
  Undeploy (LKE)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"clusterprofile-busybox-autoscaler.yaml"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubernetes logs autoscaler-latest-linode-cluster-autoscaler-5fd8bfbb6f-r5zf6 &lt;span class="nt"&gt;-n&lt;/span&gt; autoscaler &lt;span class="nt"&gt;-f&lt;/span&gt;

I0103 18:02:27.179597       1 linode_manager.go:84] LKE node group after refresh:
I0103 18:02:27.179813       1 linode_manager.go:86] node group ID g6-standard-1 :&lt;span class="o"&gt;=&lt;/span&gt; min: 1, max: 2, LKEClusterID: 148988, poolOpts: &lt;span class="o"&gt;{&lt;/span&gt;Count:1 Type:g6-standard-1 Disks:[]&lt;span class="o"&gt;}&lt;/span&gt;, associated LKE pools: &lt;span class="o"&gt;{&lt;/span&gt; poolID: 218632, count: 1, &lt;span class="nb"&gt;type&lt;/span&gt;: g6-standard-1, associated linodes: &lt;span class="o"&gt;[&lt;/span&gt;ID: &lt;span class="s2"&gt;"218632-48499cbb0000"&lt;/span&gt;, instanceID: 53677574] &lt;span class="o"&gt;}&lt;/span&gt;
I0103 18:02:27.179840       1 linode_manager.go:86] node group ID g6-standard-2 :&lt;span class="o"&gt;=&lt;/span&gt; min: 1, max: 2, LKEClusterID: 148988, poolOpts: &lt;span class="o"&gt;{&lt;/span&gt;Count:1 Type:g6-standard-2 Disks:[]&lt;span class="o"&gt;}&lt;/span&gt;, associated LKE pools: &lt;span class="o"&gt;{&lt;/span&gt; poolID: 218633, count: 1, &lt;span class="nb"&gt;type&lt;/span&gt;: g6-standard-2, associated linodes: &lt;span class="o"&gt;[&lt;/span&gt;ID: &lt;span class="s2"&gt;"218633-4d4d521d0000"&lt;/span&gt;, instanceID: 53677576] &lt;span class="o"&gt;}&lt;/span&gt;
I0103 18:02:27.181433       1 static_autoscaler.go:547] No unschedulable pods
I0103 18:02:27.181449       1 pre_filtering_processor.go:67] Skipping lke148988-218632-48499cbb0000 - node group min size reached &lt;span class="o"&gt;(&lt;/span&gt;current: 1, min: 1&lt;span class="o"&gt;)&lt;/span&gt;
I0103 18:02:27.181623       1 pre_filtering_processor.go:67] Skipping lke148988-218633-4d4d521d0000 - node group min size reached &lt;span class="o"&gt;(&lt;/span&gt;current: 1, min: 1&lt;span class="o"&gt;)&lt;/span&gt;
I0103 18:02:36.040964       1 node_instances_cache.go:156] Start refreshing cloud provider node instances cache
I0103 18:02:36.040995       1 node_instances_cache.go:168] Refresh cloud provider node instances cache finished, refresh took 12.52µs
I0103 18:02:38.019810       1 linode_manager.go:84] LKE node group after refresh:
I0103 18:02:38.019983       1 linode_manager.go:86] node group ID g6-standard-2 :&lt;span class="o"&gt;=&lt;/span&gt; min: 1, max: 2, LKEClusterID: 148988, poolOpts: &lt;span class="o"&gt;{&lt;/span&gt;Count:1 Type:g6-standard-2 Disks:[]&lt;span class="o"&gt;}&lt;/span&gt;, associated LKE pools: &lt;span class="o"&gt;{&lt;/span&gt; poolID: 218633, count: 1, &lt;span class="nb"&gt;type&lt;/span&gt;: g6-standard-2, associated linodes: &lt;span class="o"&gt;[&lt;/span&gt;ID: &lt;span class="s2"&gt;"218633-4d4d521d0000"&lt;/span&gt;, instanceID: 53677576] &lt;span class="o"&gt;}&lt;/span&gt;
I0103 18:02:38.020015       1 linode_manager.go:86] node group ID g6-standard-1 :&lt;span class="o"&gt;=&lt;/span&gt; min: 1, max: 2, LKEClusterID: 148988, poolOpts: &lt;span class="o"&gt;{&lt;/span&gt;Count:1 Type:g6-standard-1 Disks:[]&lt;span class="o"&gt;}&lt;/span&gt;, associated LKE pools: &lt;span class="o"&gt;{&lt;/span&gt; poolID: 218632, count: 1, &lt;span class="nb"&gt;type&lt;/span&gt;: g6-standard-1, associated linodes: &lt;span class="o"&gt;[&lt;/span&gt;ID: &lt;span class="s2"&gt;"218632-48499cbb0000"&lt;/span&gt;, instanceID: 53677574] &lt;span class="o"&gt;}&lt;/span&gt;
I0103 18:02:38.021154       1 static_autoscaler.go:547] No unschedulable pods
I0103 18:02:38.021181       1 pre_filtering_processor.go:67] Skipping lke148988-218632-48499cbb0000 - node group min size reached &lt;span class="o"&gt;(&lt;/span&gt;current: 1, min: 1&lt;span class="o"&gt;)&lt;/span&gt;
I0103 18:02:38.021187       1 pre_filtering_processor.go:67] Skipping lke148988-218633-4d4d521d0000 - node group min size reached &lt;span class="o"&gt;(&lt;/span&gt;current: 1, min: 1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffiwiol1gofxuy9dz1rjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffiwiol1gofxuy9dz1rjh.png" alt="Linode UI - Undeploy" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As expected, Sveltos took care of the complete lifecycle of the different Kubernetes deployments in a simple and straightforward manner!&lt;/p&gt;

&lt;p&gt;👏 Support this project&lt;br&gt;
Every contribution counts! If you enjoyed this article, check out the Projectsveltos &lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repo. You can &lt;a href="https://github.com/projectsveltos/addon-controller" rel="noopener noreferrer"&gt;star 🌟 the project&lt;/a&gt; if you find it helpful.&lt;/p&gt;

&lt;p&gt;The GitHub repo is a great resource for getting started with the project. It contains the code, documentation, and many more examples.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>tutorial</category>
      <category>management</category>
    </item>
    <item>
      <title>Install ArgoCD on RKE2 with Nginx Ingress Controller</title>
      <dc:creator>Eleni Grosdouli</dc:creator>
      <pubDate>Thu, 28 Dec 2023 10:03:05 +0000</pubDate>
      <link>https://dev.to/egrosdou/install-argocd-on-rke2-with-nginx-ingress-controller-28fl</link>
      <guid>https://dev.to/egrosdou/install-argocd-on-rke2-with-nginx-ingress-controller-28fl</guid>
      <description>&lt;p&gt;When you start working with &lt;a href="https://argo-cd.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt;, and with Kubernetes in general, it is not clear what configuration to use to install ArgoCD on an &lt;a href="https://docs.rke2.io/" rel="noopener noreferrer"&gt;RKE2&lt;/a&gt; cluster with the Nginx Controller integrated. ArgoCD is a Kubernetes continuous delivery tool based on the GitOps principles. It can be used as a standalone installation or as part of a CI/CD workflow.&lt;/p&gt;

&lt;p&gt;The blog post aims to provide readers with a step-by-step approach to install ArgoCD as a standalone installation, create an Ingress Kubernetes resource and access the ArgoCD UI locally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- - - - - -+ - - - - - - - - - - - + - - - - - - - - - - -+
| Cluster Name |      Type         |      Version         |
+ - - - - - - -+ - - - - - - - - - - + - - - - - - - - - -+
| cluster04 | Management Cluster   | RKE2 v1.26.11+rke2r1 |
+ - - - - - -+ - - - - - - - - - + - - - - - - - - - - - -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- - - - - - - + - - - -+
| Deployment | Version |
+ - - - - - - + - - - -+
| ArgoCD     | v2.9.3 |
| Rancher    | v2.7.9 |
+ - - - - - - + - - - -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Install ArgoCD
&lt;/h2&gt;

&lt;p&gt;Going through the official documentation, there are two ways to install ArgoCD on a cluster, either via the Helm chart or via the manifest files. In our case, we will follow the official "Getting Started" guide found &lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noopener noreferrer"&gt;here&lt;/a&gt; and we will use the manifests approach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl create namespace argocd
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above will create the "argocd" Kubernetes namespace and deploy the latest stable manifest. If you would like to install a specific manifest, have a look &lt;a href="https://github.com/argoproj/argo-cd/releases" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
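&lt;p&gt;To pin a specific release instead of "stable", point kubectl at a tagged manifest URL (the version below matches the lab setup; pick any tag from the releases page):&lt;/p&gt;

```shell
# Build the manifest URL for a pinned ArgoCD release (example version)
ARGOCD_VERSION="v2.9.3"
MANIFEST_URL="https://raw.githubusercontent.com/argoproj/argo-cd/${ARGOCD_VERSION}/manifests/install.yaml"
echo "${MANIFEST_URL}"
# kubectl apply -n argocd -f "${MANIFEST_URL}"   # uncomment to apply
```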

&lt;h3&gt;
  
  
  Validate
&lt;/h3&gt;

&lt;p&gt;Let's validate that all the ArgoCD Kubernetes resources are in a "Running" state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get all &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          15m
pod/argocd-applicationset-controller-5877955b59-2j8fj   1/1     Running   0          15m
pod/argocd-dex-server-6c87968c75-rdnck                  1/1     Running   0          15m
pod/argocd-notifications-controller-64bb8dcf46-6tgnd    1/1     Running   0          15m
pod/argocd-redis-7d8d46cc7f-j5mgj                       1/1     Running   0          15m
pod/argocd-repo-server-665d6b7b59-5qmhs                 1/1     Running   0          15m
pod/argocd-server-7bccc77dd8-v5j2s                      1/1     Running   0          2m52s

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/argocd-applicationset-controller          ClusterIP      10.43.110.123   &amp;lt;none&amp;gt;        7000/TCP,8080/TCP            15m
service/argocd-dex-server                         ClusterIP      10.43.62.176    &amp;lt;none&amp;gt;        5556/TCP,5557/TCP,5558/TCP   15m
service/argocd-metrics                            ClusterIP      10.43.76.103    &amp;lt;none&amp;gt;        8082/TCP                     15m
service/argocd-notifications-controller-metrics   ClusterIP      10.43.129.111   &amp;lt;none&amp;gt;        9001/TCP                     15m
service/argocd-redis                              ClusterIP      10.43.34.24     &amp;lt;none&amp;gt;        6379/TCP                     15m
service/argocd-repo-server                        ClusterIP      10.43.71.49     &amp;lt;none&amp;gt;        8081/TCP,8084/TCP            15m
service/argocd-server                             LoadBalancer   10.43.4.100     x.x.x.x     80:31274/TCP,443:31258/TCP     15m
service/argocd-server-metrics                     ClusterIP      10.43.219.7     &amp;lt;none&amp;gt;        8083/TCP                     15m

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/argocd-applicationset-controller   1/1     1            1           15m
deployment.apps/argocd-dex-server                  1/1     1            1           15m
deployment.apps/argocd-notifications-controller    1/1     1            1           15m
deployment.apps/argocd-redis                       1/1     1            1           15m
deployment.apps/argocd-repo-server                 1/1     1            1           15m
deployment.apps/argocd-server                      1/1     1            1           15m

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/argocd-applicationset-controller-5877955b59   1         1         1       15m
replicaset.apps/argocd-dex-server-6c87968c75                  1         1         1       15m
replicaset.apps/argocd-notifications-controller-64bb8dcf46    1         1         1       15m
replicaset.apps/argocd-redis-7d8d46cc7f                       1         1         1       15m
replicaset.apps/argocd-repo-server-665d6b7b59                 1         1         1       15m
replicaset.apps/argocd-server-5986f74c99                      0         0         0       15m
replicaset.apps/argocd-server-7bccc77dd8                      1         1         1       2m52s

NAME                                             READY   AGE
statefulset.apps/argocd-application-controller   1/1     15m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Create an Ingress
&lt;/h2&gt;

&lt;p&gt;An Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. An SSL-Passthrough Ingress example can be found in the official documentation &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#kubernetesingress-nginx" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's have a look at the most important configurations performed on the example file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  1 apiVersion: networking.k8s.io/v1
  2 kind: Ingress
  3 metadata:
  4   name: argocd-server-ingress
  5   namespace: argocd
  6   annotations:
  7     nginx.ingress.kubernetes.io/force-ssl-redirect: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  8     nginx.ingress.kubernetes.io/ssl-passthrough: &lt;span class="s2"&gt;"true"&lt;/span&gt;
  9 spec:
 10   ingressClassName: nginx
 11   rules:
 12   - host: argocd-cluster04.&lt;span class="o"&gt;{&lt;/span&gt;YOUR DOMAIN&lt;span class="o"&gt;}&lt;/span&gt;
 13     http:
 14       paths:
 15       - path: /
 16         pathType: Prefix
 17         backend:
 18           service:
 19             name: argocd-server
 20             port:
 21               name: http
 22   tls:
 23   - hosts:
 24     - argocd-cluster04.&lt;span class="o"&gt;{&lt;/span&gt;YOUR DOMAIN&lt;span class="o"&gt;}&lt;/span&gt;
 25     secretName: argocd-server-tls &lt;span class="c"&gt;# as expected by argocd-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Line 4:&lt;/strong&gt; The Ingress name is defined as "argocd-server-ingress". The name can be anything you want.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 5:&lt;/strong&gt; The ArgoCD resources in Step 1 were created in the "argocd" namespace. If this is the case for your deployment, keep the Ingress resource in the same namespace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 6–8:&lt;/strong&gt; The annotations expose the ArgoCD API server through a single Ingress rule and hostname.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The "nginx.ingress.kubernetes.io/ssl-passthrough" annotation, is used to terminate SSL/TLS traffic at the ArgoCD API server instead of the Nginx Ingress Controller&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The "nginx.ingress.kubernetes.io/force-ssl-redirect: "true"" annotation tells the Nginx Ingress Controller to automatically redirect HTTP requests to HTTPS&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the "ssl-passthrough" annotation to be functional, we need to add the argument "--enable-ssl-passthrough" to the Nginx Ingress Controller Daemonset. The Daemonset name on an RKE2 installation is "rke2-ingress-nginx-controller".&lt;/p&gt;

&lt;h3&gt;
  
  
  Update the Nginx Ingress Controller Daemonset
&lt;/h3&gt;

&lt;h4&gt;
  
  
  First Option: Edit the Daemonset
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl edit daemonset rke2-ingress-nginx-controller &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system

  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: rke2-ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rke2-ingress-nginx
        app.kubernetes.io/part-of: rke2-ingress-nginx
        app.kubernetes.io/version: 1.9.3
        helm.sh/chart: rke2-ingress-nginx-4.8.200
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - &lt;span class="nt"&gt;--election-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rke2-ingress-nginx-leader
        - &lt;span class="nt"&gt;--controller-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s.io/ingress-nginx
        - &lt;span class="nt"&gt;--ingress-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
        - &lt;span class="nt"&gt;--configmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;POD_NAMESPACE&lt;span class="si"&gt;)&lt;/span&gt;/rke2-ingress-nginx-controller
        - &lt;span class="nt"&gt;--validating-webhook&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;:8443
        - &lt;span class="nt"&gt;--validating-webhook-certificate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/certificates/cert
        - &lt;span class="nt"&gt;--validating-webhook-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/certificates/key
        - &lt;span class="nt"&gt;--watch-ingress-without-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the argument "--enable-ssl-passthrough" at the end of the argument list. The result should look like the output below.&lt;/p&gt;

&lt;h4&gt;
  
  
  Output
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: rke2-ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rke2-ingress-nginx
        app.kubernetes.io/part-of: rke2-ingress-nginx
        app.kubernetes.io/version: 1.9.3
        helm.sh/chart: rke2-ingress-nginx-4.8.200
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - &lt;span class="nt"&gt;--election-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rke2-ingress-nginx-leader
        - &lt;span class="nt"&gt;--controller-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s.io/ingress-nginx
        - &lt;span class="nt"&gt;--ingress-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
        - &lt;span class="nt"&gt;--configmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;POD_NAMESPACE&lt;span class="si"&gt;)&lt;/span&gt;/rke2-ingress-nginx-controller
        - &lt;span class="nt"&gt;--validating-webhook&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;:8443
        - &lt;span class="nt"&gt;--validating-webhook-certificate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/certificates/cert
        - &lt;span class="nt"&gt;--validating-webhook-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/certificates/key
        - &lt;span class="nt"&gt;--watch-ingress-without-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
        - &lt;span class="nt"&gt;--enable-ssl-passthrough&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Second Option: Patch the Daemonset
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl patch daemonset rke2-ingress-nginx-controller &lt;span class="nt"&gt;-n&lt;/span&gt; kube-system &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'json'&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s1"&gt;'[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--enable-ssl-passthrough"}]'&lt;/span&gt;

  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: rke2-ingress-nginx
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: rke2-ingress-nginx
        app.kubernetes.io/part-of: rke2-ingress-nginx
        app.kubernetes.io/version: 1.9.3
        helm.sh/chart: rke2-ingress-nginx-4.8.200
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - &lt;span class="nt"&gt;--election-id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;rke2-ingress-nginx-leader
        - &lt;span class="nt"&gt;--controller-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;k8s.io/ingress-nginx
        - &lt;span class="nt"&gt;--ingress-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nginx
        - &lt;span class="nt"&gt;--configmap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;POD_NAMESPACE&lt;span class="si"&gt;)&lt;/span&gt;/rke2-ingress-nginx-controller
        - &lt;span class="nt"&gt;--validating-webhook&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;:8443
        - &lt;span class="nt"&gt;--validating-webhook-certificate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/certificates/cert
        - &lt;span class="nt"&gt;--validating-webhook-key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/local/certificates/key
        - &lt;span class="nt"&gt;--watch-ingress-without-class&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
        - &lt;span class="nt"&gt;--enable-ssl-passthrough&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Line 12:&lt;/strong&gt; Back in the Ingress example, set the FQDN of your application. Any name works as long as your Domain Name System (DNS) can resolve it; of course, you will need to own a domain for that purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 22–25:&lt;/strong&gt; The "argocd-server-tls" secret is a new self-signed certificate generated with the OpenSSL utility.&lt;/p&gt;
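&lt;p&gt;A minimal sketch of generating that secret, assuming a test environment where a self-signed certificate is acceptable; the FQDN and file names below are placeholders:&lt;/p&gt;

```shell
# Hypothetical FQDN; replace it with your own domain.
FQDN="argocd-cluster04.example.com"

# Generate a self-signed certificate and private key, valid for one year.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout argocd-server.key -out argocd-server.crt \
  -subj "/CN=${FQDN}" -addext "subjectAltName=DNS:${FQDN}"

# Store the pair as the TLS secret expected by the argocd-server
# (requires access to the cluster):
#   kubectl create secret tls argocd-server-tls \
#     --cert=argocd-server.crt --key=argocd-server.key -n argocd
```

&lt;p&gt;The "-addext" flag requires OpenSSL 1.1.1 or newer; it adds the Subject Alternative Name that modern browsers validate against.&lt;/p&gt;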

&lt;h2&gt;
  
  
  Step 3: Validate the ArgoCD Deployment
&lt;/h2&gt;

&lt;p&gt;In the first two steps, we installed all the Kubernetes resources ArgoCD needs to be functional and created an Ingress resource that works with the Nginx Ingress Controller to allow SSL passthrough. Now comes the big moment: checking whether the deployment actually works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Validate
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods,svc,secret &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/argocd-application-controller-0                     1/1     Running   0          14m
pod/argocd-applicationset-controller-5877955b59-2j8fj   1/1     Running   0          14m
pod/argocd-dex-server-6c87968c75-rdnck                  1/1     Running   0          14m
pod/argocd-notifications-controller-64bb8dcf46-6tgnd    1/1     Running   0          14m
pod/argocd-redis-7d8d46cc7f-j5mgj                       1/1     Running   0          14m
pod/argocd-repo-server-665d6b7b59-5qmhs                 1/1     Running   0          14m
pod/argocd-server-7bccc77dd8-v5j2s                      1/1     Running   0          103s

NAME                                              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT&lt;span class="o"&gt;(&lt;/span&gt;S&lt;span class="o"&gt;)&lt;/span&gt;                      AGE
service/argocd-applicationset-controller          ClusterIP      10.43.110.123   &amp;lt;none&amp;gt;        7000/TCP,8080/TCP            14m
service/argocd-dex-server                         ClusterIP      10.43.62.176    &amp;lt;none&amp;gt;        5556/TCP,5557/TCP,5558/TCP   14m
service/argocd-metrics                            ClusterIP      10.43.76.103    &amp;lt;none&amp;gt;        8082/TCP                     14m
service/argocd-notifications-controller-metrics   ClusterIP      10.43.129.111   &amp;lt;none&amp;gt;        9001/TCP                     14m
service/argocd-redis                              ClusterIP      10.43.34.24     &amp;lt;none&amp;gt;        6379/TCP                     14m
service/argocd-repo-server                        ClusterIP      10.43.71.49     &amp;lt;none&amp;gt;        8081/TCP,8084/TCP            14m
service/argocd-server                             LoadBalancer   10.43.4.100     x.x.x.x     80:31274/TCP,443:31258/TCP     14m
service/argocd-server-metrics                     ClusterIP      10.43.219.7     &amp;lt;none&amp;gt;        8083/TCP                     14m

NAME                                 TYPE                DATA   AGE
secret/argocd-initial-admin-secret   Opaque              1      13m
secret/argocd-notifications-secret   Opaque              0      14m
secret/argocd-secret                 Opaque              3      14m
secret/argocd-server-tls             kubernetes.io/tls   2      105s
&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get ingress &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
NAME                    CLASS   HOSTS                                     ADDRESS                                                                     PORTS     AGE
argocd-server-ingress   nginx   argocd-cluster04.&lt;span class="o"&gt;{&lt;/span&gt;YOUR DOMAIN&lt;span class="o"&gt;}&lt;/span&gt;   cluster04-controller-1,cluster04-controller-2,cluster04-worker-1,cluster04-worker-2  80, 443   4m57s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Access the ArgoCD UI
&lt;/h2&gt;

&lt;p&gt;As long as your DNS is correctly set up, you will be able to resolve the FQDN defined during the Ingress configuration. Keep in mind that the ArgoCD deployment is exposed via the HTTPS protocol over port 443.&lt;/p&gt;

&lt;p&gt;If you do not control the DNS deployment and want to perform local testing, on Linux and macOS-based systems modify the &lt;strong&gt;"/etc/hosts"&lt;/strong&gt; file and add the IP address of a Kubernetes worker node followed by the FQDN. On Windows-based systems, modify the &lt;strong&gt;"C:\Windows\System32\Drivers\etc\hosts"&lt;/strong&gt; file.&lt;/p&gt;
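&lt;p&gt;A hosts-file entry is simply the node IP followed by the FQDN. The sketch below uses a hypothetical IP address and domain and writes to a scratch file; append the same line to the real hosts file with elevated privileges:&lt;/p&gt;

```shell
# Hypothetical worker-node IP and FQDN; replace with your own values.
echo "192.168.1.50 argocd-cluster04.example.com" >> hosts.local

# On Linux/macOS the real file is /etc/hosts (append with sudo):
#   echo "192.168.1.50 argocd-cluster04.example.com" | sudo tee -a /etc/hosts

cat hosts.local
```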

&lt;p&gt;&lt;strong&gt;URL:&lt;/strong&gt; https://argocd-cluster04.{YOUR DOMAIN}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you use a self-signed certificate, your preferred browser will warn about an untrusted connection to the server. As this is our test environment, you can skip the verification and proceed to the ArgoCD login page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmth023l7vkt6wr8tkv1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmth023l7vkt6wr8tkv1.png" alt="ArgoCD UI Login" width="800" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>rke2</category>
      <category>devops</category>
    </item>
    <item>
      <title>5-Step Approach: Projectsveltos for Kubernetes add-on deployment and management on RKE2</title>
      <dc:creator>Eleni Grosdouli</dc:creator>
      <pubDate>Mon, 18 Dec 2023 12:40:57 +0000</pubDate>
      <link>https://dev.to/egrosdou/5-step-approach-projectsveltos-for-kubernetes-add-on-deployment-and-management-on-rke2-35gm</link>
      <guid>https://dev.to/egrosdou/5-step-approach-projectsveltos-for-kubernetes-add-on-deployment-and-management-on-rke2-35gm</guid>
      <description>&lt;p&gt;Working with many different Kubernetes add-on deployments, the actual deployment and management of those across different clusters, on-prem and in the Cloud, can be challenging and sometimes frustrating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;Projectsveltos&lt;/a&gt; is a Kubernetes add-on controller that simplifies the deployment and management of different add-ons and applications across multiple clusters (on-prem, Cloud). Sveltos runs in a management cluster and programmatically deploys and manages add-ons and applications on any cluster in the fleet, including the management cluster itself. Sveltos supports many add-on formats, including Helm charts, raw YAML/JSON, Kustomize, Carvel ytt, and Jsonnet.&lt;/p&gt;

&lt;p&gt;In this blog post, we will demonstrate how easy and fast it is to deploy Sveltos on an &lt;a href="https://docs.rke2.io/" rel="noopener noreferrer"&gt;RKE2&lt;/a&gt; cluster with the help of &lt;a href="https://github.com/argoproj/argo-cd" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt;, register two RKE2 Cluster API (&lt;a href="https://github.com/kubernetes-sigs/cluster-api" rel="noopener noreferrer"&gt;CAPI&lt;/a&gt;) clusters, and create a ClusterProfile to deploy the Prometheus and Grafana Helm charts down to the managed CAPI clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bl643bnnwzxj7821f38.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bl643bnnwzxj7821f38.jpg" alt="Projectsveltos Demo Diagram" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;For this demonstration, I have already installed ArgoCD on a central cluster. If you would like to learn more about the ArgoCD installation, go through the official documentation found &lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. If you would like to follow along, below you can find the lab details used.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- - - - - -+ - - - - - - - - - - - + - - - - - - - - - - -+
| Cluster Name |      Type         |      Version         |
+ - - - - - - -+ - - - - - - - - - - + - - - - - - - - - -+
| cluster04 | Management Cluster   | RKE2 v1.26.11+rke2r1 |
| cluster12 | Managed CAPI Cluster | RKE2 v1.26.6+rke2r1  |
| cluster13 | Managed CAPI Cluster | RKE2 v1.26.6+rke2r1  |
+ - - - - - -+ - - - - - - - - - + - - - - - - - - - - - -+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 1: Deploy Sveltos as a Helm Chart on cluster04
&lt;/h2&gt;

&lt;p&gt;Sveltos can be deployed either as a &lt;strong&gt;manifest&lt;/strong&gt; or as a &lt;strong&gt;Helm chart&lt;/strong&gt;. For more information about the different installation options, check the link &lt;a href="https://projectsveltos.github.io/sveltos/install/install/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. In my case, I chose to follow the &lt;strong&gt;GitOps&lt;/strong&gt; approach and let ArgoCD compare and synchronise the Git repository, where the code to deploy Sveltos is stored, with the actual running application.&lt;/p&gt;

&lt;p&gt;If you are unsure how to deploy Helm charts with ArgoCD, have a look &lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/helm/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;

&lt;p&gt;After we deploy Sveltos, we want to ensure everything is in a working and fully functional state. This can be achieved either from the ArgoCD UI or from the management cluster itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09zc1946oqzo2qm6vyf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09zc1946oqzo2qm6vyf5.png" alt="ArgoCD - Sveltos Helm Chart Deployment" width="800" height="229"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n projectsveltos

NAME                                        READY   STATUS    RESTARTS   AGE
access-manager-77c7c64477-ns8ml             2/2     Running   0          70s
addon-compliance-manager-7f449d884c-6kgqr   2/2     Running   0          69s
addon-controller-55d7d848ff-ps8l8           2/2     Running   0          70s
classifier-manager-67d6f67d5b-cgpr7         2/2     Running   0          70s
event-manager-69db45b65d-htz5l              2/2     Running   0          70s
hc-manager-5679c69dcc-z6s48                 2/2     Running   0          70s
sc-manager-84dbd64fb4-6hwpf                 2/2     Running   0          70s
shard-controller-56678bcf8c-zjbvc           2/2     Running   0          70s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Install the Sveltosctl
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/projectsveltos/sveltosctl" rel="noopener noreferrer"&gt;Sveltosctl&lt;/a&gt; is the command-line interface (CLI) for Sveltos. It can be used to query Sveltos resources and is available as a &lt;a href="https://projectsveltos.github.io/sveltos/install/sveltosctl/" rel="noopener noreferrer"&gt;Kubernetes pod&lt;/a&gt; or as a &lt;a href="https://github.com/projectsveltos/sveltosctl/releases" rel="noopener noreferrer"&gt;binary&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As I would like to register cluster12 and cluster13 with the Sveltos management cluster, the sveltosctl binary will be used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Register CAPI Clusters with Sveltos
&lt;/h2&gt;

&lt;p&gt;To register any cluster with Sveltos, you only need three things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A ServiceAccount for Sveltos and a kubeconfig associated with that account;&lt;/li&gt;
&lt;li&gt;A namespace where you want to register the external cluster;&lt;/li&gt;
&lt;li&gt;Point &lt;strong&gt;sveltosctl&lt;/strong&gt; at the &lt;strong&gt;management cluster&lt;/strong&gt; and run the &lt;em&gt;'sveltosctl register cluster'&lt;/em&gt; command.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, if you are unsure how to create a Service Account and an associated kubeconfig, do not worry. There is a script publicly available to create everything you need automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Registration
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sveltosctl register cluster --namespace=projectsveltos --cluster=cluster12 --kubeconfig=cluster12.yaml

$ sveltosctl register cluster --namespace=projectsveltos --cluster=cluster13 --kubeconfig=cluster13.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the commands above, we register cluster12 and cluster13 in the projectsveltos namespace. Of course, you can register the clusters in a namespace of your preference.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get sveltosclusters -n projectsveltos

NAME        READY   VERSION
cluster12   true    v1.26.6+rke2r1
cluster13   true    v1.26.6+rke2r1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4: Cluster Labelling
&lt;/h2&gt;

&lt;p&gt;To allow Sveltos to deploy and manage Kubernetes add-ons, the concept of a &lt;a href="https://github.com/projectsveltos/sveltos-manager/blob/main/api/v1alpha1/clusterprofile_types.go" rel="noopener noreferrer"&gt;ClusterProfile&lt;/a&gt; and cluster labelling comes into play. ClusterProfile is the &lt;em&gt;CustomResourceDefinition&lt;/em&gt; used to instruct Sveltos which add-ons to deploy on a set of clusters.&lt;/p&gt;

&lt;p&gt;For this demonstration, we will set the label "env=prod" on both Sveltos clusters. The commands below are executed on the &lt;strong&gt;management cluster&lt;/strong&gt; (cluster04).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get sveltosclusters -n projectsveltos --show-labels

NAME        READY   VERSION          LABELS
cluster12   true    v1.26.6+rke2r1   sveltos-agent=present
cluster13   true    v1.26.6+rke2r1   sveltos-agent=present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl label sveltosclusters cluster12 env=prod -n projectsveltos

$ kubectl label sveltosclusters cluster13 env=prod -n projectsveltos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get sveltosclusters -n projectsveltos --show-labels

NAME        READY   VERSION          LABELS
cluster12   true    v1.26.6+rke2r1   env=prod,sveltos-agent=present
cluster13   true    v1.26.6+rke2r1   env=prod,sveltos-agent=present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: ClusterProfile for Grafana and Prometheus
&lt;/h2&gt;

&lt;p&gt;The ClusterProfile below is an example of a Helm chart deployment of Prometheus and Grafana to Sveltos clusters with the label &lt;em&gt;"env=prod"&lt;/em&gt; set.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: prometheus-grafana
spec:
  clusterSelector: env=prod
  helmCharts:
  - repositoryURL:    https://prometheus-community.github.io/helm-charts
    repositoryName:   prometheus-community
    chartName:        prometheus-community/prometheus
    chartVersion:     23.4.0
    releaseName:      prometheus
    releaseNamespace: prometheus
    helmChartAction:  Install
  - repositoryURL:    https://grafana.github.io/helm-charts
    repositoryName:   grafana
    chartName:        grafana/grafana
    chartVersion:     6.58.9
    releaseName:      grafana
    releaseNamespace: grafana
    helmChartAction:  Install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Apply the ClusterProfile
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl apply -f "grafana_prometheus.yaml"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the ClusterProfile is applied to the &lt;strong&gt;management cluster&lt;/strong&gt;, the expected result is that the Grafana and Prometheus deployments appear on both managed clusters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sveltosctl show addons

+--------------------------+---------------+------------+------------+---------+-------------------------------+--------------------+
|         CLUSTER          | RESOURCE TYPE | NAMESPACE  |    NAME    | VERSION |             TIME              |  CLUSTER PROFILES  |
+--------------------------+---------------+------------+------------+---------+-------------------------------+--------------------+
| projectsveltos/cluster12 | helm chart    | prometheus | prometheus | 23.4.0  | 2023-12-17 11:25:20 +0100 CET | prometheus-grafana |
| projectsveltos/cluster12 | helm chart    | grafana    | grafana    | 6.58.9  | 2023-12-17 11:25:23 +0100 CET | prometheus-grafana |
| projectsveltos/cluster13 | helm chart    | prometheus | prometheus | 23.4.0  | 2023-12-17 11:25:30 +0100 CET | prometheus-grafana |
| projectsveltos/cluster13 | helm chart    | grafana    | grafana    | 6.58.9  | 2023-12-17 11:25:32 +0100 CET | prometheus-grafana |
+--------------------------+---------------+------------+------------+---------+-------------------------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification - Cluster12
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get pods -n grafana

NAME                           READY   STATUS    RESTARTS   AGE
pod/grafana-78764f9cd6-zsqdx   1/1     Running   0          81s

$ kubectl get pods -n prometheus

NAME                                                     READY   STATUS    RESTARTS   AGE
pod/prometheus-alertmanager-0                            1/1     Running   0          2m3s
pod/prometheus-kube-state-metrics-587bd996f6-l94zq       1/1     Running   0          2m3s
pod/prometheus-prometheus-node-exporter-khw75            1/1     Running   0          2m3s
pod/prometheus-prometheus-pushgateway-75986b9c9f-2ql7v   1/1     Running   0          2m3s
pod/prometheus-server-86c66b89c6-7xk9r                   2/2     Running   0          2m3s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The same verification can be performed for cluster13.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remove the Label 'env=prod' from cluster12
&lt;/h3&gt;

&lt;p&gt;You might wonder what happens if we remove the label 'env=prod' from either cluster12 or cluster13. The answer is that Sveltos will identify the missing 'env=prod' label and undeploy Grafana and Prometheus from that cluster.&lt;/p&gt;

&lt;p&gt;Let's have a look.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remove Label
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl label sveltosclusters cluster12 env- -n projectsveltos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Verification
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ kubectl get sveltosclusters -n projectsveltos --show-labels

NAME        READY   VERSION          LABELS
cluster12   true    v1.26.6+rke2r1   sveltos-agent=present
cluster13   true    v1.26.6+rke2r1   env=prod,sveltos-agent=present
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sveltosctl show addons

+--------------------------+---------------+------------+------------+---------+-------------------------------+--------------------+
|         CLUSTER          | RESOURCE TYPE | NAMESPACE  |    NAME    | VERSION |             TIME              |  CLUSTER PROFILES  |
+--------------------------+---------------+------------+------------+---------+-------------------------------+--------------------+
| projectsveltos/cluster13 | helm chart    | grafana    | grafana    | 6.58.9  | 2023-12-17 11:25:32 +0100 CET | prometheus-grafana |
| projectsveltos/cluster13 | helm chart    | prometheus | prometheus | 23.4.0  | 2023-12-17 11:25:30 +0100 CET | prometheus-grafana |
+--------------------------+---------------+------------+------------+---------+-------------------------------+--------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As expected, Sveltos removed the deployments. The reverse also holds: if we register a new cluster and assign the label 'env=prod', Sveltos will deploy the add-ons there. Sveltos takes care of the complete lifecycle of your Kubernetes deployments in a simple and straightforward manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  👏 Support this project
&lt;/h2&gt;

&lt;p&gt;Every contribution counts! If you enjoyed this article, check out the Projectsveltos &lt;a href="https://github.com/projectsveltos" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repo. You can star 🌟 the project if you found it helpful.&lt;/p&gt;

&lt;p&gt;The GitHub repo is a great resource for getting started with the project. It contains the code, documentation, and many more examples.&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>opensource</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
