<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aleksey Zhukov</title>
    <description>The latest articles on DEV Community by Aleksey Zhukov (@alezkv).</description>
    <link>https://dev.to/alezkv</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1255960%2Fc1ea4113-058d-4345-9266-ac6cfe3d84cb.jpg</url>
      <title>DEV Community: Aleksey Zhukov</title>
      <link>https://dev.to/alezkv</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/alezkv"/>
    <language>en</language>
    <item>
      <title>Backup Kubernetes PVC with Restic, and ketchup(k8up)</title>
      <dc:creator>Aleksey Zhukov</dc:creator>
      <pubDate>Thu, 20 Jun 2024 06:58:04 +0000</pubDate>
      <link>https://dev.to/alezkv/backup-kubernetes-pvc-with-restic-and-ketchupk8up-42ln</link>
      <guid>https://dev.to/alezkv/backup-kubernetes-pvc-with-restic-and-ketchupk8up-42ln</guid>
      <description>&lt;p&gt;I am using Bitwarden as a password manager with Vaultwarden as the server implementation. When migrating this valuable data into my homelab Kubernetes setup, I decided to implement proper disaster recovery. As part of the migration, I planned to use a backup/restore procedure to facilitate data movement and validate the recovery process.&lt;/p&gt;

&lt;p&gt;Before diving into the implementation details, let's review our goals and briefly describe the tooling.&lt;/p&gt;

&lt;p&gt;Vaultwarden is running as a pod in a Kubernetes cluster. ArgoCD is responsible for provisioning all Kubernetes resources from a single source of truth. Data is stored within a Persistent Volume Claim (PVC). I'm running this homelab on a Virtual Private Server (VPS), so it lacks all the "cloud" features like persistence on EBS or similar services. For peace of mind, I need to be sure that all my passwords are safe in case of an emergency. Therefore, I decided to go with a slightly modified version of the 3-2-1 backup scheme: one backup to a local MinIO deployment, and another to remote S3-compatible storage.&lt;/p&gt;

&lt;p&gt;Setting up MinIO is beyond the scope of this article, but you can check &lt;a href="https://gist.github.com/alezkv/ac2280dcae300f24495ebb54d44d6d98"&gt;this Gist&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  k8up 101
&lt;/h1&gt;

&lt;p&gt;K8up is a Kubernetes Backup Operator. It's a CNCF Sandbox Open Source project that uses other brilliant open-source software like Restic for handling backups and working with remote storage. It uses custom resources to define backup, restore, and scheduling tasks.&lt;/p&gt;

&lt;p&gt;K8up scans namespaces for matching Persistent Volume Claims (PVCs), creates backup jobs, and mounts the PVCs for Restic to back up to the configured endpoint.&lt;/p&gt;

&lt;p&gt;To create a backup with &lt;a href="//k8up.io"&gt;k8up&lt;/a&gt;, you define a Backup object in YAML, specifying details such as the backend storage and credentials. This configuration is then applied to your Kubernetes cluster using &lt;code&gt;kubectl apply&lt;/code&gt;. For regular backups, you create a Schedule object, which outlines the frequency and other parameters for backup, prune, and check jobs.&lt;/p&gt;

&lt;p&gt;Installation instructions can be found on the &lt;a href="https://docs.k8up.io/k8up/how-tos/installation.html"&gt;official site&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Steps
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;Create an empty deployment: This will produce dummy data for the initial backup.&lt;/li&gt;
&lt;li&gt;Create a Backup object: This will produce a Restic repository on the backend of your choice.&lt;/li&gt;
&lt;li&gt;Backup existing data with Restic: This will create a new Restic snapshot and place all data into the backup store.&lt;/li&gt;
&lt;li&gt;Clean up the deployment before restoring (optional: you can restore into a new PVC and point the deployment to it).&lt;/li&gt;
&lt;li&gt;Create a Restore object: This will move data from the backup snapshot into the production PVC.&lt;/li&gt;
&lt;li&gt;Create a Schedule object: This will automate the regular creation of backups.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create an empty deployment
&lt;/h2&gt;

&lt;p&gt;I'll skip this section because your use case will probably be different from mine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create a Backup object
&lt;/h2&gt;

&lt;p&gt;We need a K8up Backup object, along with Secret objects holding the repository password and the backend credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-repo&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# password: change_to_strong_password_or_passhprase&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;stringData&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;minio&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;minio123&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8up.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Backup&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backup-dummy&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoPasswordSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;                  &lt;span class="c1"&gt;# (1)&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-repo&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
    &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;                                     &lt;span class="c1"&gt;# (2)&lt;/span&gt;
      &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://minio.minio.svc:9000&lt;/span&gt; &lt;span class="c1"&gt;# (3)&lt;/span&gt;
      &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backups/vaultwarden&lt;/span&gt;           &lt;span class="c1"&gt;# (4)&lt;/span&gt;
      &lt;span class="na"&gt;accessKeyIDSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;                 &lt;span class="c1"&gt;# (5)&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username&lt;/span&gt;
      &lt;span class="na"&gt;secretAccessKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;             &lt;span class="c1"&gt;# (5)&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key parts of the manifest, along with the Restic environment variables they map to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Restic repository password reference used to encrypt the backup: &lt;code&gt;RESTIC_PASSWORD&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Restic storage backend type.&lt;/li&gt;
&lt;li&gt;Storage backend endpoint.&lt;/li&gt;
&lt;li&gt;Backup path within the storage backend.&lt;/li&gt;
&lt;li&gt;Credentials reference for the backend.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;RESTIC_REPOSITORY&lt;/code&gt; can be constructed from (2), (3), and (4) as follows: &lt;code&gt;s3:http://minio.minio.svc:9000/backups/vaultwarden&lt;/code&gt;&lt;/p&gt;
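&lt;p&gt;The construction can be sketched in shell; the values below are copied from the Backup manifest above:&lt;/p&gt;

```shell
# Backend fields from the Backup manifest: endpoint (3) and bucket (4)
S3_ENDPOINT='http://minio.minio.svc:9000'
S3_BUCKET='backups/vaultwarden'

# Restic's S3 repository string is "s3:" + endpoint + "/" + bucket
RESTIC_REPOSITORY="s3:${S3_ENDPOINT}/${S3_BUCKET}"
echo "$RESTIC_REPOSITORY"   # prints s3:http://minio.minio.svc:9000/backups/vaultwarden
```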

&lt;p&gt;Applying these manifests will trigger the backup procedure via the K8up controller. You have several means to &lt;a href="https://docs.k8up.io/k8up/how-tos/check-status.html"&gt;check the backup status&lt;/a&gt;. You should definitely do so because of a current K8up &lt;a href="https://github.com/k8up-io/k8up/issues/910"&gt;issue #910&lt;/a&gt; that causes incorrect status to be reported in the Backup object itself.&lt;/p&gt;

&lt;p&gt;Also, a Snapshot object in Kubernetes will be created as a mirror of the Restic snapshot.&lt;/p&gt;

&lt;p&gt;After the issue is fixed, this will be the proper way to check the Backup status.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; vaultwarden describe backups.k8up.io backup-dummy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Name:         backup-dummy
Namespace:    vaultwarden
...
Status:
  Conditions:
    Last Transition Time:  2024-06-19T08:21:37Z
    Message:               no container definitions found
    Reason:                NoPreBackupPodsFound
    Status:                True
    Type:                  PreBackupPodReady
    Last Transition Time:  2024-06-19T08:21:47Z
    Message:               job &lt;span class="s1"&gt;'backup_backup-dummy'&lt;/span&gt; completed successfully
    Reason:                Finished
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2024-06-19T08:21:47Z
    Message:               &lt;span class="s2"&gt;"backup_backup-dummy"&lt;/span&gt; has 1 succeeded, 0 failed, and 0 started &lt;span class="nb"&gt;jobs
    &lt;/span&gt;Reason:                Succeeded
    Status:                True
    Type:                  Completed
    Last Transition Time:  2024-06-19T08:21:47Z
    Message:               Deleted 2 resources
    Reason:                Succeeded
    Status:                True
    Type:                  Scrubbed
  Finished:                &lt;span class="nb"&gt;true
  &lt;/span&gt;Started:                 &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But for now, you must also check and parse Restic logs from the appropriate pod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; vaultwarden get po | &lt;span class="nb"&gt;grep &lt;/span&gt;dummy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; vaultwarden logs &amp;lt;pod_name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can go deeper and list the actual files within the snapshot using the Restic CLI. To do so, you need access to the backup storage backend.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Port-forward the MinIO port:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; minio port-forward svc/minio 9000:9000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Prepare the environment for Restic:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RESTIC_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;password@local-backup-repo&amp;gt;'&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RESTIC_REPOSITORY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'s3:http://127.0.0.1:9000/backups/vaultwarden'&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;minio
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;minio123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get a list of all snapshots:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;restic snapshots
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;List files in the backup storage:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;restic &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &amp;lt;snapshot_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The last step shows how files are stored within the backup. We need this information to mimic a K8up backup with the Restic CLI in the following steps; specifically, we need to know the common path prefix of the files in the backup. It is constructed as &lt;code&gt;/data/&amp;lt;PVC_NAME&amp;gt;&lt;/code&gt;. In my case, it was &lt;code&gt;/data/vaultwarden&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backup existing data with Restic
&lt;/h2&gt;

&lt;p&gt;The trickiest part of this step is producing a correct Restic snapshot that K8up can restore. It requires properly forged paths for the backed-up files and access to the storage backend.&lt;/p&gt;

&lt;p&gt;Initially, I tried to achieve this with Docker, but Restic failed with strange I/O errors during backup, so I switched to Podman. Let's assume the files are stored in &lt;code&gt;~/backup/vaultwarden&lt;/code&gt; on my host. Here are the steps for creating a Restic snapshot from local files with a file structure suited for K8up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get access to the storage backend:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;-n&lt;/span&gt; minio port-forward svc/minio 9000:9000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Set the environment for Restic as described earlier, altering the repository:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RESTIC_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'&amp;lt;password@local-backup-repo&amp;gt;'&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;RESTIC_REPOSITORY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'s3:http://host.containers.internal:9000/backups/vaultwarden'&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;minio
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;minio123
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trick here is to use &lt;code&gt;host.containers.internal&lt;/code&gt; as the hostname. This name is provided for each Podman container and points to the host IP address. Docker has another name for that purpose: &lt;code&gt;host.docker.internal&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run the container with the proper mount and environments:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/backup/vaultwarden
podman run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="nt"&gt;--env-file&lt;/span&gt; &amp;lt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;env&lt;/span&gt; | egrep &lt;span class="s1"&gt;'^AWS_|RESTIC_'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;:/data/vaultwarden &lt;span class="nt"&gt;--entrypoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sh restic/restic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Backup files using the Restic CLI:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;restic backup /data/vaultwarden
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Get the snapshot ID&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The output of the previous command includes the ID of the created snapshot. Alternatively, you can list all snapshots with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;restic snapshots
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output should show two snapshots: one with the dummy data created by K8up, and another with the actual data you just backed up with Restic.&lt;/p&gt;
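&lt;p&gt;If you need the snapshot ID in a script, you can parse the JSON output of &lt;code&gt;restic snapshots --json&lt;/code&gt;. A POSIX-shell sketch, run here against a captured sample (the IDs are made up; Restic lists snapshots oldest first, so the last entry is the newest):&lt;/p&gt;

```shell
# Sample of `restic snapshots --json` output (trimmed; IDs are made up)
snapshots_json='[{"short_id":"40dc1520"},{"short_id":"79766175"}]'

# Split array elements onto separate lines, pull out each short_id,
# and keep the last (newest) one -- no jq required
latest=$(printf '%s' "$snapshots_json" \
  | tr ',' '\n' \
  | sed -n 's/.*"short_id":"\([^"]*\)".*/\1/p' \
  | tail -n 1)
echo "$latest"   # prints 79766175
```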

&lt;h2&gt;
  
  
  Create a Restore object
&lt;/h2&gt;

&lt;p&gt;By now, we have managed to put data into the backup storage. Let's create a Restore object to get this data into the PVC for production use.&lt;/p&gt;

&lt;p&gt;The Restore object mirrors the Backup object and additionally specifies the restore method, which can be either a folder or S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8up.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Restore&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;restore&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;snapshot&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;SNAPTHOST_ID&amp;gt;&lt;/span&gt;
  &lt;span class="na"&gt;restoreMethod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;folder&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vaultwarden&lt;/span&gt;
  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoPasswordSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-repo&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
    &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://minio.minio.svc:9000&lt;/span&gt;
      &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backups/vaultwarden&lt;/span&gt;
      &lt;span class="na"&gt;accessKeyIDSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username&lt;/span&gt;
      &lt;span class="na"&gt;secretAccessKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Schedule object
&lt;/h2&gt;

&lt;p&gt;The final step in our journey will be setting up scheduling, which will combine and automate the following actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;backing up data frequently&lt;/li&gt;
&lt;li&gt;checking the integrity of backup storage&lt;/li&gt;
&lt;li&gt;maintaining an appropriate number of backup versions over time
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;k8up.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Schedule&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;backend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://minio.minio.svc:9000&lt;/span&gt;
      &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backups/vaultwarden&lt;/span&gt;
      &lt;span class="na"&gt;accessKeyIDSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;username&lt;/span&gt;
      &lt;span class="na"&gt;secretAccessKeySecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-creds&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
    &lt;span class="na"&gt;repoPasswordSecretRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local-backup-repo&lt;/span&gt;
      &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;password&lt;/span&gt;
  &lt;span class="na"&gt;backup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;failedJobsHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
    &lt;span class="na"&gt;successfulJobsHistoryLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;check&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;33&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;1'&lt;/span&gt;
  &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;schedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;33&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;2&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
    &lt;span class="na"&gt;retention&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;keepHourly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;14&lt;/span&gt;
      &lt;span class="na"&gt;keepDaily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;14&lt;/span&gt;
      &lt;span class="na"&gt;keepMonthly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In conclusion, implementing a robust backup and recovery strategy for Vaultwarden in a Kubernetes setup is crucial for ensuring the security and availability of your password data. By leveraging the powerful capabilities of K8up and Restic, we can create a reliable and automated backup process that mitigates the risks associated with data loss.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>tutorial</category>
      <category>opensource</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Unfork with ArgoCD</title>
      <dc:creator>Aleksey Zhukov</dc:creator>
      <pubDate>Sat, 13 Jan 2024 18:24:27 +0000</pubDate>
      <link>https://dev.to/alezkv/unfork-with-argocd-33g4</link>
      <guid>https://dev.to/alezkv/unfork-with-argocd-33g4</guid>
      <description>&lt;p&gt;With the help of existing freely available software, we can build a personal or company software stack without starting from scratch, but rather by standing on the shoulders of giants. This eliminates the need to constantly reinvent most parts of our systems. However, sometimes existing off-the-shelf solutions don't provide enough customization to achieve the required goals. In such cases, we face a dilemma: to fork or not to fork. There are reasons for either choice, but today we will explore the "unfork" approach. We will investigate available options with examples and compare them afterward.&lt;/p&gt;

&lt;p&gt;All examples are available in the &lt;a href="https://github.com/alezkv/unfork-with-argocd/"&gt;companion repo&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Assumptions regarding the environment and the goal
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes - an extensible API server acting as a universal control plane&lt;/li&gt;
&lt;li&gt;ArgoCD - a GitOps controller that monitors Git repos and applies objects to k8s&lt;/li&gt;
&lt;li&gt;Any third-party off-the-shelf software&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Goal: We need to add a specific Kubernetes (k8s) resource to the ArgoCD application when the source of the application is managed by a third party, without managing a fork of that software.&lt;/p&gt;

&lt;p&gt;In all the described cases, you can either add or overwrite the entire resource. More granular patching is not always available; I'll note this later.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flavors of software distribution
&lt;/h3&gt;

&lt;p&gt;Here is a list of ways to distribute software that occur in the wild, along with corresponding examples.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;plain Kubernetes manifests: &lt;a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/#multi-tenant"&gt;ArgoCD&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kustomize: &lt;a href="https://github.com/prometheus-operator/kube-prometheus/blob/main/kustomization.yaml"&gt;Kube Prometheus&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Helm chart: &lt;a href="https://github.com/traefik/traefik-helm-chart"&gt;Traefik Ingress&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/application_sources/"&gt;ArgoCD has its own set of supported sources for applications&lt;/a&gt; as such:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kustomize applications&lt;/li&gt;
&lt;li&gt;Helm charts&lt;/li&gt;
&lt;li&gt;A directory of manifests&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How does Renovate fit in here
&lt;/h3&gt;

&lt;p&gt;It is a good practice to keep software up to date. To track changes in upstream software, we can utilize automatic dependency tracking systems such as &lt;a href="https://github.com/dependabot"&gt;Dependabot&lt;/a&gt; or &lt;a href="https://github.com/renovatebot/renovate"&gt;Renovate&lt;/a&gt;. This is a broad topic and requires a separate article to be covered. If you would like to read about it, please vote in the comments section below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Outline
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Multiple Sources for an Application&lt;/li&gt;
&lt;li&gt;Umbrella chart&lt;/li&gt;
&lt;li&gt;Kustomize them all&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Multiple Sources for an Application
&lt;/h3&gt;

&lt;p&gt;A recent version of ArgoCD (2.6) supports &lt;code&gt;spec.sources&lt;/code&gt; (plural) on an Application instead of &lt;code&gt;spec.source&lt;/code&gt;. This allows you to specify multiple sources; if they produce the same resource (same group, kind, name, and namespace), the last source to produce it takes precedence.&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Because this feature is part of the ArgoCD Application definition, it supports all available ArgoCD application sources out of the box&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Only complete resource overwrites are possible&lt;/li&gt;
&lt;li&gt;This is a beta feature. The UI and CLI still generally behave as if only the first source is specified&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/argoproj/argo-cd/blob/cd4fc97c9dee7b69721bbb577a4f50ba897399c5/ui/src/app/applications/components/application-details/application-details.tsx#L802"&gt;Rollback is not supported for applications with multiple sources&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an example that uses a Helm chart from upstream with our custom &lt;code&gt;values.yaml&lt;/code&gt; from Git, with resources from plain manifests applied on top.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ cat apps/_installed/multiple-sources.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: multiple-sources
  namespace: argocd
spec:
  project: default
  sources:
    - chart: authentik
      repoURL: https://charts.goauthentik.io
      targetRevision: 2023.10.5
      helm:
        valueFiles:
          - $values/apps/multiple-sources/values.yaml
    - repoURL: https://github.com/alezkv/unfork-with-argocd
      targetRevision: HEAD
      ref: values
    - repoURL: https://github.com/alezkv/unfork-with-argocd
      path: apps/multiple-sources/resources/
      targetRevision: HEAD
  destination:
    server: "https://kubernetes.default.svc"
    namespace: multiple-sources
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can combine any sources supported by ArgoCD. For example, you could obtain external software from Kustomize, then apply a local Helm Chart, and finally apply a couple of plain manifests on top.&lt;/p&gt;
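&lt;p&gt;As a minimal sketch of that combination (the repository URLs and paths here are hypothetical), the &lt;code&gt;sources&lt;/code&gt; list could look like this; ArgoCD detects a &lt;code&gt;kustomization.yaml&lt;/code&gt; or &lt;code&gt;Chart.yaml&lt;/code&gt; in each path and renders it accordingly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  sources:
    # Rendered with Kustomize (directory contains kustomization.yaml)
    - repoURL: https://github.com/example/external-software  # hypothetical
      path: deploy/kustomize/
      targetRevision: HEAD
    # Rendered as a Helm chart (directory contains Chart.yaml)
    - repoURL: https://github.com/example/local-charts  # hypothetical
      path: charts/my-chart/
      targetRevision: HEAD
    # Plain manifests, listed last, so they win on conflicting resources
    - repoURL: https://github.com/example/manifests  # hypothetical
      path: overrides/
      targetRevision: HEAD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;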

&lt;h3&gt;
  Umbrella chart
&lt;/h3&gt;

&lt;p&gt;This technique utilizes Helm's chart dependency feature. You can pull in multiple charts as dependencies and embed their configuration within a single umbrella chart.&lt;/p&gt;

&lt;p&gt;You need an ArgoCD Application and the umbrella chart itself.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ tree apps&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
├── _installed
│   └── chart-umbrella.yaml
└── chart-umbrella
    ├── Chart.yaml
    ├── templates
    │   └── configmap.yaml
    └── values.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$ cat apps/_installed/chart-umbrella.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: chart-umbrella
  namespace: argocd
spec:
  project: default
  sources:
    - repoURL: https://github.com/alezkv/unfork-with-argocd
      path: apps/chart-umbrella/
      targetRevision: HEAD
  destination:
    server: "https://kubernetes.default.svc"
    namespace: chart-umbrella
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$ cat apps/chart-umbrella/Chart.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v2  # dependencies in Chart.yaml require chart apiVersion v2
version: 1.0.0
name: my-podinfo

dependencies:
  - name: podinfo
    version: 6.5.4
    repository: https://stefanprodan.github.io/podinfo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$ cat apps/chart-umbrella/values.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;podinfo:  # Values for the sub-chart must be under its dependency name key
  replicaCount: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup will use the upstream chart, apply any specified values to it, and, on top of that, add resources from the umbrella chart's template directory.&lt;/p&gt;
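&lt;p&gt;The &lt;code&gt;templates/configmap.yaml&lt;/code&gt; from the tree above illustrates the last point: any template placed in the umbrella chart is rendered alongside the dependency. Its exact contents are not shown here, but a minimal sketch could be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-extra-config
data:
  # Any extra configuration shipped next to the upstream chart
  greeting: "hello from the umbrella chart"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;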

&lt;h3&gt;
  Kustomize them all
&lt;/h3&gt;

&lt;p&gt;Kustomize alone deserves a dedicated series; for now, let's stick with the unforking approach. Kustomize's whole purpose is to add, remove, or update configuration without forking the upstream source.&lt;/p&gt;

&lt;h4&gt;
  Hydrate chart with Kustomize
&lt;/h4&gt;

&lt;p&gt;It's possible to render a Helm chart with Kustomize. This approach allows for additional, highly tunable ways to handle Helm charts. However, this feature isn't enabled by default and requires custom configuration. You can find more details &lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/kustomize/#kustomizing-helm-charts"&gt;here&lt;/a&gt; and &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_helmchartinflationgenerator_"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To use Kustomize, you'll need to create a &lt;code&gt;kustomization.yaml&lt;/code&gt; file and point the ArgoCD Application to its directory.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$ tree apps&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apps
├── _installed
│   └── kustomize.yaml
└── kustomize
    ├── cloudflare-api-token.yaml
    ├── cloudflare-issuer.yaml
    └── kustomization.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$ cat apps/_installed/kustomize.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kustomize
  namespace: argocd
spec:
  project: default
  source:
    path: apps/kustomize
    repoURL: https://github.com/alezkv/unfork-with-argocd
    targetRevision: HEAD
  destination:
    namespace: kustomize
    server: https://kubernetes.default.svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;$ cat apps/kustomize/kustomization.yaml&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

helmCharts:
  - name: cert-manager
    repo: https://charts.jetstack.io
    releaseName: cert-manager
    namespace: kustomize
    version: v1.13.3
    includeCRDs: true
    valuesInline:
      installCRDs: true

resources:
  - cloudflare-api-token.yaml
  - cloudflare-issuer.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this setup, you can add or replace resources in an existing chart using &lt;code&gt;resources:&lt;/code&gt;. This works the same way as multiple application sources: resources with the same identity are overwritten. However, you can also use the full transformation capabilities of Kustomize for more precise resource manipulation, including &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/replacements/"&gt;replacements&lt;/a&gt;, &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/patches/"&gt;patches&lt;/a&gt;, or &lt;a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/"&gt;other methods&lt;/a&gt;.&lt;/p&gt;
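&lt;p&gt;For example, a &lt;code&gt;patches&lt;/code&gt; entry in the same &lt;code&gt;kustomization.yaml&lt;/code&gt; can change a single field of an upstream resource instead of overwriting the whole object (the replica count here is just an illustration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;patches:
  - target:
      kind: Deployment
      name: cert-manager
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;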

&lt;p&gt;Don't forget that the sources for Kustomize resources can themselves be remote Git repositories, containing either their own Kustomize configuration or plain Kubernetes manifests.&lt;/p&gt;
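&lt;p&gt;A sketch of such a remote source (the URL is hypothetical) in &lt;code&gt;kustomization.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  # Remote base pinned to a tag; hypothetical repository
  - https://github.com/example/app/deploy/base?ref=v1.0.0
  # Local plain manifest next to this kustomization.yaml
  - extra-resource.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;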

&lt;h3&gt;
  What to pick
&lt;/h3&gt;

&lt;p&gt;Multiple sources in ArgoCD are great for merging separate configurations and are best suited for complete resource overwrites. Umbrella charts, built on Helm's dependencies, offer structured management of complex deployments, ideal for hierarchical configuration. Kustomize, with its detailed customization capabilities, excels at precise resource manipulation for nuanced adjustments.&lt;/p&gt;

&lt;h3&gt;
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Kubernetes and ArgoCD offer tremendous flexibility and power for managing containerized applications, but this comes with the responsibility of having a well-defined strategy and vision. Without a clear direction, the complexity of these tools can become a hindrance rather than an advantage. Therefore, it's essential to strike a balance between leveraging the flexibility they provide and maintaining a clear and purposeful approach to application deployment and management.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>beginners</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
