<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: JosephCheng</title>
    <description>The latest articles on DEV Community by JosephCheng (@josephcc).</description>
    <link>https://dev.to/josephcc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3034104%2F8a03eff3-191c-498d-ac87-3cc21bee97da.JPG</url>
      <title>DEV Community: JosephCheng</title>
      <link>https://dev.to/josephcc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/josephcc"/>
    <language>en</language>
    <item>
      <title>Designing a Maintainable GitOps Repo Structure: Managing Multi-Service and Multi-Env with Argo CD + Kro</title>
      <dc:creator>JosephCheng</dc:creator>
      <pubDate>Wed, 14 May 2025 11:10:32 +0000</pubDate>
      <link>https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp</link>
      <guid>https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp</guid>
      <description>&lt;p&gt;&lt;em&gt;From Namespace to ApplicationSet — a Clean, Trackable Setup with instance.yaml&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;📘 This is Part 3 of the “Building a Real and Traceable GitOps Architecture” series.&lt;/p&gt;

&lt;p&gt;👉 Part 1: &lt;a href="https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88"&gt;Why Argo CD Wasn't Enough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 2: &lt;a href="https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g"&gt;From Kro RGD to Full GitOps: How I Built a Clean Deployment Flow with Argo CD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 4: &lt;a href="https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go"&gt;GitOps Promotion with Kargo: Image Tag → Git Commit → Argo Sync&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 5: &lt;a href="https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi"&gt;Implementing a Modular Kargo Promotion Workflow: Extracting PromotionTask from Stage for Maintainability&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 6: &lt;a href="https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p"&gt;How I Scaled My GitOps Promotion Flow into a Maintainable Architecture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Part 1, I explained why Argo CD wasn’t enough for my workflow. In Part 2, I shared how I used Kro to produce clean, declarative Deployments.&lt;/p&gt;

&lt;p&gt;In this post, I want to take a step back — because even with the right tools, your GitOps setup can still collapse if the &lt;strong&gt;Git repo structure&lt;/strong&gt; isn’t well-designed.&lt;/p&gt;

&lt;p&gt;Here’s how I organize my repo using ApplicationSet to manage multiple environments and services in a clean, scalable, and maintainable way.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧨 When My GitOps Repo Became a Mess
&lt;/h2&gt;

&lt;p&gt;At first, I managed all the YAML manifests manually. We only had a few services, so I thought it was fine — until it wasn’t.&lt;/p&gt;

&lt;p&gt;YAMLs were added to the root directory. Someone created a &lt;code&gt;deploy-prod/&lt;/code&gt; folder. Someone else copied dev manifests into production and made changes directly.&lt;/p&gt;

&lt;p&gt;There were no naming conventions or directory rules. Every change started with the same question:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;“Wait… which file are we actually deploying?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;One day, a small update accidentally got deployed to two environments at once. I spent the entire afternoon rolling back.&lt;/p&gt;

&lt;p&gt;That’s when I realized I needed a Git structure that could survive real-world GitOps.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ My Setup and What I Wanted to Achieve
&lt;/h2&gt;

&lt;p&gt;This setup runs on a self-managed MicroK8s cluster, and integrates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kro&lt;/strong&gt;: to render clean Deployments, Services, and ConfigMaps&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Argo CD&lt;/strong&gt;: to sync manifests from Git into the cluster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kargo&lt;/strong&gt;: to promote image updates into Git commits&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I had three goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clearly separate development and production environments&lt;/li&gt;
&lt;li&gt;Allow each service to update independently&lt;/li&gt;
&lt;li&gt;Make every deployment traceable to a Git commit&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📦 Why I Switched to Argo CD ApplicationSet
&lt;/h2&gt;

&lt;p&gt;Originally, I created every Argo CD Application manually. It worked — but as the number of services grew, so did the pain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I had to open the UI and duplicate settings every time&lt;/li&gt;
&lt;li&gt;A single typo could break an entire sync&lt;/li&gt;
&lt;li&gt;There was no consistent pattern to follow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I switched to &lt;strong&gt;ApplicationSet&lt;/strong&gt;. Everything became more consistent and maintainable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One ApplicationSet per namespace&lt;/li&gt;
&lt;li&gt;Automatically generate Applications based on folder structure&lt;/li&gt;
&lt;li&gt;Use annotations to link each service to the correct Kargo Stage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This brought three major benefits:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I no longer needed to create Applications manually&lt;/li&gt;
&lt;li&gt;I could pair instance.yaml + Kro for automatic deployment&lt;/li&gt;
&lt;li&gt;I could bind each service to its promotion logic via annotation (more on this in Part 4)&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;While ApplicationSet supports generating apps from multiple Git repos, I chose to keep everything in a single monorepo for easier traceability and promotion logic.&lt;/p&gt;




&lt;h2&gt;
  
  
  🗂 How I Structure My Git Repo
&lt;/h2&gt;

&lt;p&gt;Here’s the directory layout I use in the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project/
├── argo-applications/
│   ├── develop-applicationset.yaml
│   └── production-applicationset.yaml
├── develop/
│   ├── frontend/
│   │   └── instance.yaml
│   └── backend/
│       └── instance.yaml
└── production/
    └── frontend/
        └── instance.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;argo-applications/&lt;/code&gt;: holds one ApplicationSet config per environment&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;develop/&lt;/code&gt; and &lt;code&gt;production/&lt;/code&gt;: each service has its own folder with &lt;code&gt;instance.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;❗ The ResourceGraphDefinition (RGD) isn’t checked into Git — it’s managed on the platform side to keep schema logic separate from environment-specific configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure makes it easy to map services, environments, and deployments — and keeps everything traceable.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✍️ A Real ApplicationSet Example
&lt;/h2&gt;

&lt;p&gt;Here’s my actual &lt;code&gt;develop-applicationset.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: develop-applicationset
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://gitlab.com/your-name/your-repo.git
        revision: HEAD
        directories:
          - path: develop/*
  template:
    metadata:
      name: '{{path.basename}}-dev-app'
      annotations:
        kargo.akuity.io/authorized-stage: develop:{{path.basename}}-dev-stage
    spec:
      project: develop
      source:
        repoURL: https://gitlab.com/your-name/your-repo.git
        targetRevision: HEAD
        path: '{{path}}'
        directory:
          recurse: true
      destination:
        server: https://kubernetes.default.svc
        namespace: develop
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, if I add a folder like &lt;code&gt;develop/backend/&lt;/code&gt;, a new Argo Application called &lt;code&gt;backend-dev-app&lt;/code&gt; is automatically generated.&lt;/p&gt;

&lt;p&gt;The annotation also links it directly to the correct Kargo Stage — zero manual setup required.&lt;/p&gt;




&lt;h2&gt;
  
  
  🌳 How I Handle the Root App
&lt;/h2&gt;

&lt;p&gt;I don’t store the root App in Git.&lt;/p&gt;

&lt;p&gt;Instead, I create it once manually in the Argo CD UI. Its only job is to point to the &lt;code&gt;argo-applications/&lt;/code&gt; directory and sync all the ApplicationSets inside.&lt;/p&gt;

&lt;p&gt;This gives the UI a single, stable entry point that reflects what’s in Git — easy to reason about and maintain.&lt;/p&gt;
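&lt;p&gt;As a rough sketch, such a root Application could look like this (illustrative only — I create mine through the UI rather than keeping it in Git, and the repo URL is the same placeholder used elsewhere in this post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.com/your-name/your-repo.git
    targetRevision: HEAD
    path: argo-applications/
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;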




&lt;h2&gt;
  
  
  🧼 Keeping Environments and Services Isolated
&lt;/h2&gt;

&lt;p&gt;Each Kubernetes namespace maps to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One Argo CD Project&lt;/li&gt;
&lt;li&gt;One ApplicationSet&lt;/li&gt;
&lt;li&gt;One Kargo Project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every &lt;code&gt;instance.yaml&lt;/code&gt; lives in an environment-specific path.&lt;/p&gt;

&lt;p&gt;The RGD is shared, but each environment has its own values — so &lt;code&gt;develop&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt; stay completely isolated.&lt;/p&gt;
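&lt;p&gt;For reference, a minimal Argo CD Project for the &lt;code&gt;develop&lt;/code&gt; namespace might look like this (a sketch with placeholder values, not my exact config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: develop
  namespace: argocd
spec:
  sourceRepos:
    - https://gitlab.com/your-name/your-repo.git
  destinations:
    - server: https://kubernetes.default.svc
      namespace: develop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;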




&lt;h2&gt;
  
  
  🛠 How This Structure Helps My Day-to-Day
&lt;/h2&gt;

&lt;p&gt;This repo layout doesn’t just keep things “clean” — it makes my daily workflow smoother:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Need to check a config? Open the service’s folder&lt;/li&gt;
&lt;li&gt;Want to see what changed in a deployment? Run &lt;code&gt;git log&lt;/code&gt; on the &lt;code&gt;instance.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Adding a new service? Just create a folder and add an &lt;code&gt;instance.yaml&lt;/code&gt; - that’s it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While I currently maintain most YAML myself, this structure sets a clear standard for future collaboration and handoff.&lt;/p&gt;

&lt;p&gt;It builds a shared deployment language that’s easy to extend and hard to mess up.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Why instance.yaml Is My Single Source of Truth
&lt;/h2&gt;

&lt;p&gt;Every service’s &lt;code&gt;instance.yaml&lt;/code&gt; is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managed in Git&lt;/li&gt;
&lt;li&gt;Synced automatically via Argo CD&lt;/li&gt;
&lt;li&gt;Updated by Kargo through yaml-update&lt;/li&gt;
&lt;/ul&gt;
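&lt;p&gt;Concretely, the only part of the file that Kargo's &lt;code&gt;yaml-update&lt;/code&gt; touches is the image tag — an illustrative fragment based on the schema from Part 2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  values:
    deployment:
      image: &amp;lt;username&amp;gt;/&amp;lt;your-project&amp;gt;
      tag: "1.0.2"  # rewritten by Kargo on each promotion
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;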

&lt;p&gt;In other words: &lt;strong&gt;when this file changes, the deployment changes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No more digging into multiple manifests or chasing sync bugs — one file drives the entire state.&lt;/p&gt;

&lt;p&gt;That’s how I define control in a GitOps setup.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧱 This Structure Is the Foundation for Promotion
&lt;/h2&gt;

&lt;p&gt;At first glance, this post might look like it’s just about organizing folders and automating Argo CD.&lt;/p&gt;

&lt;p&gt;But in reality, this structure is what makes the &lt;strong&gt;entire promotion flow possible.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here’s how Kargo works:&lt;br&gt;
→ Detect a new image tag&lt;br&gt;
→ Create a Freight&lt;br&gt;
→ Update the &lt;code&gt;instance.yaml&lt;/code&gt;&lt;br&gt;
→ Argo CD syncs the commit&lt;br&gt;
→ Kro applies the Deployment&lt;/p&gt;

&lt;p&gt;If file paths, annotations, or Application names aren’t consistent, Kargo has no idea what to promote.&lt;br&gt;
That’s why the Git structure is the foundation of a scalable, traceable GitOps workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔜 Coming Up Next: Promotion with Kargo
&lt;/h2&gt;

&lt;p&gt;With this repo structure in place, I now have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean, declarative Deployments from Kro&lt;/li&gt;
&lt;li&gt;Automated syncing from Argo CD&lt;/li&gt;
&lt;li&gt;Scalable Application generation via ApplicationSet&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But we’re just getting started.&lt;/p&gt;

&lt;p&gt;In the next post, I’ll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How promotion flows from image tag → Freight → instance.yaml → Argo CD → Kro&lt;/li&gt;
&lt;li&gt;How each service links to its own Kargo Stage&lt;/li&gt;
&lt;li&gt;How ApplicationSet annotations enable precise targeting and sync&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you’re designing a GitOps setup or juggling multiple environments and services, I hope this post gives you a solid reference.&lt;br&gt;
If it helped you, feel free to share it or follow the series. And if you’ve built something similar, I’d love to hear how it went.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>kro</category>
      <category>gitops</category>
    </item>
    <item>
      <title>From Kro RGD to Full GitOps: How I Built a Clean Deployment Flow with Argo CD</title>
      <dc:creator>JosephCheng</dc:creator>
      <pubDate>Mon, 12 May 2025 08:25:05 +0000</pubDate>
      <link>https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g</link>
      <guid>https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g</guid>
      <description>&lt;p&gt;&lt;em&gt;From scattered YAML files to a fully traceable GitOps pipeline — here’s how I used Kro to build a cleaner, more maintainable deployment process.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📘 This is Part 2 of the “Building a Real and Traceable GitOps Architecture” series.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 1: &lt;a href="https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88"&gt;Why Argo CD Wasn't Enough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 3: &lt;a href="https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp"&gt;Designing a Maintainable GitOps Repo Structure: Managing Multi-Service and Multi-Env with Argo CD + Kro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 4: &lt;a href="https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go"&gt;GitOps Promotion with Kargo: Image Tag → Git Commit → Argo Sync&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 5: &lt;a href="https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi"&gt;Implementing a Modular Kargo Promotion Workflow: Extracting PromotionTask from Stage for Maintainability&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 6: &lt;a href="https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p"&gt;How I Scaled My GitOps Promotion Flow into a Maintainable Architecture&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🧩 Why I Started with an RGD
&lt;/h2&gt;

&lt;p&gt;In my previous article, I shared how managing multiple YAML files became a real burden. Each time I updated an image tag, I had to patch three different manifests — Deployment, Service, and ConfigMap — just to reflect a simple change.&lt;/p&gt;

&lt;p&gt;So I started asking:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What if I could update a single file and have all dependent resources update automatically?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That’s when I found Kro — a declarative GitOps engine that lets me define a service using one &lt;code&gt;ResourceGraphDefinition&lt;/code&gt; (RGD) and a matching &lt;code&gt;instance.yaml&lt;/code&gt;. From there, it automatically generates and applies all necessary Kubernetes resources.&lt;/p&gt;

&lt;p&gt;This post walks through how I implemented that setup, including the actual YAML structure, the pitfalls I hit, and how I connected it with Argo CD and Kargo to build a fully automated GitOps flow.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧠 What’s Kro, and Why RGD?
&lt;/h2&gt;

&lt;p&gt;Kro is a lightweight GitOps templating engine. It’s designed to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Render Kubernetes resources from a template + instance&lt;/li&gt;
&lt;li&gt;Work declaratively with Git as the source of truth&lt;/li&gt;
&lt;li&gt;Cleanly separate schema, templates, and values&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It sounds similar to Helm, but here’s how it differs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No templating syntax&lt;/li&gt;
&lt;li&gt;No chart packaging or release abstraction&lt;/li&gt;
&lt;li&gt;No values.yaml spaghetti&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead, Kro is more transparent and tightly aligned with GitOps principles.&lt;/p&gt;

&lt;p&gt;At the heart of it is the &lt;strong&gt;ResourceGraphDefinition (RGD)&lt;/strong&gt;. Without this file, Kro does nothing. It’s the blueprint that defines which resources are generated and how values flow into them.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠 My First RGD: Starting Simple
&lt;/h2&gt;

&lt;p&gt;I decided to start small — a simple frontend web service.&lt;/p&gt;

&lt;p&gt;It only needed three resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ConfigMap (for &lt;code&gt;API_URL&lt;/code&gt; and &lt;code&gt;TIME_ZONE&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Deployment (for image and replica count)&lt;/li&gt;
&lt;li&gt;Service (to expose a port)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s the schema I wrote for it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  name: string | default=frontend
  namespace: string | default=develop
  values:
    configMap:
      data:
        API_HTTP_URL: string
        TIME_ZONE: string | default="XXX/XXX"
    deployment:
      image: string
      tag: string
      replicas: integer | default=1
    service:
      port: integer | default=3000
      targetPort: integer | default=3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This schema acts as a contract: every value that an instance provides must follow this structure. It’s simple, explicit, and human-readable.&lt;/p&gt;




&lt;h2&gt;
  
  
  📄 Template: How Schema Connects to Resources
&lt;/h2&gt;

&lt;p&gt;With the schema in place, I needed to define what it generates.&lt;/p&gt;

&lt;p&gt;In Kro, templates are added under the &lt;code&gt;resources:&lt;/code&gt; section. Each one has a unique &lt;code&gt;id&lt;/code&gt;, which Kro uses for change tracking.&lt;/p&gt;

&lt;p&gt;Here’s an excerpt from my &lt;code&gt;Deployment&lt;/code&gt; template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- id: deploy
  template:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ${schema.spec.name}
      namespace: ${schema.spec.namespace}
    spec:
      replicas: ${schema.spec.values.deployment.replicas}
      template:
        spec:
          containers:
            - image: ${schema.spec.values.deployment.image}:${schema.spec.values.deployment.tag}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Helm syntax, no conditional logic — just clean variable references.&lt;/p&gt;

&lt;p&gt;This is what I liked most about Kro: the schema-template-instance structure is clear and composable, without any magic.&lt;/p&gt;
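&lt;p&gt;To show the same pattern once more, here's what a &lt;code&gt;Service&lt;/code&gt; template referencing the schema above could look like (a sketch, not my full file — the selector label is an assumption):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- id: service
  template:
    apiVersion: v1
    kind: Service
    metadata:
      name: ${schema.spec.name}
      namespace: ${schema.spec.namespace}
    spec:
      selector:
        app: ${schema.spec.name}
      ports:
        - port: ${schema.spec.values.service.port}
          targetPort: ${schema.spec.values.service.targetPort}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;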




&lt;h2&gt;
  
  
  📦 My &lt;code&gt;instance.yaml&lt;/code&gt;: The Missing Piece
&lt;/h2&gt;

&lt;p&gt;The schema and template define what can be deployed. But Kro won’t do anything until you provide values via an instance.&lt;/p&gt;

&lt;p&gt;Here’s what my &lt;code&gt;instance.yaml&lt;/code&gt; looked like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kro.run/v1alpha1
kind: FrontendAppV2
metadata:
  name: wsp-frontend
  namespace: develop
spec:
  name: wsp-frontend
  namespace: develop
  values:
    configMap:
      data:
        API_HTTP_URL: https://example.com/api
        TIME_ZONE: XXX/XXX
    deployment:
      image: &amp;lt;username&amp;gt;/&amp;lt;your-project&amp;gt;
      tag: "1.0.1"
      replicas: 1
    service:
      port: 3000
      targetPort: 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I defined my schema under the &lt;code&gt;FrontendAppV2&lt;/code&gt; API name, so instances can use that kind and Kro knows how to match them.&lt;/p&gt;

&lt;p&gt;I store this file in Git (under &lt;code&gt;develop/app/&lt;/code&gt;) and sync it using Argo CD.&lt;/p&gt;

&lt;p&gt;This way, I can declaratively define the state of my service through Git alone — Kro and Argo CD take care of the rest.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔁 Full Automation: From Tag → Git → Kro Apply
&lt;/h2&gt;

&lt;p&gt;Here’s how I fully automated the flow using Kargo:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Push a new Docker image to the registry&lt;/li&gt;
&lt;li&gt;Kargo detects it via &lt;code&gt;Warehouse&lt;/code&gt;, creates a &lt;code&gt;Freight&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Stage&lt;/code&gt; triggers a &lt;code&gt;yaml-update&lt;/code&gt; that modifies &lt;code&gt;instance.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Commit + push to Git&lt;/li&gt;
&lt;li&gt;Argo CD detects the change and syncs&lt;/li&gt;
&lt;li&gt;Kro sees the updated instance and renders new resources&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The key part is the &lt;code&gt;yaml-update&lt;/code&gt; step in Kargo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- key: spec.values.deployment.tag
  value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each change to the image tag automatically flows into Git, then into Kro, and finally into the cluster.&lt;/p&gt;




&lt;h2&gt;
  
  
  💥 Pitfalls I Ran Into (and How I Fixed Them)
&lt;/h2&gt;

&lt;p&gt;Here are some real-world issues I hit:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Resource didn’t apply, but no error&lt;/strong&gt;&lt;br&gt;
I had a Service template written correctly — but nothing showed up in the cluster.&lt;br&gt;
Turns out the schema was missing a &lt;code&gt;type&lt;/code&gt;, so the value failed to render and Kro skipped the whole resource.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Tag value caused type mismatch&lt;/strong&gt;&lt;br&gt;
My Kargo &lt;code&gt;yaml-update&lt;/code&gt; wrote the tag as a number (1.0.1 → 1), and Kro rejected it.&lt;br&gt;
Fix: wrap the tag in &lt;code&gt;quote()&lt;/code&gt; to force it into a string.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3️⃣ Kro skipped update due to unchanged generation&lt;/strong&gt;&lt;br&gt;
Kro uses generation and delta logic. If the rendered output is identical, it won’t re-apply.&lt;br&gt;
The log says:&lt;br&gt;
&lt;code&gt;Skipping update due to unchanged generation&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4️⃣ Debugging requires watching the logs&lt;/strong&gt;&lt;br&gt;
Kro doesn’t show much in the UI. I rely on controller logs to confirm updates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Found deltas for resource  
Skipping update due to unchanged generation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧭 Where Kro Fits in My GitOps Architecture
&lt;/h2&gt;

&lt;p&gt;Kro is now the &lt;strong&gt;template engine&lt;/strong&gt; of my GitOps setup.&lt;/p&gt;

&lt;p&gt;It’s not just a Helm alternative. It enables me to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Separate structure (&lt;code&gt;schema&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Abstract resource definitions (&lt;code&gt;template&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Provide values through Git (&lt;code&gt;instance.yaml&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Argo CD syncing and Kargo promoting, I now have a full GitOps chain that’s clean, traceable, and reproducible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Docker tag → Git commit → Argo CD sync → Kro apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each deployment is versioned and explainable — no more “mystery state” in the cluster.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔎 Bonus: My Environment Setup
&lt;/h2&gt;

&lt;p&gt;Currently, I’m using this setup in the &lt;code&gt;develop&lt;/code&gt; namespace.&lt;br&gt;
Each environment (dev, staging, prod) gets its own &lt;code&gt;instance.yaml&lt;/code&gt; and Argo CD Application.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;production&lt;/code&gt;, I plan to use separate Git paths and isolate sync targets.&lt;/p&gt;

&lt;p&gt;More on that in the next article.&lt;/p&gt;




&lt;h2&gt;
  
  
  🔜 Coming Next: Designing a Clean GitOps Repo Structure
&lt;/h2&gt;

&lt;p&gt;In the next part, I’ll show how I organize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git repo layout (per service + environment)&lt;/li&gt;
&lt;li&gt;ApplicationSet management&lt;/li&gt;
&lt;li&gt;How Kro, Argo CD, and Kargo all connect together&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;If you’re building your own GitOps setup, I hope this post saved you some time — and helped demystify how Kro works behind the scenes.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>kro</category>
      <category>gitops</category>
    </item>
    <item>
      <title>Why Argo CD Wasn't Enough: Real GitOps Pain and the Tools That Fixed It</title>
      <dc:creator>JosephCheng</dc:creator>
      <pubDate>Thu, 08 May 2025 11:02:55 +0000</pubDate>
      <link>https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88</link>
      <guid>https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88</guid>
      <description>&lt;p&gt;&lt;em&gt;This is the first post in the "Building a Real-World GitOps Setup" series.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 2: &lt;a href="https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g"&gt;From Kro RGD to Full GitOps: How I Built a Clean Deployment Flow with Argo CD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 3: &lt;a href="https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp"&gt;Designing a Maintainable GitOps Repo Structure: Managing Multi-Service and Multi-Env with Argo CD + Kro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 4: &lt;a href="https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go"&gt;GitOps Promotion with Kargo: Image Tag → Git Commit → Argo Sync&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 5: &lt;a href="https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi"&gt;Implementing a Modular Kargo Promotion Workflow: Extracting PromotionTask from Stage for Maintainability&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 6: &lt;a href="https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p"&gt;Designing a Maintainable GitOps Architecture: How I Scaled My Promotion Flow from a Simple Line to a System That Withstands Change&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;This series isn't about feature overviews or polished diagrams. It's a real journey - how I started with Argo CD, ran into scaling pains, and ended up building a workflow around Kro and Kargo to simplify the chaos.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Starting Point: Argo CD as Our First GitOps Tool
&lt;/h2&gt;

&lt;p&gt;Our team uses GitLab to manage our projects. Since Argo CD supports Git as a source of truth, it was naturally the first GitOps tool we adopted.&lt;br&gt;
It's stable, visual, and easy to get started with - at least at the beginning.&lt;br&gt;
But as the number of services grew, so did the complexity.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Problem: Too Many YAML Files, Too Much Maintenance
&lt;/h2&gt;

&lt;p&gt;As we added more services, I started to feel like managing YAML was getting out of hand.&lt;br&gt;
Each service had its own Deployment, Service, and ConfigMap. Updating an image tag or tweaking an environment variable meant editing three different files and submitting three PRs just to change a single port.&lt;/p&gt;

&lt;p&gt;That felt wrong.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Isn't there a way to change one file and have everything else update accordingly?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I wanted to stop maintaining three YAML files just to represent "this service is now version X."&lt;/p&gt;

&lt;p&gt;What I really needed was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single instance.yaml per service&lt;/li&gt;
&lt;li&gt;Define tag, port, and env vars in one place&lt;/li&gt;
&lt;li&gt;Use that to generate the actual K8s manifests&lt;/li&gt;
&lt;li&gt;Let Argo CD keep syncing, while keeping the repo clean&lt;/li&gt;
&lt;/ul&gt;


&lt;h2&gt;
  
  
  The Solution Appears: Discovering Kro
&lt;/h2&gt;

&lt;p&gt;At this point, I started looking for tools that could turn high-level service definitions into manifests automatically.&lt;br&gt;
That's when I found Kro.&lt;/p&gt;

&lt;p&gt;Here's what I was trying to simplify:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;frontend/
├── deployment.yaml
├── service.yaml
└── configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;backend/
├── deployment.yaml
├── service.yaml
└── configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;frontend/
└── instance.yaml → Kro → generated manifests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Example &lt;code&gt;instance.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  values:
    deployment:
      tag: 0320.1
      port: 3000
      image: frontend
    config:
      LOG_LEVEL: debug
      API_URL: https://api.example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just editing this one file could trigger all necessary changes.&lt;br&gt;
It felt clean, declarative, and scalable.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why I Chose Kro Instead of Writing Raw YAML
&lt;/h2&gt;

&lt;p&gt;At this point, I didn’t want to manage dozens of YAML files by hand anymore.&lt;/p&gt;

&lt;p&gt;What I really needed was a way to &lt;strong&gt;describe the intent of a service&lt;/strong&gt; — not duplicate the same boilerplate across deployments, services, and configmaps.&lt;/p&gt;

&lt;p&gt;I didn’t want to turn my GitOps repo into a config sprawl.&lt;br&gt;&lt;br&gt;
I wanted something:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Declarative, but still readable
&lt;/li&gt;
&lt;li&gt;Focused on service logic, not YAML mechanics
&lt;/li&gt;
&lt;li&gt;Easy to compose and integrate with Argo CD
&lt;/li&gt;
&lt;li&gt;Able to reason about dependencies between resources
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s when Kro clicked.&lt;/p&gt;

&lt;p&gt;With a single &lt;code&gt;instance.yaml&lt;/code&gt; per service, I could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define tag, port, and config in one place
&lt;/li&gt;
&lt;li&gt;Use a &lt;code&gt;ResourceGraphDefinition&lt;/code&gt; to render full Kubernetes manifests
&lt;/li&gt;
&lt;li&gt;Let Argo CD do the syncing, while I focused on intent
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And one design choice I found especially elegant:&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Kro builds a Directed Acyclic Graph (DAG)&lt;/strong&gt; from your defined resources, which ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All referenced values are valid and resolvable
&lt;/li&gt;
&lt;li&gt;Dependencies (like Service → Deployment → ConfigMap) follow a safe apply order
&lt;/li&gt;
&lt;li&gt;There's no cyclic reference that might cause unpredictable state
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This structure gave me both &lt;strong&gt;clarity and safety&lt;/strong&gt; — I could think in terms of service logic, while Kro guaranteed a consistent deployment process behind the scenes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kro also validates this structure as part of its ResourceGraphDefinition processing — catching circular references, invalid types, and broken dependencies early.&lt;/strong&gt;&lt;/p&gt;
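&lt;p&gt;To make the DAG idea concrete, here is a rough sketch of how an RGD declares resources and the references between them. This is abbreviated and simplified, not Kro's exact schema; the real RGD comes in the next post:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-service
spec:
  resources:
    - id: configmap
      template:
        apiVersion: v1
        kind: ConfigMap
        # rendered from spec.values.config
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        # references ${configmap.metadata.name},
        # so Kro applies the ConfigMap first
    - id: service
      template:
        apiVersion: v1
        kind: Service
        # selects the Deployment's labels,
        # so it is applied after the Deployment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because &lt;code&gt;deployment&lt;/code&gt; references &lt;code&gt;configmap&lt;/code&gt;, Kro can infer the dependency edge and order the applies automatically. That's the DAG at work.&lt;/p&gt;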




&lt;h2&gt;
  
  
  TL;DR - The GitOps Workflow in One Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;The full GitOps flow: instance files drive Kro, image tags trigger Kargo, and Argo CD keeps the cluster in sync.&lt;br&gt;
A workflow where instance files define the service setup, and everything else flows from that.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  So What's Next?
&lt;/h2&gt;

&lt;p&gt;This post was all about motivation and setup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next, I'll show you how I wrote my first &lt;code&gt;ResourceGraphDefinition&lt;/code&gt; (RGD)&lt;/strong&gt;, the template Kro uses to render Deployments, Services, and ConfigMaps. I'll walk through its structure, the pitfalls I encountered, and how I got it to render real, production-ready manifests.&lt;/p&gt;

&lt;p&gt;If you've ever patched the same &lt;code&gt;deployment.yaml&lt;/code&gt; three times just to bump an image tag, this series is for you.&lt;/p&gt;

&lt;p&gt;Follow along as I break down the entire GitOps stack, one layer at a time.&lt;br&gt;
Thanks for reading 🙇&lt;/p&gt;

&lt;p&gt;🧠 If you're also trying to simplify GitOps or reduce YAML fatigue, I'd love to hear how you're approaching it.&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>argocd</category>
    </item>
    <item>
      <title>GitOps Promotion with Kargo: Image Tag → Git Commit → Argo Sync</title>
      <dc:creator>JosephCheng</dc:creator>
      <pubDate>Tue, 06 May 2025 14:10:57 +0000</pubDate>
      <link>https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go</link>
      <guid>https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go</guid>
      <description>&lt;p&gt;&lt;strong&gt;Designing a GitOps-native promotion pipeline from image tag to deployment — traceable, controllable, and rollback-friendly&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📘 This is Part 4 of my GitOps Architecture series.&lt;/p&gt;

&lt;p&gt;This series was originally written and published in a linear progression (Part 1 to 6).&lt;br&gt;&lt;br&gt;
On Dev.to, I’m republishing it starting from the final system design (Part 6), then tracing backward to how it was built — from system to source.&lt;/p&gt;

&lt;p&gt;👉 Part 1: &lt;a href="https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88"&gt;Why Argo CD Wasn’t Enough&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 2: &lt;a href="https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g"&gt;From Kro RGD to Full GitOps: How I Built a Clean Deployment Flow with Argo CD&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 3: &lt;a href="https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp"&gt;Designing a Maintainable GitOps Repo Structure: Managing Multi-Service and Multi-Env with Argo CD + Kro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 5: &lt;a href="https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi"&gt;Implementing a Modular Kargo Promotion Workflow: Extracting PromotionTask from Stage for Maintainability&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 6: &lt;a href="https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p"&gt;How I Scaled My GitOps Promotion Flow into a Maintainable Architecture&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the previous articles, I explained why Argo CD wasn’t enough for my needs, how Kro helped structure my deployment logic, and how I designed the Git repository layout.&lt;br&gt;
Now it’s time to focus on the core of any GitOps workflow — &lt;strong&gt;promotion&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;How do you go from a new image tag to a Deployment update in a way that’s traceable, controllable, and rollbackable?&lt;/p&gt;


&lt;h2&gt;
  
  
  🧩 What Do I Mean by “Promotion”?
&lt;/h2&gt;

&lt;p&gt;When a new image tag is pushed, I want that version to be validated, written into Git, and deployed to Kubernetes.&lt;br&gt;
Not triggered by CI scripts. Not patched via webhook.&lt;/p&gt;

&lt;p&gt;This is a Git-first, GitOps-native promotion workflow — where everything flows through Git.&lt;/p&gt;


&lt;h2&gt;
  
  
  1. Why Is Promotion the Most Critical Part of GitOps?
&lt;/h2&gt;

&lt;p&gt;This continues the story from the last three posts:&lt;br&gt;
Argo CD limitations → Kro introduction → GitOps structure emerging&lt;/p&gt;

&lt;p&gt;The instance.yaml file we discussed earlier doesn’t yet handle promotion. But promotion is one of the most frequent — and risky — operations in any GitOps flow.&lt;/p&gt;

&lt;p&gt;🔁 It happens all the time. And if it’s not designed well, it becomes the first thing to go wrong.&lt;/p&gt;


&lt;h2&gt;
  
  
  2. What Happens Without a Proper Promotion Process?
&lt;/h2&gt;

&lt;p&gt;At first, I manually edited the tag in instance.yaml and let Argo CD sync it. It worked — but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy to mistype or forget to commit&lt;/li&gt;
&lt;li&gt;No version record — rollback relies on memory&lt;/li&gt;
&lt;li&gt;No condition control — services might get silently upgraded&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also considered scripting this via CI, but it would spread logic across pipelines. Hard to trace. Harder to revert.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Not Use Argo CD Image Updater?&lt;/strong&gt;&lt;br&gt;
Yes, I looked into &lt;a href="https://argocd-image-updater.readthedocs.io/en/stable/" rel="noopener noreferrer"&gt;Argo CD’s image updater plugin&lt;/a&gt;.&lt;br&gt;
It auto-detects new image tags and patches Helm values or manifests.&lt;/p&gt;

&lt;p&gt;That’s convenient, but it didn’t meet my GitOps criteria:&lt;/p&gt;

&lt;p&gt;✅ Detects new tags&lt;br&gt;
❌ Doesn’t commit back to Git → no history&lt;br&gt;
❌ Doesn’t support conditional promotion (e.g. “only after tests pass”)&lt;br&gt;
❌ Hard to coordinate promotion across services at scale&lt;/p&gt;

&lt;p&gt;It’s great for background automation.&lt;br&gt;
But I wanted explicit control, Git history, and safe rollback.&lt;/p&gt;

&lt;p&gt;🎯 What I Really Wanted Was:&lt;br&gt;
A clean, observable promotion pipeline:&lt;br&gt;
Image tag → Git commit → Deploy&lt;/p&gt;

&lt;p&gt;Everything versioned. Everything traceable. Everything rollbackable.&lt;/p&gt;


&lt;h2&gt;
  
  
  3. Why I Chose Kargo
&lt;/h2&gt;

&lt;p&gt;I didn’t want promotion logic to live inside CI scripts or webhook handlers — not because it can’t work, but because I preferred a Git-native flow where promotion history lives entirely in Git.&lt;/p&gt;

&lt;p&gt;I wanted Git to be the &lt;strong&gt;source of truth&lt;/strong&gt; and the engine for promotion.&lt;/p&gt;

&lt;p&gt;Kargo gave me exactly that.&lt;/p&gt;

&lt;p&gt;✅ Starts with an image tag → auto-creates a &lt;strong&gt;Freight&lt;/strong&gt;&lt;br&gt;
✅ Defines promotion logic with &lt;strong&gt;Stage&lt;/strong&gt;&lt;br&gt;
✅ Updates Git via &lt;code&gt;yaml-update → git-commit → git-push&lt;/code&gt;&lt;br&gt;
✅ Integrates with &lt;strong&gt;Argo CD&lt;/strong&gt; + &lt;strong&gt;Kro&lt;/strong&gt;&lt;br&gt;
✅ Supports SemVer (I use tags like &lt;code&gt;1.2.3&lt;/code&gt;)&lt;/p&gt;

&lt;p&gt;And best of all — Kargo updates Git, which then triggers Argo CD, and Kro renders &lt;code&gt;instance.yaml&lt;/code&gt; into Kubernetes manifests.&lt;/p&gt;


&lt;h2&gt;
  
  
  4. Thinking in State Machines
&lt;/h2&gt;

&lt;p&gt;You don’t need to think of promotion as a state machine — but if you do, it’s a surprisingly elegant way to model the logic behind Kargo.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A new image tag is pushed → this acts as an &lt;strong&gt;event&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Kargo creates a Freight → a signal representing that new version&lt;/li&gt;
&lt;li&gt;A Stage receives that Freight → evaluates the promotion logic (current &lt;strong&gt;state&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;If conditions are satisfied → a transition is triggered&lt;/li&gt;
&lt;li&gt;That transition runs a &lt;strong&gt;PromotionTask&lt;/strong&gt; → updates YAML → commits → pushes to Git&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And just like a finite state machine, this transition is deterministic — based on input (Freight), current state (Stage), and declared logic (Task).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;📌 This replaces scattered CI scripts and conditional logic with a clean, declarative state transition — fully visible in Git history.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhgm7uga15k2irecz2m5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuhgm7uga15k2irecz2m5.png" alt="This simplified flow captures the core promotion logic in a state-machine-like manner. In practice, Kargo allows chaining stages, injecting validations, or even skipping promotion entirely based on custom logic." width="800" height="1213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧱 What Are Kargo’s Core Components?&lt;/strong&gt;&lt;br&gt;
If you’re new to Kargo, here’s a quick breakdown of its three building blocks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Warehouse&lt;/strong&gt;: Watches an image repo, emits a Freight when a new tag appears.&lt;br&gt;
&lt;strong&gt;Freight&lt;/strong&gt;: Represents a specific image version with metadata.&lt;br&gt;
&lt;strong&gt;Stage&lt;/strong&gt;: Evaluates Freight and executes the promotion logic.&lt;/p&gt;

&lt;p&gt;Together, these power a declarative, Git-driven promotion engine.&lt;/p&gt;


&lt;h2&gt;
  
  
  5. Promotion Pipeline (Mermaid Diagram)
&lt;/h2&gt;

&lt;p&gt;Here’s the visual flow I used to design this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; DockerHub-&amp;gt;&amp;gt;Kargo Warehouse: New image tag pushed
 Kargo Warehouse-&amp;gt;&amp;gt;Freight: Create Freight
 Freight-&amp;gt;&amp;gt;Kargo Stage: Trigger promotion
 Kargo Stage-&amp;gt;&amp;gt;Git: Update instance.yaml → Commit → Push
 Git-&amp;gt;&amp;gt;Argo CD: Git change detected
 Argo CD-&amp;gt;&amp;gt;Kro: Pass updated instance.yaml
 Kro-&amp;gt;&amp;gt;Cluster: Render manifests and apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  6. How I Configure the Warehouse
&lt;/h2&gt;

&lt;p&gt;Kargo supports various selection strategies — &lt;code&gt;SemVer&lt;/code&gt;, &lt;code&gt;Lexical&lt;/code&gt;, &lt;code&gt;Newest&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I use &lt;strong&gt;SemVer&lt;/strong&gt;, and each service has its own Warehouse:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Polling Interval: Every 5 minutes&lt;/li&gt;
&lt;li&gt;One Warehouse per service → clear separation, no cross-talk&lt;/li&gt;
&lt;/ul&gt;
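&lt;p&gt;A minimal Warehouse along these lines (names and repo URL are placeholders; the full version appears in the next post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: frontend-dev-image
  namespace: develop
spec:
  interval: 5m0s
  subscriptions:
    - image:
        repoURL: docker.io/&amp;lt;org&amp;gt;/&amp;lt;image&amp;gt;
        imageSelectionStrategy: SemVer
        strictSemvers: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;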




&lt;h2&gt;
  
  
  7. How I Define Stage Conditions
&lt;/h2&gt;

&lt;p&gt;Here’s the promotion pipeline I run in each Stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git-clone → yaml-parse → yaml-update → git-commit → git-push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
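&lt;p&gt;In Stage YAML, that pipeline becomes a &lt;code&gt;steps&lt;/code&gt; list, roughly like this (paths and variables are placeholders; full configs appear in the next post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
  - uses: git-clone
    config:
      repoURL: ${{ vars.gitopsRepo }}
      checkout:
        - branch: main
          path: ./repo
  - uses: yaml-parse    # read the current tag from instance.yaml
  - uses: yaml-update   # write the new tag
  - uses: git-commit
  - uses: git-push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;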



&lt;p&gt;All changes target a single file: &lt;code&gt;instance.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This makes the logic &lt;strong&gt;clear&lt;/strong&gt;, &lt;strong&gt;trackable&lt;/strong&gt;, and &lt;strong&gt;easy to debug&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can go further with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Promote only if tests pass&lt;/li&gt;
&lt;li&gt;✅ Require manual approval&lt;/li&gt;
&lt;li&gt;✅ Ensure health check before promotion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Stage is your &lt;strong&gt;promotion control tower&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. How &lt;code&gt;yaml-update&lt;/code&gt; Handles Tag Precision
&lt;/h2&gt;

&lt;p&gt;This step made everything cleaner:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;yaml-parse: read the original tag → store as oldTag&lt;/li&gt;
&lt;li&gt;yaml-update: set the latest tag&lt;/li&gt;
&lt;li&gt;git-commit: generate messages like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Promote image from 1.2.1 to 1.2.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: yaml-update
  config:
    path: ./repo/develop/frontend/instance.yaml
    updates:
      - key: spec.values.deployment.tag
        value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  9. Why This Promotion Pipeline Is Reliable
&lt;/h2&gt;

&lt;p&gt;This setup works because:&lt;/p&gt;

&lt;p&gt;✅ Every change is committed → observable, reversible&lt;br&gt;
✅ Only one file is updated → minimal blast radius&lt;br&gt;
✅ No YAML desync → rollback = git revert&lt;br&gt;
✅ Service logic is isolated → safe parallel promotion&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📌 Rollback Plan&lt;/strong&gt;&lt;br&gt;
While I haven’t fully implemented rollback yet, the plan is already in place and aligns with the rest of the GitOps flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use previousTag() to fetch the last known version&lt;/li&gt;
&lt;li&gt;Write that tag into instance.yaml&lt;/li&gt;
&lt;li&gt;Commit → Push → Argo CD syncs&lt;/li&gt;
&lt;/ol&gt;
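&lt;p&gt;Sketched as promotion steps, the planned rollback would look something like this (hypothetical: &lt;code&gt;previousTag()&lt;/code&gt; is the helper described above, not something implemented yet):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- uses: yaml-update
  config:
    path: ./repo/develop/frontend/instance.yaml
    updates:
      - key: spec.values.deployment.tag
        value: ${{ quote(previousTag()) }}   # hypothetical helper
- uses: git-commit
- uses: git-push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;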

&lt;p&gt;Eventually, I’ll create a &lt;code&gt;rollback-stage&lt;/code&gt; + &lt;code&gt;rollback-task&lt;/code&gt;&lt;br&gt;
to make rollback a native GitOps operation — not a manual fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  10. What’s Next: Syncing Only the Right App
&lt;/h2&gt;

&lt;p&gt;After promotion, I don’t want to sync the entire namespace.&lt;/p&gt;

&lt;p&gt;I just want to sync the one app that changed.&lt;/p&gt;

&lt;p&gt;Kargo supports this with argocd-update and ApplicationSet annotations.&lt;/p&gt;

&lt;p&gt;In the next post, I’ll share:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How ApplicationSet works with Kargo Stage&lt;/li&gt;
&lt;li&gt;How to scope syncs using annotations&lt;/li&gt;
&lt;li&gt;How to stop “sync one = sync all”&lt;/li&gt;
&lt;li&gt;YAML examples for Warehouse, Stage, Freight&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  💬 Closing Thoughts: Promotion Can Be Clean
&lt;/h2&gt;

&lt;p&gt;This article isn’t about pasting YAML.&lt;/p&gt;

&lt;p&gt;It’s about building a promotion pipeline that is:&lt;/p&gt;

&lt;p&gt;✅ Condition-driven&lt;br&gt;
✅ Git-observable&lt;br&gt;
✅ Fully rollbackable&lt;/p&gt;

&lt;p&gt;Of course, promotion via CI or webhooks is totally valid — but I found Kargo’s declarative and Git-driven model a better fit for my goals.&lt;/p&gt;

&lt;p&gt;If you’re designing a GitOps system and wondering where promotion logic belongs, I hope this gave you a clear, maintainable path forward.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Your turn — how are you handling promotions in GitOps?&lt;/strong&gt;&lt;br&gt;
Drop a comment, share your approach, or let’s compare notes —&lt;br&gt;
we might be solving the same problem 🚀&lt;/p&gt;

</description>
      <category>gitops</category>
      <category>kargo</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Implementing a Modular Kargo Promotion Workflow: Extracting PromotionTask from Stage for Maintainability</title>
      <dc:creator>JosephCheng</dc:creator>
      <pubDate>Fri, 02 May 2025 08:34:29 +0000</pubDate>
      <link>https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi</link>
      <guid>https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi</guid>
      <description>&lt;p&gt;A fully operational GitOps-native upgrade pipeline, designed for reuse and scalability&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is Part 5 of my GitOps Architecture series.&lt;br&gt;&lt;br&gt;
This series was originally written and published in a linear progression (Part 1 to 6).&lt;br&gt;&lt;br&gt;
On Dev.to, I’m republishing it starting from the final system design (Part 6), then tracing backward to how it was built — from system to source.&lt;/em&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  🧱 A GitOps-Native Upgrade Pipeline That Scales
&lt;/h2&gt;

&lt;p&gt;This post introduces a &lt;strong&gt;fully operational GitOps-native promotion pipeline&lt;/strong&gt; — designed for long-term &lt;strong&gt;reusability&lt;/strong&gt;, &lt;strong&gt;modularity&lt;/strong&gt;, and &lt;strong&gt;automation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s not a concept — it runs in production.&lt;br&gt;&lt;br&gt;
And it became the foundation for my GitOps architecture.&lt;/p&gt;

&lt;p&gt;📌 &lt;em&gt;This is Part 5 of the series:&lt;/em&gt;&lt;br&gt;&lt;br&gt;
👉 ✅ Part 1: &lt;a href="https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88"&gt;Why Argo CD Wasn't Enough&lt;/a&gt;&lt;br&gt;&lt;br&gt;
👉 ✅ Part 2: &lt;a href="https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g"&gt;From Kro RGD to Full GitOps&lt;/a&gt;&lt;br&gt;&lt;br&gt;
👉 ✅ Part 3: &lt;a href="https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp"&gt;Designing a Maintainable GitOps Repo&lt;/a&gt;&lt;br&gt;&lt;br&gt;
👉 ✅ Part 4: &lt;a href="https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go"&gt;GitOps Promotion with Kargo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👉 ✅ Part 6: &lt;a href="https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p"&gt;Designing a Maintainable GitOps Architecture&lt;/a&gt; (Start Here)&lt;/p&gt;


&lt;h2&gt;
  
  
  🚀 From Flowchart to Real-World Workflow
&lt;/h2&gt;

&lt;p&gt;In the first four parts, I started with the limitations of Argo CD and worked toward a deployment model centered around Kro and &lt;code&gt;instance.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In Part 4, I introduced a clean, Git-native promotion flow:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;Image Tag&lt;/code&gt; → &lt;code&gt;Git Commit&lt;/code&gt; → &lt;code&gt;Argo Sync&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But this post is where the system truly goes live.&lt;/p&gt;

&lt;p&gt;This time, I implemented Kargo’s core components, plus the Argo CD glue, and stitched them into a fully working GitOps flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Warehouse&lt;/strong&gt;: Tracks image updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Stage&lt;/strong&gt;: Decides whether to promote&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PromotionTask&lt;/strong&gt;: Executes the update → commit → push → sync pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ApplicationSet + annotation&lt;/strong&gt;: Targets the correct ArgoCD app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I refactored the logic for maintainability by extracting promotion steps into a shared &lt;strong&gt;PromotionTask&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  🏗 Warehouse — Per-Service Image Tracking
&lt;/h2&gt;

&lt;p&gt;Each service has its own &lt;code&gt;Warehouse&lt;/code&gt; to isolate update detection. &lt;br&gt;
This lets each service operate with its own frequency and tag rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: frontend-dev-image
  namespace: develop
spec:
  freightCreationPolicy: Automatic
  interval: 5m0s
  subscriptions:
    - image:
        repoURL: docker.io/zxc204fghasd/wsp-web-server-v2
        imageSelectionStrategy: SemVer
        strictSemvers: true
        discoveryLimit: 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Stage – From Embedded Logic to Task Delegation
&lt;/h2&gt;

&lt;p&gt;I originally stuffed all promotion steps inside the Stage:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  annotations:
    kargo.akuity.io/color: lime
  name: frontend-dev-stage
  namespace: develop
spec:
  promotionTemplate:
    spec:
      steps:
      - uses: git-clone
        config:
          checkout:
          - branch: main
            path: ${{ vars.srcPath }}
          repoURL: ${{ vars.gitopsRepo }}
      - uses: yaml-parse
        config:
          outputs:
          - fromExpression: spec.values.deployment.tag
            name: oldTag
          path: ${{ vars.srcPath }}/develop/frontend/wsp-web-instance.yaml
      - as: update-image
        uses: yaml-update
        config:
          path: ${{ vars.srcPath }}/develop/frontend/wsp-web-instance.yaml
          updates:
          - key: spec.values.deployment.tag
            value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
      - as: commit
        uses: git-commit
        config:
          messageFromSteps:
          - update-image
          path: ${{ vars.outPath }}
      - uses: git-push
        config:
          path: ${{ vars.outPath }}
      - uses: argocd-update
        config:
          apps:
          - name: frontend-dev-app
            sources:
            - desiredRevision: ${{ outputs.commit.commit }}
              repoURL: ${{ vars.gitopsRepo }}
      vars:
      - name: gitopsRepo
        value: https://your.git/repo.git
      - name: imageRepo
        value: &amp;lt;your docker image repo&amp;gt;
      - name: srcPath
        value: ./repo
      - name: outPath
        value: ./repo
      - name: targetBranch
        value: main
      - name: warehouseName
        value: frontend-dev-image
  requestedFreight:
  - origin:
      kind: Warehouse
      name: frontend-dev-image
    sources:
      direct: true
      stages: []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked — but it quickly became a nightmare to maintain.&lt;br&gt;
&lt;strong&gt;Any logic change meant hand-editing the same steps in multiple files, once per service.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So I refactored.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage now focuses only on deciding when to promote&lt;/strong&gt; , and delegates promotion logic to a reusable PromotionTask:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: frontend-dev-stage
  namespace: develop
  annotations:
    kargo.akuity.io/color: lime
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: frontend-dev-image
      sources:
        direct: true
        stages: []
  promotionTemplate:
    spec:
      steps:
        - task:
            name: promote-image
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This wasn’t just for readability.&lt;br&gt;
&lt;strong&gt;It was the only way to make promotions modular, reusable, and maintainable.&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  PromotionTask — The Core of Modular Promotion Logic
&lt;/h2&gt;

&lt;p&gt;PromotionTask contains all the steps needed to perform a promotion.&lt;/p&gt;

&lt;p&gt;To make it reusable, I parameterized the task using variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;gitopsRepo&lt;/code&gt;: Git repo URL with token
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;imageRepo&lt;/code&gt;: Docker image registry
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;instancePath&lt;/code&gt;: Path to the service's &lt;code&gt;instance.yaml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;warehouseName&lt;/code&gt;: Warehouse to track tags
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;appName&lt;/code&gt;: Argo CD app to sync after promotion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the full YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: PromotionTask
metadata:
  name: promote-image
  namespace: develop
spec:
  vars:
    - name: gitopsRepo
      value: https://your.git/repo.git
    - name: imageRepo
      value: &amp;lt;your docker image repo&amp;gt;
    - name: instancePath
      value: develop/frontend/instance.yaml
    - name: warehouseName
      value: frontend-dev-image
    - name: appName
      value: frontend-dev-app
  steps:
    - uses: git-clone
      config:
        repoURL: ${{ vars.gitopsRepo }}
        checkout:
          - branch: main
            path: ./repo
    - uses: yaml-parse
      config:
        path: ./repo/${{ vars.instancePath }}
        outputs:
          - name: oldTag
            fromExpression: spec.values.deployment.tag
    - uses: yaml-update
      as: update-image
      config:
        path: ./repo/${{ vars.instancePath }}
        updates:
          - key: spec.values.deployment.tag
            value: ${{ quote(imageFrom(vars.imageRepo, warehouse(vars.warehouseName)).Tag) }}
    - uses: git-commit
      as: commit
      config:
        path: ./repo
        message: ${{ task.outputs['update-image'].commitMessage }}
    - uses: git-push
      config:
        path: ./repo
    - uses: argocd-update
      config:
        apps:
          - name: ${{ vars.appName }}
            sources:
              - repoURL: ${{ vars.gitopsRepo }}
                desiredRevision: ${{ task.outputs['commit'].commit }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Why Break Out Tasks? Because Templates Are the Long-Term Answer
&lt;/h2&gt;

&lt;p&gt;This wasn't about writing cleaner YAML.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;It was about making the system scale&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What happens when you have 10 services, all running similar — but slightly different — promotion flows?&lt;/p&gt;

&lt;p&gt;If you embed everything inside the Stage, you'll end up copy-pasting YAML everywhere.&lt;br&gt;&lt;br&gt;
When logic changes, you have to update every file manually.&lt;/p&gt;

&lt;p&gt;That doesn't work at scale.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;The answer is: extract the logic into a reusable, parameterized PromotionTask.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With this design:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stage handles the decision&lt;/strong&gt;: should we promote?
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Task handles the execution&lt;/strong&gt;: how do we promote?
&lt;/li&gt;
&lt;li&gt;The logic lives in one place, and can be reused across services &lt;/li&gt;
&lt;li&gt;A single task can power multiple services, just by changing a few variables
&lt;/li&gt;
&lt;li&gt;If one service needs extra validation, just fork the task — it won’t affect the rest&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is how you make promotion logic &lt;strong&gt;modular, maintainable, and scalable&lt;/strong&gt; — even when things get complicated.&lt;/p&gt;
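&lt;p&gt;Concretely, two services can reuse one task by overriding its variables at the call site. A sketch, assuming per-step &lt;code&gt;vars&lt;/code&gt; overrides (check Kargo's docs for the exact override syntax):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# a backend Stage reusing the same PromotionTask
steps:
  - task:
      name: promote-image
    vars:
      - name: instancePath
        value: develop/backend/instance.yaml
      - name: warehouseName
        value: backend-dev-image
      - name: appName
        value: backend-dev-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;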


&lt;h2&gt;
  
  
  Don’t Forget the Kargo Project — The Key to Automatic Promotion
&lt;/h2&gt;

&lt;p&gt;One often-overlooked piece when working with Kargo is the Project resource.&lt;/p&gt;

&lt;p&gt;Even if you’ve defined your Warehouse, Stage, and PromotionTask correctly, &lt;strong&gt;nothing will actually run unless you declare a Project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In my design, every namespace has its own Project.&lt;br&gt;
This separates environments (like &lt;code&gt;develop&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt;) and scopes promotion logic clearly.&lt;/p&gt;

&lt;p&gt;Here’s the Project I defined for the &lt;code&gt;develop&lt;/code&gt; namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: Project
metadata:
  name: develop
  namespace: develop
spec:
  promotionPolicies:
    - autoPromotionEnabled: true
      stage: frontend-dev-stage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup helped me achieve several things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defined a dedicated Project for the &lt;code&gt;develop&lt;/code&gt; environment, managing all resources in one place &lt;/li&gt;
&lt;li&gt;Enabled &lt;code&gt;autoPromotionEnabled: true&lt;/code&gt;, allowing Kargo to automatically push qualifying Freight to the appropriate Stage&lt;/li&gt;
&lt;li&gt;Made the entire flow — from image tag → Freight → Stage → Task — &lt;strong&gt;fully automated, with zero manual triggers&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚠️ Pitfall&lt;br&gt;
I ran into this myself.&lt;br&gt;
Freight was getting created just fine, but Stage wasn’t promoting.&lt;br&gt;
It turns out — I simply &lt;strong&gt;forgot to declare a Project&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without a Project, Kargo will not run any promotion logic — no matter how well your YAML is structured.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Pitfalls I Hit (and How I Fixed Them)
&lt;/h2&gt;

&lt;p&gt;These weren’t theoretical issues.&lt;br&gt;
&lt;strong&gt;These are all real problems I encountered — and fixed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;image.tag&lt;/code&gt; &lt;strong&gt;unknown&lt;/strong&gt;&lt;br&gt;
In my early implementation, I used an invalid variable name like image.tag inside yaml-parse, and the whole promotion failed.&lt;br&gt;
✅ Fix: Use values from Freight, or reference pre-defined variables via vars.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Freight was&lt;/strong&gt; &lt;code&gt;&amp;lt;nil&amp;gt;&lt;/code&gt;&lt;br&gt;
This often happened when the Warehouse configuration was wrong — usually a bad repoURL or a tag that couldn’t be parsed properly.&lt;br&gt;
✅ Fix: Make sure your image repo is correct, use SemVer, and always wrap expressions with quote().&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML file not found&lt;/strong&gt;&lt;br&gt;
My yaml-parse step couldn’t find the file—not because the file was missing, but because the git-clone path didn’t match.&lt;br&gt;
✅ Fix: Manually clone the repo to confirm the file structure, then double-check that your Stage uses the exact same path.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Git push failed (authentication)&lt;/strong&gt;&lt;br&gt;
I kept hitting auth errors on git-push. The actual reason? I forgot to include the GitLab token.&lt;br&gt;
✅ Fix: Inject the token directly in the gitopsRepo URL like this: https://&amp;lt;token&amp;gt;@gitlab.com/....&lt;/p&gt;

&lt;p&gt;&lt;code&gt;argocd-update&lt;/code&gt; &lt;strong&gt;unauthorized&lt;/strong&gt;&lt;br&gt;
Even when promotion completed, the &lt;code&gt;argocd-update&lt;/code&gt; step failed. Turned out the Application wasn’t authorized.&lt;br&gt;
✅ Fix: Add the correct &lt;code&gt;kargo.akuity.io/authorized-stage&lt;/code&gt; annotation to the Application metadata.&lt;/p&gt;
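&lt;p&gt;The annotation's value takes the form &lt;code&gt;&amp;lt;project&amp;gt;:&amp;lt;stage&amp;gt;&lt;/code&gt;. A sketch with illustrative names:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: your-service-dev-app
  namespace: argocd
  annotations:
    kargo.akuity.io/authorized-stage: develop:dev   # allow Stage "dev" in project "develop"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;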




&lt;h2&gt;
  
  
  Final Thoughts: From Design to Reality
&lt;/h2&gt;

&lt;p&gt;In this post, I made the promotion pipeline real.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I implemented the &lt;strong&gt;full Warehouse → Stage → PromotionTask chain&lt;/strong&gt; &lt;/li&gt;
&lt;li&gt;Designed promotion logic to be &lt;strong&gt;reusable and maintainable&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Built a system that doesn’t just run once, but &lt;strong&gt;runs day after day&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was the shift from “YAML that works” to “a system that survives change.”&lt;/p&gt;

&lt;p&gt;I haven’t added validation gates, PR mode, or Slack notifications yet — but I’ve left room for all of that.&lt;br&gt;
My Stage structure supports conditions, and my Tasks are ready to evolve.&lt;/p&gt;

&lt;p&gt;If you’ve solved promotion workflows in a different way, I’d love to hear how you approached it.&lt;br&gt;
Or if you’re currently stuck somewhere in your GitOps pipeline — feel free to drop a comment or message.&lt;br&gt;
&lt;strong&gt;Maybe we’re solving the same thing from different angles.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pitfalls I hit?&lt;br&gt;
&lt;strong&gt;I hope you don’t have to hit them too.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;💬 &lt;em&gt;If this post helped clarify how to modularize Kargo promotion logic, give it a ❤️ or drop a comment.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
I'm sharing more GitOps internals — &lt;strong&gt;next up is Part 4: the architectural reasoning behind this pipeline.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>gitops</category>
      <category>kargo</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How I Scaled My GitOps Promotion Flow into a Maintainable Architecture</title>
      <dc:creator>JosephCheng</dc:creator>
      <pubDate>Wed, 30 Apr 2025 07:25:09 +0000</pubDate>
      <link>https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p</link>
      <guid>https://dev.to/josephcc/how-i-scaled-my-gitops-promotion-flow-into-a-maintainable-architecture-j1p</guid>
      <description>&lt;p&gt;&lt;em&gt;A full-stack GitOps promotion flow architecture, designed for traceability, maintainability, and multi-service scalability.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;This is Part 6 — the final chapter — of my “Building a Real and Reliable GitOps Architecture” series.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;👉 Part 1: &lt;a href="https://dev.to/josephcc/why-argo-cd-wasnt-enough-real-gitops-pain-and-the-tools-that-fixed-it-5c88"&gt;Why Argo CD Wasn't Enough&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;👉 Part 2: &lt;a href="https://dev.to/josephcc/from-kro-rgd-to-full-gitops-how-i-built-a-clean-deployment-flow-with-argo-cd-409g"&gt;From Kro RGD to Full GitOps: How I Built a Clean Deployment Flow with Argo CD&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;👉 Part 3: &lt;a href="https://dev.to/josephcc/designing-a-maintainable-gitops-repo-structure-managing-multi-service-and-multi-env-with-argo-cd--1egp"&gt;Designing a Maintainable GitOps Repo Structure: Managing Multi-Service and Multi-Env with Argo CD + Kro&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;👉 Part 4: &lt;a href="https://dev.to/josephcc/gitops-promotion-with-kargo-image-tag-git-commit-argo-sync-39go"&gt;GitOps Promotion with Kargo: Image Tag → Git Commit → Argo Sync&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;👉 Part 5: &lt;a href="https://dev.to/josephcc/implementing-a-modular-kargo-promotion-workflow-extracting-promotiontask-from-stage-for-4npi"&gt;Implementing a Modular Kargo Promotion Workflow: Extracting PromotionTask from Stage for Maintainability&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔔 &lt;strong&gt;Since this is the final article&lt;/strong&gt;, I’ll also wrap up with some reflections — and a glimpse into what’s coming next.&lt;/p&gt;

&lt;h2&gt;
  
  
  📖 Introduction: From Promotion Flow to GitOps System Architecture
&lt;/h2&gt;

&lt;p&gt;In Part 4, I introduced a basic promotion flow:&lt;br&gt;&lt;br&gt;
&lt;code&gt;image → commit → sync&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In Part 5, I implemented it with real components:&lt;br&gt;&lt;br&gt;
&lt;code&gt;Warehouse → Stage → Task&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;But in Part 6, I want to explore what happens when that flow gets dropped into a real environment—multiple services, long-term operation, growing complexity.&lt;/p&gt;

&lt;p&gt;I wasn't just trying to make it work once.&lt;br&gt;&lt;br&gt;
I wanted to design a system—&lt;br&gt;&lt;br&gt;
A system that's traceable, adaptable, resilient, and doesn't break when things change.&lt;/p&gt;

&lt;p&gt;In this article, I'll focus on the architectural mindset behind it—how I layered my GitOps design and modularized my promotion logic to build something that doesn't just work, but lasts.&lt;/p&gt;


&lt;h2&gt;
  
  
  🧱 Architecture Layering: How I Used root-app + ApplicationSet to Manage Multiple Services
&lt;/h2&gt;

&lt;p&gt;Just having a working promotion flow isn't enough.&lt;br&gt;&lt;br&gt;
If you want your system to survive long-term, across services and teams, you need a structure that scales, decouples, and supports independent debugging.&lt;/p&gt;

&lt;p&gt;I considered the common approach early on—&lt;br&gt;&lt;br&gt;
Creating a &lt;code&gt;dev-app&lt;/code&gt; / &lt;code&gt;prod-app&lt;/code&gt; as middle-layer Applications to manage services underneath.&lt;/p&gt;

&lt;p&gt;Looks neat in theory. But in practice? It caused two major issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hard to decouple promotion strategies&lt;/strong&gt; &lt;br&gt;
When services are bundled into one App, a single change (like a hotfix) can affect the whole package.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overcoupled behavior and risk&lt;/strong&gt; &lt;br&gt;
One sync failure could block multiple services. Permission scopes get blurry. Separation of concerns becomes a nightmare.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I went in the opposite direction—&lt;br&gt;&lt;br&gt;
I adopted a 3-layer GitOps architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root-app
  └── application-set (by namespace: develop / production)
        └── application (one per service: frontend-app / backend-app ...)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;root-app&lt;/strong&gt;: Manages overall structure; each namespace maps to an ApplicationSet
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ApplicationSets&lt;/strong&gt;: Scan Git folders to generate Argo CD Applications
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application&lt;/strong&gt;: Deploys a single service, with its own promotion wiring&lt;/li&gt;
&lt;/ul&gt;
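&lt;p&gt;The middle layer can be sketched as an ApplicationSet with a Git directory generator; each service folder yields one Application. Repo URL and names here are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: develop-apps
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://gitlab.com/yourgroup/your-gitops-repo.git
        revision: main
        directories:
          - path: develop/*              # each service folder becomes an Application
  template:
    metadata:
      name: '{{path.basename}}-dev-app'
    spec:
      project: default
      source:
        repoURL: https://gitlab.com/yourgroup/your-gitops-repo.git
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: develop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pushing a new folder under &lt;code&gt;develop/&lt;/code&gt; is then the entire cost of adding a service.&lt;/p&gt;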

&lt;p&gt;This structure addresses the three pain points I care about most:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Separation of promotion logic&lt;/strong&gt;: Each service can define its own release strategy — fully decoupled.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error isolation&lt;/strong&gt;: If one app sync fails, others continue working.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimal expansion cost&lt;/strong&gt;: To add a service, just push a new folder. The ApplicationSet auto-generates its Application.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 This isn't about having tidy YAML.&lt;br&gt;
It's about building a system that remains stable, transparent, and maintainable as it grows.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And this clear layering enabled me to cleanly separate tool responsibilities and modularize my upgrade logic later on.&lt;/p&gt;




&lt;h2&gt;
  
  
  ⚙️ Responsibility Decoupling: What Kro, Kargo, and Argo CD Actually Do
&lt;/h2&gt;

&lt;p&gt;In this architecture, each tool serves a clear purpose—not because they can't do more, but because separating responsibilities helps boundaries stay stable.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Kro: Declarative Templates + DAG-Defined Infrastructure&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Kro isn't just a YAML generator.&lt;br&gt;&lt;br&gt;
It defines DAG-based dependency graphs for each service.&lt;/p&gt;

&lt;p&gt;Each &lt;code&gt;ResourceGraphDefinition&lt;/code&gt; (RGD) is a complete service blueprint, modeled as a graph of interdependent components: Deployment, Service, ConfigMap, etc.&lt;/p&gt;

&lt;p&gt;Kro handles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validating the DAG structure to ensure there are no cycles or misconfigurations.&lt;/li&gt;
&lt;li&gt;Generating a CRD (like &lt;code&gt;WebApplication&lt;/code&gt;) to manage services as Kubernetes-native resources.&lt;/li&gt;
&lt;li&gt;Applying and reconciling manifests in DAG order when &lt;code&gt;instance.yaml&lt;/code&gt; changes.&lt;/li&gt;
&lt;/ol&gt;
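&lt;p&gt;A trimmed-down RGD sketch to make this concrete (field names follow the Kro schema; the service itself is illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: web-application
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApplication               # the CRD Kro generates from this blueprint
    spec:
      name: string
      image: string | default="nginx"
  resources:
    - id: deployment
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: ${schema.spec.name}
          template:
            metadata:
              labels:
                app: ${schema.spec.name}
            spec:
              containers:
                - name: web
                  image: ${schema.spec.image}
    - id: service                      # ordered after the deployment in the DAG
      template:
        apiVersion: v1
        kind: Service
        metadata:
          name: ${schema.spec.name}
        spec:
          selector:
            app: ${schema.spec.name}
          ports:
            - port: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;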

&lt;blockquote&gt;
&lt;p&gt;📌 Each service’s infra becomes a versionable, testable graph.&lt;br&gt;
Changes to subcomponents or dependency ordering become safe and composable — no more hand-editing YAML everywhere.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;More importantly, this DAG modeling defines the service architecture itself — making refactoring, extending, or validating services much easier and safer.&lt;/p&gt;

&lt;p&gt;Kro is not just my templating layer — it’s my system’s graph-based modeling layer, scaling logic without duplicating boilerplate.&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Kargo: Promotion Orchestration + Traceability&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Kargo orchestrates my upgrade process. Key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Warehouse&lt;/strong&gt;: Tracks image tags and filters them by SemVer or lexical rules. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stage&lt;/strong&gt;: Decides whether to promote, based on tag comparison or validations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;PromotionTask&lt;/strong&gt;: Executes the upgrade (update → commit → push → sync).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
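&lt;p&gt;How the three wire together, sketched as a Stage (names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: develop
spec:
  requestedFreight:
    - origin:
        kind: Warehouse
        name: your-service           # where Freight comes from
      sources:
        direct: true                 # take it straight from the Warehouse
  promotionTemplate:
    spec:
      steps:
        - task:
            name: promote-image      # hand execution off to the shared PromotionTask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;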

&lt;blockquote&gt;
&lt;p&gt;📌 Kargo makes promotion stateful, observable, and pluggable.&lt;br&gt;
It turns what would be a static pipeline into a logical graph of upgrade checkpoints.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With Kargo, I can easily insert manual approvals, CI checks, or security scans — without losing traceability or rollback control.&lt;/p&gt;


&lt;h3&gt;
  
  
  &lt;strong&gt;Argo CD: Git → Cluster Synchronization Only&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Argo CD’s role is the simplest — and purest.&lt;br&gt;
It syncs manifests (produced by Kro, updated by Kargo) from Git to the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No decisions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No tag handling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No manifest mutation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 Argo CD acts purely as the executor.&lt;br&gt;
It ensures state consistency while remaining easy to debug, scale, and replace.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;&lt;strong&gt;Why separate them this way?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Decoupling: Each logic layer can evolve independently.
&lt;/li&gt;
&lt;li&gt;✅ Observability: Every part of the flow is traceable.&lt;/li&gt;
&lt;li&gt;✅ Flexibility: Any tool can be swapped (Flux, GitLab CI, etc.) without a full rewrite.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;This isn't about stacking tools. It’s about building a system that’s composable, maintainable, and future-proof.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  ♻️ Promotion Flow Design: From One Line to a Traceable System
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Full promotion flow:
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Image
  ↓
Warehouse
  ↓
Stage (Decision)
  ↓
PromotionTask (Execution)
  ↓
Git Commit
  ↓
Argo CD Sync
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Why each step matters:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Image → Warehouse&lt;/strong&gt;: I don’t let CI trigger upgrades. Warehouse watches the repo, giving full visibility into “what was published” and “what changed.”&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Warehouse → Stage&lt;/strong&gt;: Stage only decides if something should be promoted — based on SemVer comparison, webhook checks, or YAML diffs. Approval gates or CI validations can plug in here.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stage → PromotionTask&lt;/strong&gt;: PromotionTask contains the how, not just the what. Fully reusable and parameterized.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task → Git commit&lt;/strong&gt;: No direct applies. Every promotion creates a Git commit — for rollback, history, and reviewability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Git → Argo CD&lt;/strong&gt;: Argo CD syncs the commit. It’s the final executor — not part of the promotion decision.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 This isn’t just a working flow—it’s a &lt;strong&gt;traceable, modular, and extensible promotion lifecycle&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  🧩 Modular &amp;amp; Maintainable Design: Making Promotion Logic Reusable
&lt;/h2&gt;

&lt;p&gt;This is the part of the architecture I care most about.&lt;/p&gt;

&lt;p&gt;I don’t just want upgrade flows to work.&lt;br&gt;
I want them to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reusable&lt;/strong&gt;: No need to reimplement for each service.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Composable&lt;/strong&gt;: Easy to plug in validations, notifications, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safe to evolve&lt;/strong&gt;: Changes don’t break everything else.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  1️⃣ Stage ≠ Task: Split the Responsibilities
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Stage&lt;/code&gt;: should we promote?
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;PromotionTask&lt;/code&gt;: how do we promote?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why this matters: If you pack everything into Stage, you’ll copy-paste logic across every service. Change one line? Touch ten YAML files.&lt;/p&gt;
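&lt;p&gt;This split is what the Stage's promotion template reduces to: a one-line reference to the shared task plus per-service overrides (a sketch; names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;promotionTemplate:
  spec:
    steps:
      - task:
          name: promote-image        # the "how" lives in one place
        vars:                        # the "what" is just a handful of overrides
          - name: appName
            value: backend-dev-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;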


&lt;h3&gt;
  
  
  2️⃣ PromotionTask as Template: Parameterized, Reusable Logic
&lt;/h3&gt;

&lt;p&gt;I defined a standard PromotionTask and used variables to customize it per service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: kargo.akuity.io/v1alpha1
kind: PromotionTask
metadata:
  name: promote-image
  namespace: develop
spec:
  vars:        # per-service parameters; everything below stays identical across services
    - name: imageRepo
      value: docker.io/yourname/your-image
    - name: instancePath
      value: develop/your-service/instance.yaml
    - name: appName
      value: your-service-dev-app
  steps:       # each step also takes a config block (repoURL, paths, ...), omitted for brevity
    - uses: git-clone
    - uses: yaml-parse
    - uses: yaml-update
    - uses: git-commit
    - uses: git-push
    - uses: argocd-update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;frontend and backend share the same promotion task — just different vars.&lt;/li&gt;
&lt;li&gt;Want to add validation? Copy the task, add a yaml-assert step.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 This isn’t just templating.&lt;br&gt;
It’s a modular framework for defining, evolving, and scaling promotion logic.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧨 What I Broke (and How I Fixed It)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Freight = &amp;lt;nil&amp;gt;&lt;/code&gt;: Image tag parsing failed → Switched to SemVer + &lt;code&gt;quote()&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Git push failed: Forgot token → Moved token into vars, fully automated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Argo CD sync error: App missing authorized-stage → Fixed by adding annotation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Kro &lt;code&gt;instance.yaml&lt;/code&gt; missing: Wrong git-clone path → Reorganized repo structure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Sync chaos: Multiple services under one App → Split into independent Apps.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 These mistakes weren’t bugs to patch.&lt;br&gt;
They were lessons to harden the system’s foundations.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🧭 Why This System Was Worth Building
&lt;/h2&gt;

&lt;p&gt;Not built for a demo.&lt;br&gt;&lt;br&gt;
Built to &lt;strong&gt;survive&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Real systems need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traceability
&lt;/li&gt;
&lt;li&gt;Rollback
&lt;/li&gt;
&lt;li&gt;Validation
&lt;/li&gt;
&lt;li&gt;Modularity&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;📌 These are the price of long-term maintenance, not bonuses.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  🔚 Final Thoughts: This Isn’t a Toolchain—It’s a System That Survives
&lt;/h2&gt;

&lt;p&gt;It doesn’t just run.&lt;br&gt;&lt;br&gt;
It &lt;strong&gt;can be maintained&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s not rigid.&lt;br&gt;&lt;br&gt;
It &lt;strong&gt;evolves&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s not hardcoded.&lt;br&gt;&lt;br&gt;
It’s a &lt;strong&gt;modular, auditable system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It wasn’t enough to make the flow work once.&lt;br&gt;
It had to survive growth, adapt to change, and stay traceable across services and teams.&lt;/p&gt;

&lt;p&gt;If you’re designing your own GitOps setup, don’t just pick tools — design for resilience. Build a system that grows with you.&lt;/p&gt;




&lt;h2&gt;
  
  
  📚 Wrapping Up This Series—See You in the Next One
&lt;/h2&gt;

&lt;p&gt;This marks the final post in my &lt;strong&gt;“Building a Real and Reliable GitOps Architecture”&lt;/strong&gt; series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next up:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
A brand-new series on building an &lt;strong&gt;MLOps architecture&lt;/strong&gt; with &lt;strong&gt;MicroK8s&lt;/strong&gt;, &lt;strong&gt;MLflow&lt;/strong&gt;, &lt;strong&gt;Kubeflow&lt;/strong&gt;, and &lt;strong&gt;vLLM&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you're into model versioning, serving, and full-lifecycle MLOps—stay tuned.&lt;br&gt;&lt;br&gt;
👋 See you in the next series.&lt;/p&gt;

&lt;p&gt;If you’re also building out your own GitOps system, I’d love to hear what approaches you’re trying.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>gitops</category>
      <category>kubernetes</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
