<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Cloud Native Open Source</title>
    <description>The latest articles on DEV Community by Cloud Native Open Source (@cloudnativeos).</description>
    <link>https://dev.to/cloudnativeos</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F10603%2F04732ab7-8d1f-479d-92ae-1b3557503fd3.png</url>
      <title>DEV Community: Cloud Native Open Source</title>
      <link>https://dev.to/cloudnativeos</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/cloudnativeos"/>
    <language>en</language>
    <item>
      <title>End to end argo-workflow for CI/CD</title>
      <dc:creator>Afzal Ansari</dc:creator>
      <pubDate>Wed, 14 May 2025 14:24:40 +0000</pubDate>
      <link>https://dev.to/cloudnativeos/end-to-end-argo-workflow-for-continuous-integration-5f8b</link>
      <guid>https://dev.to/cloudnativeos/end-to-end-argo-workflow-for-continuous-integration-5f8b</guid>
      <description>&lt;p&gt;If you're just getting started with GitOps or CI/CD pipelines in Kubernetes, Argo Workflows offers a powerful and Kubernetes-native way to automate your build pipelines.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk through a complete Continuous Integration (CI) and Continuous Deployment (CD) pipeline using Argo Workflows that performs the following steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrpy9wlr37dq9o79w1tm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrpy9wlr37dq9o79w1tm.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔁 &lt;strong&gt;The Pipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;✅ Clone a GitHub repository&lt;br&gt;
🔧 Build and push a Docker image using BuildKit&lt;br&gt;
🔐 Scan the Docker image for vulnerabilities&lt;br&gt;
🔍 Scan Kubernetes manifests for misconfigurations&lt;br&gt;
⏭ Deploy Kubernetes manifests to your cluster&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s break down each part of the workflow. But before doing so, let's first go over the basics of the Argo Workflow template.&lt;/p&gt;

&lt;p&gt;⚙️ &lt;strong&gt;Understanding the Argo Workflow Structure&lt;/strong&gt;&lt;br&gt;
Argo Workflows is a Kubernetes-native workflow engine where each CI/CD process is defined as a custom resource (Workflow or WorkflowTemplate). In my case, I used a WorkflowTemplate named ci-build-workflow, which can be triggered manually or automatically through a CronWorkflow.&lt;/p&gt;

&lt;p&gt;The workflow is defined using Argo’s CRD (Custom Resource Definition), and it includes metadata such as name or generateName. This value is used as a prefix for naming the pods in which each workflow step (or template) runs.&lt;/p&gt;
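&lt;p&gt;As a minimal sketch (the field values are illustrative, reusing the names from this post), the top of such a WorkflowTemplate looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: ci-build-workflow   # referenced later by the CronWorkflow
spec:
  entrypoint: main          # the DAG template that drives the pipeline
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;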

&lt;p&gt;📦 &lt;strong&gt;Volume Handling&lt;/strong&gt;&lt;br&gt;
Just like regular Kubernetes pods, you can declare volumes in Argo workflows. These volumes—like work or buildkitd—are mounted across different steps, enabling shared storage between containers. This is especially useful for tasks that rely on common directories (like cloning a repo and building from it).&lt;/p&gt;

&lt;p&gt;🔁 &lt;strong&gt;Parameterized Arguments&lt;/strong&gt;&lt;br&gt;
Argo allows the use of parameters in workflows to make them dynamic. In my CI workflow, I’ve defined parameters such as the GitHub repo owner, repo name, Docker image tag, registry name, etc. These parameters can be passed into the workflow during runtime or set statically in a CronWorkflow.&lt;/p&gt;
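&lt;p&gt;For example, workflow-level parameters can be declared under &lt;code&gt;spec.arguments&lt;/code&gt; and referenced elsewhere as &lt;code&gt;{{workflow.parameters.repo}}&lt;/code&gt; (a sketch, with defaults mirroring the values used later in this post):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  arguments:
    parameters:
    - name: owner
      value: cloud-hacks   # default; can be overridden at submit time
    - name: repo
      value: argocd-io
    - name: version
      value: v4
    - name: registry
      value: docker.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;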

&lt;p&gt;🧩 &lt;strong&gt;Workflow Entry Point: DAG Template&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: main
  dag:
    tasks:
      - name: clone-repo
        ...
      - name: build-image
        ...
        depends: clone-repo
      - name: scan-image
        ...
        depends: build-image
      - name: scan-k8s
        ...
        depends: clone-repo &amp;amp;&amp;amp; build-image
      - name: deploy-kubernetes
        ...
        depends: build-image &amp;amp;&amp;amp; scan-k8s &amp;amp;&amp;amp; clone-repo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The pipeline uses a DAG (Directed Acyclic Graph) to run tasks in order:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;scan-k8s&lt;/code&gt; and &lt;code&gt;build-image&lt;/code&gt; both run after cloning&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;scan-image&lt;/code&gt; runs after the image is built&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;deploy-kubernetes&lt;/code&gt; runs after the repo is cloned, the image is built, and the manifests are scanned&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup ensures tasks are run in the right order without unnecessary blocking.&lt;/p&gt;

&lt;p&gt;📂 &lt;strong&gt;Volumes &amp;amp; Storage&lt;/strong&gt;&lt;br&gt;
The workflow uses a shared &lt;code&gt;PersistentVolumeClaim&lt;/code&gt; (PVC) to exchange files between steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeClaimTemplates:
- metadata:
    name: work
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 64Mi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, &lt;code&gt;BuildKit&lt;/code&gt; uses an &lt;code&gt;emptyDir&lt;/code&gt; volume for its internal state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
  - name: buildkitd
    emptyDir: {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before we can build and push container images to a Docker registry like Docker Hub, we need to authenticate. Argo Workflows uses Kubernetes secrets to securely store and access these credentials during execution.&lt;/p&gt;

&lt;p&gt;In our workflow, the build and push steps rely on credentials to authenticate with Docker Hub:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumes:
- name: docker-creds
  secret:
    secretName: '{{inputs.parameters.docker_secret_name}}'
    items:
    - key: .dockerconfigjson
      path: config.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This volume mount makes your Docker config file available inside the container at runtime, which is required for tools like BuildKit to push images securely.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;How to Create the Docker Secret:&lt;/em&gt;&lt;br&gt;
To create the secret from your local Docker credentials (the &lt;code&gt;config.json&lt;/code&gt; produced by &lt;code&gt;docker login&lt;/code&gt;), run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic docker-config \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the workflow, we refer to this secret using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: docker_secret_name
  value: docker-config
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the secret in place, your workflow can securely authenticate and push the built image to Docker Hub, completing the CI loop without exposing any sensitive information.&lt;/p&gt;

&lt;p&gt;Now, let's understand every template involved in this case.&lt;/p&gt;

&lt;p&gt;📦 &lt;strong&gt;Step 1: Cloning the Repository&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: clone-repo
  template: clone
  arguments:
    parameters:
    - name: owner
      value: cloud-hacks
    - name: repo
      value: argocd-io
    - name: ref
      value: main
    - name: clone_path
      value: /work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're using the official &lt;code&gt;alpine/git&lt;/code&gt; image to shallow-clone a GitHub repository. It stores the code in the shared volume (&lt;code&gt;/work&lt;/code&gt;) so that later steps, like build and scan, can use it.&lt;/p&gt;
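&lt;p&gt;The &lt;code&gt;clone&lt;/code&gt; template that this task references isn't shown above; here is a hedged sketch of what it can look like (the exact command is an assumption consistent with the description):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: clone
  inputs:
    parameters:
    - name: owner
    - name: repo
    - name: ref
    - name: clone_path
  container:
    image: alpine/git:latest
    command: [sh, -c]
    args:
    - |
      # shallow-clone the requested ref into the shared volume
      git clone --depth 1 --branch {{inputs.parameters.ref}} \
        https://github.com/{{inputs.parameters.owner}}/{{inputs.parameters.repo}}.git \
        {{inputs.parameters.clone_path}}
    volumeMounts:
    - name: work
      mountPath: /work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;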

&lt;p&gt;🛠️ &lt;strong&gt;Step 2: Building and Pushing the Docker Image&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: build-image
  template: build-image
  arguments:
    parameters:
    - name: image
      value: example/nginx
    - name: path
      value: .
    - name: version
      value: v4
    - name: registry
      value: docker.io
    - name: docker_secret_name
      value: docker-config
    - name: insecure
      value: "false"
  depends: clone-repo

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're using the rootless BuildKit container (&lt;code&gt;moby/buildkit&lt;/code&gt;) to build the Docker image. It reads the source from &lt;code&gt;/work&lt;/code&gt;, builds the image, and can push it when &lt;code&gt;push=true&lt;/code&gt; is set in &lt;code&gt;--output&lt;/code&gt;. We also provide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Docker image name and version&lt;/li&gt;
&lt;li&gt;Docker registry credentials via a Kubernetes secret&lt;/li&gt;
&lt;li&gt;A secure/insecure registry flag&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: in this template, &lt;code&gt;push=false&lt;/code&gt; is set. If you want to push the image, change it to &lt;code&gt;push=true&lt;/code&gt;.&lt;/p&gt;
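&lt;p&gt;The &lt;code&gt;build-image&lt;/code&gt; template itself can be sketched as follows; this is an assumption modeled on the standard rootless BuildKit setup (&lt;code&gt;buildctl-daemonless.sh&lt;/code&gt;), not the exact template from the repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: build-image
  inputs:
    parameters:
    - name: image
    - name: path
    - name: version
    - name: registry
    - name: docker_secret_name
    - name: insecure
  container:
    image: moby/buildkit:rootless
    env:
    - name: BUILDKITD_FLAGS
      value: --oci-worker-no-process-sandbox
    - name: DOCKER_CONFIG
      value: /.docker   # where the registry credentials secret is mounted
    command: [buildctl-daemonless.sh]
    args:
    - build
    - --frontend=dockerfile.v0
    - --local=context=/work/{{inputs.parameters.path}}
    - --local=dockerfile=/work/{{inputs.parameters.path}}
    - --output=type=image,name={{inputs.parameters.registry}}/{{inputs.parameters.image}}:{{inputs.parameters.version}},push=false
    volumeMounts:
    - name: work
      mountPath: /work
    - name: docker-creds
      mountPath: /.docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;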

&lt;p&gt;🐛 &lt;strong&gt;Step 3: Scanning the Image with Trivy&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: scan-image
  template: scan-image
  arguments:
    parameters:
    - name: image
      value: example/nginx:v4
    - name: severity
      value: CRITICAL,HIGH
    - name: exit-code
      value: "0"
  depends: build-image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we use Trivy to scan the Docker image (example/nginx:v4) for known vulnerabilities. This helps catch issues before the image is deployed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We scan for CRITICAL and HIGH severity vulnerabilities&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exit-code: 0&lt;/code&gt; ensures the workflow doesn't fail even if vulnerabilities are found (customize this as needed)&lt;/li&gt;
&lt;li&gt;Trivy pulls the image from Docker Hub, so make sure the build-image step pushes the image first&lt;/li&gt;
&lt;/ul&gt;
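&lt;p&gt;A sketch of the underlying &lt;code&gt;scan-image&lt;/code&gt; template (the container image tag and flag wiring are assumptions based on the standard Trivy CLI):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: scan-image
  inputs:
    parameters:
    - name: image
    - name: severity
    - name: exit-code
  container:
    image: aquasec/trivy:latest
    command: [trivy]
    args:
    - image
    - --severity={{inputs.parameters.severity}}
    - --exit-code={{inputs.parameters.exit-code}}
    - '{{inputs.parameters.image}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;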

&lt;p&gt;🛡 &lt;strong&gt;Step 4: Scanning Kubernetes Manifests with Kubescape&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: scan-k8s
  template: scan-k8s
  arguments:
    parameters:
    - name: path
      value: /work/dev
    - name: verbose
      value: "true"
  depends: clone-repo &amp;amp;&amp;amp; build-image

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’re using &lt;code&gt;kubescape&lt;/code&gt; to scan the Kubernetes YAML files located in &lt;code&gt;/work/dev&lt;/code&gt; for misconfigurations, policy violations, and security issues. It helps ensure that the manifests follow best practices.&lt;/p&gt;
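&lt;p&gt;A possible shape for the &lt;code&gt;scan-k8s&lt;/code&gt; template (the container image and flags are assumptions based on the public Kubescape CLI):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: scan-k8s
  inputs:
    parameters:
    - name: path
    - name: verbose
  container:
    image: quay.io/kubescape/kubescape:latest
    command: [sh, -c]
    args:
    - |
      # scan the manifests cloned into the shared volume
      kubescape scan {{inputs.parameters.path}} --verbose
    volumeMounts:
    - name: work
      mountPath: /work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;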

&lt;p&gt;⚙️ &lt;strong&gt;Step 5: Deploy Kubernetes manifests using Kubectl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: deploy-kubernetes
  inputs:
    parameters:
      - name: path
      - name: namespace
  container:
    image: bitnami/kubectl:latest
    command: [sh, -c]
    args:
      - |
        echo "Deploying Kubernetes resources from {{inputs.parameters.path}}..."
        kubectl apply -f {{inputs.parameters.path}} -n {{inputs.parameters.namespace}}
    volumeMounts:
      - mountPath: /work
        name: work

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use the official &lt;code&gt;bitnami/kubectl&lt;/code&gt; image. It expects a path (like &lt;code&gt;/work/dev&lt;/code&gt;) where the Kubernetes manifests are located.&lt;br&gt;
The container mounts the shared work volume that was used in the clone step, ensuring that the files are accessible.&lt;br&gt;
Once executed, it runs &lt;code&gt;kubectl apply&lt;/code&gt; on the provided path to deploy the resources.&lt;/p&gt;

&lt;p&gt;🔒 &lt;strong&gt;Note&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Make sure that your Argo Workflow controller has the correct RBAC permissions to interact with Kubernetes resources (like pods, deployments, services, etc.) in the namespace where you intend to deploy.&lt;/p&gt;
&lt;/blockquote&gt;
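&lt;p&gt;A minimal sketch of such RBAC, assuming the workflow pods run under the &lt;code&gt;default&lt;/code&gt; service account in the &lt;code&gt;argo&lt;/code&gt; namespace and deploy into &lt;code&gt;dev&lt;/code&gt; (adjust all names to your setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: dev   # the namespace you deploy into
rules:
- apiGroups: ["", "apps"]
  resources: [pods, services, deployments, configmaps]
  verbs: [get, list, create, update, patch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ci-deployer
subjects:
- kind: ServiceAccount
  name: default    # service account used by the workflow pods
  namespace: argo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;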

&lt;p&gt;Here's what we achieved so far using Argo Workflows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stage             Tool          Purpose
Clone Repo    alpine/git    Fetch source code
Build Image   BuildKit  Build Docker image in a
                                secure way
Scan Image    Trivy         Identify vulnerabilities in           
                         Docker image
Scan Manifests  Kubescape   Catch Kubernetes YAML issues
Deploy Kubernetes  Kubectl      Deploy kubernetes resources 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🕒 &lt;strong&gt;Automating CI with CronWorkflow&lt;/strong&gt;&lt;br&gt;
To make the CI process completely hands-off, I added a CronWorkflow that runs every Tuesday at 9 AM UTC. This means our CI pipeline automatically triggers once a week without needing any manual input.&lt;/p&gt;

&lt;p&gt;This is particularly useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically building and scanning your base images weekly.&lt;/li&gt;
&lt;li&gt;Ensuring your Kubernetes manifests stay compliant.&lt;/li&gt;
&lt;li&gt;Catching vulnerabilities on a routine basis, even if there are no recent code changes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's what the CronWorkflow spec looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;spec:
  schedule: "0 9 * * 2"  # Every Tuesday at 9 AM UTC
  timezone: "UTC"
  concurrencyPolicy: "Replace"  # If the previous run is still running, replace it
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  workflowSpec:
    workflowTemplateRef:
      name: ci-build-workflow

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this in place, the entire CI process—from cloning the repo, building and scanning the image, to pushing it—is performed weekly without requiring any developer to trigger the pipeline.&lt;/p&gt;

&lt;p&gt;As with most emerging tools, the Argo dashboard is still young: it is minimal, but it does what it needs to do. It shows every workflow and its steps, updates automatically, and lets you view all progress and logs in one place, which makes it very easy to monitor how everything is going.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmero0n6d0sdytx6gws3q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmero0n6d0sdytx6gws3q.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v1jfqok9w906dqla3y0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v1jfqok9w906dqla3y0.png" alt="Image description" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Find the complete code and configuration for this setup on GitHub:&lt;br&gt;
GitHub Repository Link CI-build-Workflow: &lt;a href="https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-ci-workflow.yaml" rel="noopener noreferrer"&gt;https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-ci-workflow.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub Repository Link CronJob Example: &lt;a href="https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-cronjob-ci.yaml" rel="noopener noreferrer"&gt;https://github.com/Cloud-Hacks/argo-wf/blob/main/quick-start/wf-cronjob-ci.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://argo-workflows.readthedocs.io/en/latest/walk-through/dag/" rel="noopener noreferrer"&gt;https://argo-workflows.readthedocs.io/en/latest/walk-through/dag/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🌟 Let’s Connect!&lt;br&gt;
I love sharing insights about DevOps, Kubernetes, and GitOps tools like ArgoCD. If you found this article helpful or have questions, let’s continue the conversation on LinkedIn!&lt;br&gt;
👉 Connect with me on &lt;a href="http://linkedin.com/in/afzalansari07/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>argoworkflow</category>
      <category>argo</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
