Setting up a self-hosted multi-environment GitOps workflow using Argo CD, Kustomize, and cert-manager on MicroK8s: architecture, challenges, mistakes, and Kubernetes deployment automation.
## Context: Why this project?
This project is the logical continuation of a process started several months earlier. Initially, my infrastructure was managed manually: a few Kubernetes files stored in a repo, an Ingress to manage incoming traffic, a Let's Encrypt certificate, a deployment exposed internally via a ClusterIP service, and a NodePort for external access.
But this approach quickly showed its limits:
- No multi-environment management
- No overall view of the cluster's state
- Updates via pipelines were limited (only resource creation or update, with the rest done manually), and no clear history
I wanted a more robust setup, closer to what's done in real companies: full infrastructure versioning, multi-env deployment, GitOps, more complete monitoring (traces, logs), and security.
## Technical Goals
This new project aims to build a solid and maintainable foundation to deploy any application in a self-hosted Kubernetes cluster. The pillars:
- GitOps with Argo CD for declarative deployments
- Multi-environment (dev, staging, prod...) using kustomize
- TLS Certificates automatically managed by cert-manager
- NGINX Ingress for public HTTPS exposure
- Full infra versioning: Ingress, Cert-Manager, and Argo CD are installed via Helm, but managed with Kustomize
- One namespace per environment
## Tech Stack
- MicroK8s: lightweight single-node Kubernetes distribution
- Helm: chart manager for installing tools like Argo CD, cert-manager, ingress-nginx
- Kustomize: overlays for different environments (dev, prod)
- Argo CD: GitOps engine for continuous deployment
- cert-manager + Let's Encrypt ClusterIssuer: public TLS certificates
## Repo Structure
The repository is structured as follows:
```
.
├── environments/
│   ├── dev/         # Development environment
│   ├── staging/     # Pre-production environment
│   └── prod/        # Simulated production
└── base/            # K8s components common to all environments
```
Each tool is installed via Helm with a versioned values.yaml, and the dev/, staging/, and prod/ environments are Kustomize overlays with targeted patches.
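One way to express "installed via Helm, but managed with Kustomize" is Kustomize's built-in `helmCharts` field, which inflates a chart during `kustomize build --enable-helm`. A sketch of what such a base could look like for Argo CD (the chart version is a placeholder to pin yourself; this is one possible approach, not necessarily the exact layout of my repo):

```yaml
# base/argocd/kustomization.yaml — sketch, rendered with `kustomize build --enable-helm`
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: argo-cd
    repo: https://argoproj.github.io/argo-helm
    version: X.Y.Z        # placeholder: pin a real chart version
    releaseName: argocd
    namespace: argocd
    valuesFile: values.yaml  # the versioned values.yaml mentioned above
```

This keeps the Helm values file in Git while letting Kustomize overlays patch the rendered output per environment.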
Repo available here: github.com/Wooulf/devops-bootcamp-ippon
## Multi-Environment Management with Kustomize
Kustomize is the tool I use to cleanly manage different environments (dev, staging, prod) from a shared base of Kubernetes resources. Unlike Helm, it doesn't rely on an external templating engine. It's about composing declarative files rather than generating them dynamically.
Each environment inherits common resources from base/, and applies environment-specific patches (e.g., domain name, Docker image, namespace...).
Example:
```
.
├── base/
│   └── portfolio/
│       ├── kustomization.yaml
│       ├── deployment.yaml
│       ├── service.yaml
│       └── ingress.yaml
└── environments/
    ├── dev/
    │   ├── kustomization.yaml
    │   └── patch-ingress-host.json
    ├── staging/
    └── prod/
```
Each directory (base or environment-specific like dev, staging, or prod) contains a mandatory kustomization.yaml file. This file defines which Kubernetes resources to compose and which environment-specific modifications to apply (e.g., a JSON patch to change the Ingress host).
Example:
environments/dev/kustomization.yaml:

```yaml
namespace: dev
resources:
  - ../../base/portfolio
patches:
  - path: patch-ingress-host.json
    target:
      kind: Ingress
      name: portfolio-ingress
```
patch-ingress-host.json:

```json
[
  { "op": "replace", "path": "/spec/tls/0/hosts/0", "value": "dev.woulf.fr" },
  { "op": "replace", "path": "/spec/rules/0/host", "value": "dev.woulf.fr" }
]
```
With this structure, I can deploy the same project in multiple environments by changing only the context and a few target variables.
Quick tip: to validate the Kustomize output locally before handing it over to Argo CD:

```shell
kubectl kustomize environments/dev
```

This generates the final YAML as it would be applied in the cluster. You can then apply it with:

```shell
kubectl apply -k environments/dev
```
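To make the patch mechanics concrete: Kustomize's JSON patches follow RFC 6902, where each `replace` operation walks a `/`-separated path through the document and swaps the value it lands on. A minimal Python sketch of that behavior, applied to a trimmed, hypothetical Ingress (in practice Kustomize or the `jsonpatch` library does this for you):

```python
def apply_replace_ops(doc, ops):
    """Apply RFC 6902 'replace' operations to a nested dict/list document."""
    for op in ops:
        assert op["op"] == "replace"
        parts = op["path"].strip("/").split("/")
        target = doc
        for part in parts[:-1]:
            # Numeric segments index into lists, others into dicts
            target = target[int(part)] if isinstance(target, list) else target[part]
        last = parts[-1]
        if isinstance(target, list):
            target[int(last)] = op["value"]
        else:
            target[last] = op["value"]
    return doc

# Trimmed Ingress spec containing only the fields the patch touches
ingress = {
    "spec": {
        "tls": [{"hosts": ["woulf.fr"], "secretName": "portfolio-tls"}],
        "rules": [{"host": "woulf.fr"}],
    }
}

patch = [
    {"op": "replace", "path": "/spec/tls/0/hosts/0", "value": "dev.woulf.fr"},
    {"op": "replace", "path": "/spec/rules/0/host", "value": "dev.woulf.fr"},
]

apply_replace_ops(ingress, patch)
print(ingress["spec"]["rules"][0]["host"])  # dev.woulf.fr
```

Because `replace` requires the path to already exist, the base Ingress must define the `tls` and `rules` entries the overlay rewrites.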
## Why I Use Argo CD
Argo CD is the heart of my GitOps strategy: it ensures the actual Kubernetes cluster state always matches the manifests in Git.
What I like about Argo CD:
- Clear visual interface of deployment states
- Automatic (pull-based) deployment on commit
- History tracking, rollback support
- Automatic drift detection
- Easy targeting of specific environments/namespaces
## Installing Argo CD with HTTPS
Argo CD was installed via Helm in the `argocd` namespace, with this configuration:

```yaml
global:
  domain: argocd.woulf.fr
configs:
  params:
    server.insecure: "true"
server:
  certificate:
    enabled: true
    secretName: argocd-tls
    domain: argocd.woulf.fr
    issuer:
      group: cert-manager.io
      kind: ClusterIssuer
      name: letsencrypt-prod
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    hosts:
      - argocd.woulf.fr
    tls:
      - hosts:
          - argocd.woulf.fr
        secretName: argocd-tls
```
Why `server.insecure: "true"`? Because TLS termination is handled at the Ingress level. Internal traffic remains plain HTTP, which is acceptable in a local or single-tenant VPS cluster.
## Public Access via Ingress
The TLS certificate is automatically generated by cert-manager from the `letsencrypt-prod` ClusterIssuer. Argo CD is now publicly available at https://argocd.woulf.fr.
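For reference, a `letsencrypt-prod` ClusterIssuer using the HTTP-01 challenge typically looks like this (a sketch; the email address and private-key secret name are placeholders, not my actual values):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com          # placeholder: ACME account email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # placeholder secret name
    solvers:
      - http01:
          ingress:
            class: nginx              # solved via the NGINX Ingress
```

As noted later in this article, this resource should be versioned and managed by Argo CD like everything else, or it silently disappears after a cluster rebuild.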
## Defining a GitOps Deployment with Argo CD
To connect Argo CD to my Git repo and define what to sync, I created an Application Kubernetes resource:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: portfolio-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/Wooulf/devops-bootcamp-ippon
    targetRevision: HEAD
    path: environments/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
The Argo CD Application file is also versioned in my Git repo: `argocd/applications/portfolio-dev.yaml`
## How It Works
- `source.repoURL` + `path` + `targetRevision`: Argo CD watches the `environments/dev` directory of the repo `https://github.com/Wooulf/devops-bootcamp-ippon`. Any modification triggers an automatic sync.
- `destination`: the app is deployed in the local cluster (MicroK8s) via `https://kubernetes.default.svc`, in the `dev` namespace.
- `syncPolicy.automated`:
  - `automated`: syncs automatically without manual action
  - `prune`: removes resources no longer defined in Git
  - `selfHeal`: restores resources changed manually in the cluster
- `syncOptions` with `CreateNamespace=true`: automatically creates the `dev` namespace if it doesn't exist, making the deployment autonomous and idempotent.
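Replicating this setup for another environment is then just a copy of the Application pointing at a different overlay and namespace. A sketch for staging (the Application name is an assumption; only `name`, `path`, and `namespace` change):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: portfolio-staging   # assumed name, mirroring portfolio-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/Wooulf/devops-bootcamp-ippon
    targetRevision: HEAD
    path: environments/staging   # the staging Kustomize overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```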
## Final Result
- Fully automated GitOps deployment in the dev environment
- Continuous compliance between Git and the cluster
- Resource updates and deletions only controlled via Git
- Dynamic namespace creation without manual preconfiguration
The dev portfolio is now auto-deployed with the latest Docker image tagged latest, and publicly accessible at https://dev.woulf.fr.
_Argo CD interface: deployment state and sync across dev, staging, and prod environments._

_Detailed view of the portfolio-dev app in the dev namespace: every resource is tracked, versioned, and synced to Git._
## Problems Encountered (and Solved)
- **Persistent self-signed certificates**: invalid certificates kept reappearing because two creation mechanisms ran at once: a `cert-manager.io/cluster-issuer` annotation on the Ingress and a `server.certificate` config in Helm's `values.yaml`. Solution: keep a single source of truth, in this case the Helm config with `server.certificate`.
- **Temporary certificate active after Let's Encrypt provisioning**: cert-manager sometimes installs a temporary certificate before Let's Encrypt issues the final one. Just wait; this is expected behavior.
- **MicroK8s "losing" add-ons (Helm, DNS, etc.) after reboot**: some MicroK8s add-ons were disabled after a reboot. Fixed by manually restarting the affected services.
- **ClusterIssuer missing after cluster reboot**: after a reboot, my ClusterIssuer wasn't recognized; I had forgotten to version it in GitOps at the beginning. Solved by adding it explicitly to the Argo CD-managed manifests.
- **HTTPS access errors despite a seemingly correct config**: in my case, caused by an active temporary certificate or bad TLS termination on the Ingress. Solved by handling TLS only at the Ingress level.
## Summary
In this article, I started transitioning toward a multi-environment GitOps infrastructure, with:
- Installing Argo CD via Helm in a dedicated namespace
- Managing HTTPS with cert-manager and a Letβs Encrypt ClusterIssuer
- Resolving common certificate issues (self-signed, temporary, double creation)
- Exposing Argo CD via HTTPS using NGINX Ingress
- First GitOps deployment of an app (portfolio) in the dev environment, dynamically patched via Kustomize
This technical foundation now allows me to test, version, and secure deployments just like in production, while keeping maximum infrastructure control.
## Coming in the Next Article
Weβll move forward by integrating Argo CD Image Updater to automatically update deployed images, adding a security layer with SealedSecrets to protect the API token used to interact with Argo CD, and making the system fully autonomous in true GitOps fashion.