## Introduction
GitOps is the practice of using Git as the single source of truth for your infrastructure and application configuration. ArgoCD is the most widely adopted GitOps operator for Kubernetes, and for good reason - it watches your Git repositories and automatically reconciles your cluster state to match what is defined in your manifests.
But installing ArgoCD is the easy part. The hard part is structuring your repositories, managing multi-environment deployments, implementing progressive delivery, and setting up proper RBAC so your platform team does not become a bottleneck. This guide covers all of that with production-tested patterns.
## Installing and Configuring ArgoCD
Start with a production-ready ArgoCD installation using the HA manifest:
```shell
# Create namespace
kubectl create namespace argocd

# Install ArgoCD HA (recommended for production)
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/ha/install.yaml

# Wait for all pods to be ready
kubectl wait --for=condition=ready pod -l app.kubernetes.io/part-of=argocd -n argocd --timeout=300s

# Get initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
Expose ArgoCD via an Ingress (assuming you have an ingress controller and cert-manager):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - argocd.yourcompany.com
      secretName: argocd-tls
  rules:
    - host: argocd.yourcompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
```
Configure ArgoCD to connect to your Git repositories. For private repos, use SSH deploy keys:
```shell
argocd repo add git@github.com:yourorg/k8s-manifests.git \
  --ssh-private-key-path ~/.ssh/argocd_deploy_key
```
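The same repository credential can also be managed declaratively, which fits the GitOps model better than a one-off CLI command: ArgoCD picks up any Secret in its namespace labeled `argocd.argoproj.io/secret-type: repository`. A minimal sketch (the Secret name is arbitrary):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: k8s-manifests-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:yourorg/k8s-manifests.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
    -----END OPENSSH PRIVATE KEY-----
```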
## The App-of-Apps Pattern
The app-of-apps pattern is the standard way to manage multiple ArgoCD applications declaratively. Instead of manually creating each Application resource through the UI or CLI, you define a single root Application that points to a directory of Application manifests.
Repository structure:
```
k8s-manifests/
├── apps/                     # Root app-of-apps directory
│   ├── api.yaml              # Application manifest for API service
│   ├── frontend.yaml         # Application manifest for frontend
│   ├── worker.yaml           # Application manifest for worker
│   ├── redis.yaml            # Application manifest for Redis
│   └── monitoring.yaml       # Application manifest for monitoring stack
├── services/
│   ├── api/
│   │   ├── base/
│   │   │   ├── deployment.yaml
│   │   │   ├── service.yaml
│   │   │   └── kustomization.yaml
│   │   └── overlays/
│   │       ├── staging/
│   │       │   └── kustomization.yaml
│   │       └── production/
│   │           └── kustomization.yaml
│   ├── frontend/
│   │   └── ...
│   └── worker/
│       └── ...
└── infrastructure/
    ├── redis/
    └── monitoring/
```
The root application:
```yaml
# root-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:yourorg/k8s-manifests.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
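The root Application is the only resource you apply to the cluster by hand; everything else flows from Git after this one bootstrap step:

```shell
kubectl apply -n argocd -f root-app.yaml
```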
An individual app manifest within the apps/ directory:
```yaml
# apps/api.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "2"
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: git@github.com:yourorg/k8s-manifests.git
    targetRevision: main
    path: services/api/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```
When you add a new service, you just add a new YAML file to the apps/ directory and push to Git. ArgoCD picks it up automatically.
## Sync Waves and Resource Ordering
Sync waves control the order in which ArgoCD applies resources. This is critical when you have dependencies - you need namespaces before deployments, CRDs before custom resources, and databases before applications.
```yaml
# Wave -1: Namespaces and CRDs first
apiVersion: v1
kind: Namespace
metadata:
  name: production
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
---
# Wave 0: Infrastructure (databases, caches)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: redis
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  # ... Redis application config
---
# Wave 1: Shared services (service mesh, secrets)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: external-secrets
  annotations:
    argocd.argoproj.io/sync-wave: "1"
spec:
  # ... External Secrets Operator config
---
# Wave 2: Application services
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
  annotations:
    argocd.argoproj.io/sync-wave: "2"
spec:
  # ... API application config
```
Combine sync waves with resource hooks for even finer control:
```yaml
# Run database migration before deploying the new version
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
        - name: migrate
          # Prefer pinning this to the same tag the application deploys;
          # :latest undermines Git-driven rollbacks
          image: yourorg/api:latest
          command: ["node", "migrate.js"]
      restartPolicy: Never
```
## Progressive Delivery with Argo Rollouts
ArgoCD handles syncing manifests to your cluster, but it does not manage how traffic shifts to new versions. That is where Argo Rollouts comes in. It replaces the standard Kubernetes Deployment with a Rollout resource that supports canary and blue-green deployment strategies.
Install Argo Rollouts:
```shell
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
```
A canary rollout with automated analysis:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
  namespace: production
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: yourorg/api:v2.1.0
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
  strategy:
    canary:
      canaryService: api-canary
      stableService: api-stable
      trafficRouting:
        nginx:
          stableIngress: api-ingress
      steps:
        - setWeight: 10
        - pause: { duration: 5m }
        - analysis:
            templates:
              - templateName: success-rate
        - setWeight: 30
        - pause: { duration: 5m }
        - analysis:
            templates:
              - templateName: success-rate
        - setWeight: 60
        - pause: { duration: 5m }
        - setWeight: 100
```
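The Rollout above references `api-canary` and `api-stable` Services and an `api-ingress` Ingress, which must exist alongside it. The two Services are ordinary Services selecting the same pods; Argo Rollouts injects a pod-template-hash into their selectors at runtime to split stable and canary traffic. A minimal sketch (port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-stable
  namespace: production
spec:
  selector:
    app: api   # Rollouts adds rollouts-pod-template-hash to this selector
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-canary
  namespace: production
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```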
The analysis template that gates each promotion step:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 60s
      count: 5
      successCondition: result[0] >= 0.99
      failureLimit: 2
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          query: |
            sum(rate(http_requests_total{app="api",status=~"2.."}[5m]))
            /
            sum(rate(http_requests_total{app="api"}[5m]))
```
If the success rate drops below 99% during any analysis phase, Argo Rollouts automatically rolls back to the stable version. No human intervention required at 3 AM.
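You can also watch and steer a rollout interactively with the `kubectl argo rollouts` plugin, which is useful while you are still building trust in the automated analysis:

```shell
# Watch the canary progress through its steps
kubectl argo rollouts get rollout api -n production --watch

# Manually promote past a pause step, or abort back to stable
kubectl argo rollouts promote api -n production
kubectl argo rollouts abort api -n production
```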
## RBAC and Multi-Tenancy
For teams with multiple projects or environments, ArgoCD's RBAC system controls who can see and sync what. Define projects to create boundaries:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  description: "Payments team applications"
  sourceRepos:
    - 'git@github.com:yourorg/payments-*'
  destinations:
    - namespace: 'payments-*'
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: ''
      kind: Namespace
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'
  roles:
    - name: developer
      description: "Payments team developers"
      policies:
        - p, proj:team-payments:developer, applications, get, team-payments/*, allow
        - p, proj:team-payments:developer, applications, sync, team-payments/*, allow
      groups:
        - payments-team # Maps to SSO group
```
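Project roles cover per-project access; instance-wide defaults live in the `argocd-rbac-cm` ConfigMap. A common baseline is read-only by default, with a platform group mapped to admin (the group name here is an assumption, substitute your SSO group):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    g, platform-team, role:admin
  scopes: "[groups]"
```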
Configure SSO integration (Dex with GitHub example):
```yaml
# argocd-cm ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.yourcompany.com
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: $dex.github.clientID
          clientSecret: $dex.github.clientSecret
          orgs:
            - name: yourorg
```
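The `$dex.github.clientID` and `$dex.github.clientSecret` references resolve to keys in the `argocd-secret` Secret, which you seed with your GitHub OAuth app credentials (values below are placeholders):

```shell
kubectl -n argocd patch secret argocd-secret \
  --patch='{"stringData": {"dex.github.clientID": "<client-id>", "dex.github.clientSecret": "<client-secret>"}}'
```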
## Repository Structure Best Practices
After working with dozens of ArgoCD deployments, here are the patterns that hold up:
Separate app manifests from app source code. Keep your Kubernetes manifests in a dedicated repository, not alongside your application code. This gives you independent versioning and a cleaner git history, and it keeps the CI pipeline's own manifest-update commits from retriggering application builds.
Use Kustomize overlays for environments. Do not duplicate manifests for staging and production. Use a base with overlays:
```yaml
# services/api/overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
    target:
      kind: Deployment
      name: api
images:
  - name: yourorg/api
    newTag: v2.1.0 # Updated by CI pipeline
```
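For completeness, the base the overlay points at is a plain kustomization listing the shared manifests (filenames match the repository tree shown earlier):

```yaml
# services/api/base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
```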
Automate image tag updates. Your application CI pipeline should update the image tag in the manifests repo after a successful build. Use kustomize edit set image or a tool like ArgoCD Image Updater:
```yaml
# In your application CI pipeline (GitHub Actions example)
- name: Update manifest repo
  run: |
    git clone git@github.com:yourorg/k8s-manifests.git
    cd k8s-manifests/services/api/overlays/production
    # CI runners have no git identity by default; set one so the commit succeeds
    git config user.name "ci-bot"
    git config user.email "ci@yourcompany.com"
    kustomize edit set image yourorg/api=yourorg/api:${{ github.sha }}
    git add .
    git commit -m "Update api image to ${{ github.sha }}"
    git push
```
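If you would rather not script the write-back yourself, ArgoCD Image Updater watches the registry and commits tag bumps to Git for you; it is configured through annotations on the Application. A sketch under the assumption you deploy Image Updater alongside ArgoCD (the update strategy and alias are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/image-list: api=yourorg/api
    argocd-image-updater.argoproj.io/api.update-strategy: semver
    argocd-image-updater.argoproj.io/write-back-method: git
spec:
  # ... same spec as apps/api.yaml above
```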
## Need Help with Your DevOps?
Implementing GitOps with ArgoCD properly - from repository structure to progressive delivery to RBAC - takes experience and planning. At InstaDevOps, we help startups and SMBs set up production-grade Kubernetes infrastructure and deployment pipelines - starting at $2,999/mo.
Book a free 15-minute consultation to discuss your Kubernetes and deployment challenges.