Series: From "Just Put It on a Server" to Production DevOps
Reading time: 18 minutes
Level: Intermediate
The kubectl Apply Problem
Your Kubernetes cluster is running. You update an API deployment:
vim infrastructure/k8s/api.yaml
# Change replicas from 3 to 5
kubectl apply -f infrastructure/k8s/api.yaml
Two weeks later, a teammate asks:
"Hey, why do we have 5 API replicas?"
You: "I changed it two weeks ago."
"Where's the change tracked?"
You: "Uh... it's not. I ran kubectl apply locally."
"What if we need to rollback?"
You: "I'd have to... remember what it was before?"
The problems with manual kubectl apply:
- ❌ No audit trail - Who changed what, when, why?
- ❌ Configuration drift - Git says 3 replicas, cluster has 5
- ❌ No rollback - Can't revert to previous state
- ❌ Manual sync - Someone must run kubectl apply
- ❌ No approval process - Anyone with kubectl access can change anything
- ❌ Hard to debug - What's the desired state? What's actual state?
What you want:
"Git repository is the single source of truth. If it's in Git, it's deployed. If it's not in Git, it shouldn't exist."
This is GitOps.
What is GitOps?
GitOps: Using Git as the single source of truth for declarative infrastructure and applications.
Core principles:
- Declarative - System state described in Git
- Versioned - Git tracks all changes
- Immutable - Git commits are immutable
- Pulled automatically - System continuously syncs from Git
The workflow:
Developer → Git commit → Push to main → ArgoCD detects change → ArgoCD syncs cluster → Deployment updated
No kubectl commands. No manual intervention. Just Git.
Tools:
- ArgoCD - GitOps for Kubernetes (we'll use this)
- Flux - Another GitOps operator
- Jenkins X - CI/CD with GitOps
Why ArgoCD?
- ✅ Best-in-class UI
- ✅ Multi-cluster support
- ✅ Application health monitoring
- ✅ Rollback with one click
- ✅ SSO integration
- ✅ RBAC built-in
ArgoCD Architecture
┌─────────────────────────────────────────────────────┐
│                  Git Repository                     │
│  infrastructure/k8s/                                │
│  ├── api.yaml                                       │
│  ├── worker.yaml                                    │
│  └── postgres.yaml                                  │
└──────────────────┬──────────────────────────────────┘
                   │
                   │  ArgoCD continuously polls
                   │  (every 3 minutes by default)
                   ▼
┌─────────────────────────────────────────────────────┐
│                 ArgoCD Controller                   │
│  - Compares Git state with cluster state            │
│  - Detects drift                                    │
│  - Syncs automatically (if auto-sync enabled)       │
└──────────────────┬──────────────────────────────────┘
                   │
                   │  kubectl apply
                   ▼
┌─────────────────────────────────────────────────────┐
│               Kubernetes Cluster                    │
│             Running applications                    │
└─────────────────────────────────────────────────────┘
Key components:
- Application Controller - Monitors Git repos, syncs cluster
- Repo Server - Fetches manifests from Git
- API Server - gRPC/REST API for management
- Web UI - Visual dashboard
- Dex - SSO integration (optional)
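The 3-minute polling interval shown in the diagram is configurable. A minimal sketch, assuming the stock `argocd-cm` ConfigMap from the official install manifests (the 60s value is just an example):

```yaml
# Sketch: shorten the Git polling interval (default is 180s)
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  timeout.reconciliation: 60s
```

The application controller reads this at startup, so restart it after changing the value.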
Installing ArgoCD
Step 1: Install ArgoCD
# Create namespace
kubectl create namespace argocd
# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Wait for pods to be ready
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=5m
Check installation:
kubectl get pods -n argocd
NAME                               READY   STATUS    RESTARTS   AGE
argocd-application-controller-0    1/1     Running   0          2m
argocd-dex-server-5dd657bd9-xyz    1/1     Running   0          2m
argocd-redis-74cb89f466-abc        1/1     Running   0          2m
argocd-repo-server-6b8d9f5d4-def   1/1     Running   0          2m
argocd-server-7d9f8c5b4-ghi        1/1     Running   0          2m
Step 2: Access ArgoCD UI
Get admin password:
kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d; echo
Port forward to access UI:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Open browser: https://localhost:8080
Login:
- Username: admin
- Password: [from command above]
You should see the ArgoCD dashboard! 🎉
Step 3: Install ArgoCD CLI
# macOS
brew install argocd
# Linux (writing to /usr/local/bin usually requires root)
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd /usr/local/bin/argocd
# Verify
argocd version
Login via CLI:
argocd login localhost:8080 --username admin --password [password] --insecure
Creating Your First ArgoCD Application
Step 1: Prepare Git Repository
Your repo structure:
davidbrown77/sspp/
├── infrastructure/
│   └── k8s/
│       ├── namespace.yaml
│       ├── api.yaml
│       ├── worker.yaml
│       ├── postgres.yaml
│       ├── redis.yaml
│       └── elasticsearch.yaml
Push to GitHub:
git add infrastructure/k8s/
git commit -m "Add Kubernetes manifests for ArgoCD"
git push origin main
Step 2: Create ArgoCD Application
Via UI:
- Click "New App"
- Fill in:
  - Application Name: sspp-prod
  - Project: default
  - Sync Policy: Manual (we'll enable auto later)
  - Repository URL: https://github.com/daviesbrown/sspp.git
  - Revision: main
  - Path: infrastructure/k8s
  - Cluster: https://kubernetes.default.svc
  - Namespace: sspp-prod
- Click "Create"
Via CLI:
argocd app create sspp-prod \
--repo https://github.com/daviesbrown/sspp.git \
--path infrastructure/k8s \
--dest-server https://kubernetes.default.svc \
--dest-namespace sspp-prod
Via YAML (GitOps way!):
# infrastructure/argocd/sspp-prod-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sspp-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/daviesbrown/sspp.git
    targetRevision: main
    path: infrastructure/k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: sspp-prod
  syncPolicy:
    automated:
      prune: true      # Delete resources not in Git
      selfHeal: true   # Sync when cluster state drifts
      allowEmpty: false
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
Apply:
kubectl apply -f infrastructure/argocd/sspp-prod-app.yaml
Step 3: Sync Application
In UI:
- Click on the sspp-prod app
- Click "Sync"
- Click "Synchronize"
Via CLI:
argocd app sync sspp-prod
Watch deployment:
argocd app wait sspp-prod --health
Output:
Name:           sspp-prod
Project:        default
Server:         https://kubernetes.default.svc
Namespace:      sspp-prod
URL:            https://localhost:8080/applications/sspp-prod
Repo:           https://github.com/daviesbrown/sspp.git
Target:         main
Path:           infrastructure/k8s
SyncWindow:     Sync Allowed
Sync Policy:    Automated
Sync Status:    Synced to main (abc1234)
Health Status:  Healthy

GROUP  KIND        NAMESPACE  NAME       STATUS  HEALTH   HOOK
       Namespace   sspp-prod  sspp-prod  Synced
       Service     sspp-prod  api        Synced  Healthy
       Service     sspp-prod  postgres   Synced  Healthy
apps   Deployment  sspp-prod  api        Synced  Healthy
apps   Deployment  sspp-prod  worker     Synced  Healthy
apps   Deployment  sspp-prod  postgres   Synced  Healthy
Your application is deployed via GitOps! 🚀
Auto-Sync: True GitOps
Enable auto-sync so Git changes deploy automatically:
syncPolicy:
  automated:
    prune: true      # Delete resources not in Git
    selfHeal: true   # Revert manual kubectl changes
Or via CLI:
argocd app set sspp-prod --sync-policy automated --auto-prune --self-heal
What this does:
- Every 3 minutes, ArgoCD checks Git for changes
- If Git changed, ArgoCD syncs cluster automatically
- If someone runs kubectl apply, self-heal reverts it
- If resource deleted from Git, prune deletes it from cluster
Test it:
# Edit deployment
vim infrastructure/k8s/api.yaml
# Change replicas: 3 → 5
git add infrastructure/k8s/api.yaml
git commit -m "Scale API to 5 replicas"
git push origin main
# Wait 3 minutes (or trigger manually)
argocd app sync sspp-prod
# Check
kubectl get deployment api -n sspp-prod
# READY: 5/5
No kubectl commands. Just Git push. ✅
ArgoCD UI Features
Application Health
Health statuses:
- Healthy - All resources running, ready
- Progressing - Deployment rolling out
- Degraded - Some Pods not ready
- Suspended - Application suspended
- Missing - Resources not found
- Unknown - Health status unknown
Visual indicators:
- 🟢 Green - Healthy
- 🟡 Yellow - Progressing
- 🔴 Red - Degraded
Sync Status
Sync statuses:
- Synced - Git matches cluster
- OutOfSync - Git differs from cluster
- Unknown - Can't determine sync status
Click on resource to see diff:
- replicas: 3
+ replicas: 5
Rollback
Click "History and Rollback":
- See all previous syncs
- Click "Rollback" to revert
Via CLI:
# List sync history
argocd app history sspp-prod
# Rollback to revision 3
# (disable auto-sync first, or ArgoCD will immediately re-sync back to Git)
argocd app rollback sspp-prod 3
App Diff
Compare Git vs Cluster:
argocd app diff sspp-prod
Output:
===== apps/Deployment sspp-prod/api ======
spec:
- replicas: 3
+ replicas: 5
Multi-Environment Setup
Best practice: One Git repo, multiple ArgoCD apps for different environments.
Repository Structure
davidbrown77/sspp/
├── infrastructure/
│   └── k8s/
│       ├── base/                  # Shared resources
│       │   ├── api.yaml
│       │   ├── worker.yaml
│       │   └── postgres.yaml
│       ├── overlays/
│       │   ├── dev/               # Dev-specific
│       │   │   ├── kustomization.yaml
│       │   │   └── replicas.yaml
│       │   ├── staging/           # Staging-specific
│       │   │   ├── kustomization.yaml
│       │   │   └── replicas.yaml
│       │   └── prod/              # Prod-specific
│       │       ├── kustomization.yaml
│       │       └── replicas.yaml
Using Kustomize for environment-specific overrides:
# infrastructure/k8s/overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sspp-prod
resources:
  - ../../base
replicas:
  - name: api
    count: 5
  - name: worker
    count: 10
images:
  - name: davidbrown77/sspp-api
    newTag: v2.1.0
commonLabels:
  environment: production
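For contrast, a dev overlay dials everything down. A minimal sketch (the single-replica counts and `environment: dev` label here are assumptions for illustration, not taken from the repo):

```yaml
# infrastructure/k8s/overlays/dev/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sspp-dev
resources:
  - ../../base        # same base manifests as prod
replicas:
  - name: api
    count: 1          # assumption: one replica is enough for dev
  - name: worker
    count: 1
commonLabels:
  environment: dev
```

Same base, different knobs: the overlay only states what differs from the shared manifests.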
Create ArgoCD apps for each environment:
# infrastructure/argocd/dev-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sspp-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/daviesbrown/sspp.git
    targetRevision: main
    path: infrastructure/k8s/overlays/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: sspp-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
# infrastructure/argocd/staging-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sspp-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/daviesbrown/sspp.git
    targetRevision: main
    path: infrastructure/k8s/overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: sspp-staging
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
# infrastructure/argocd/prod-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sspp-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/daviesbrown/sspp.git
    targetRevision: main
    path: infrastructure/k8s/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: sspp-prod
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Deploy all environments:
kubectl apply -f infrastructure/argocd/
Now you have:
- Dev environment (auto-synced from Git)
- Staging environment (auto-synced from Git)
- Prod environment (auto-synced from Git)
One Git push → all environments update automatically.
App of Apps Pattern
Problem: Managing many ArgoCD applications manually.
Solution: ArgoCD application that manages other applications.
# infrastructure/argocd/root-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sspp-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/daviesbrown/sspp.git
    targetRevision: main
    path: infrastructure/argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Apply once:
kubectl apply -f infrastructure/argocd/root-app.yaml
Now:
- sspp-apps manages all child applications
- Add a new app → commit to Git → ArgoCD creates it
- Remove an app → delete from Git → ArgoCD removes it
This is GitOps managing GitOps! 🤯
Progressive Delivery with ArgoCD
Blue/Green Deployments
Install Argo Rollouts:
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
Define Rollout:
# infrastructure/k8s/api-rollout.yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
  namespace: sspp-prod
spec:
  replicas: 5
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: davidbrown77/sspp-api:v2.0.0
          ports:
            - containerPort: 3000
  strategy:
    blueGreen:
      activeService: api
      previewService: api-preview
      autoPromotionEnabled: false
      scaleDownDelaySeconds: 30
Services:
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: api-preview
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
Deploy new version:
# Update image tag in Git
vim infrastructure/k8s/api-rollout.yaml
# Change image: davidbrown77/sspp-api:v2.1.0
git commit -am "Deploy API v2.1.0"
git push
# ArgoCD syncs, creates preview environment
# Test preview: curl http://api-preview:80
# Promote to production
kubectl argo rollouts promote api -n sspp-prod
Canary Deployments
strategy:
  canary:
    steps:
      - setWeight: 10
      - pause: {duration: 5m}
      - setWeight: 25
      - pause: {duration: 5m}
      - setWeight: 50
      - pause: {duration: 5m}
      - setWeight: 75
      - pause: {duration: 5m}
What this does:
- Deploy new version to 10% of Pods
- Wait 5 minutes
- If healthy, increase to 25%
- Continue until 100%
If error rate spikes, rollback automatically.
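Automatic rollback isn't free, though: the canary has to be wired to an analysis. A minimal sketch of an Argo Rollouts AnalysisTemplate, assuming a reachable Prometheus at the address shown (the metric name, labels, and thresholds are illustrative, not from this repo):

```yaml
# Sketch: fail the canary when API success rate drops below 95%
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: sspp-prod
spec:
  metrics:
    - name: success-rate
      interval: 1m                    # re-evaluate every minute
      successCondition: result[0] >= 0.95
      failureLimit: 3                 # 3 failed measurements → abort rollout
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090  # assumed location
          query: |
            sum(rate(http_requests_total{service="api",status!~"5.."}[5m]))
            /
            sum(rate(http_requests_total{service="api"}[5m]))
```

The canary then references it via `analysis: {templates: [{templateName: success-rate}]}` under `strategy.canary`, and a failing analysis aborts the rollout back to the stable version.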
ArgoCD Notifications
Get notified when deployments happen.
Install ArgoCD Notifications
Note: since ArgoCD v2.3 the notifications engine ships bundled with ArgoCD itself; the standalone manifests below are only needed for older versions.
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/argocd-notifications/stable/manifests/install.yaml
Configure Slack Notifications
# infrastructure/argocd/notifications-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-notifications-cm
  namespace: argocd
data:
  service.slack: |
    token: $slack-token
  template.app-deployed: |
    message: |
      Application {{.app.metadata.name}} deployed!
      Revision: {{.app.status.sync.revision}}
      Author: {{.app.status.operationState.operation.initiatedBy.username}}
    slack:
      attachments: |
        [{
          "title": "{{.app.metadata.name}}",
          "title_link": "{{.context.argocdUrl}}/applications/{{.app.metadata.name}}",
          "color": "#18be52",
          "fields": [{
            "title": "Sync Status",
            "value": "{{.app.status.sync.status}}",
            "short": true
          }, {
            "title": "Health Status",
            "value": "{{.app.status.health.status}}",
            "short": true
          }]
        }]
  trigger.on-deployed: |
    - when: app.status.operationState.phase in ['Succeeded']
      send: [app-deployed]
Add annotations to applications:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sspp-prod
  annotations:
    notifications.argoproj.io/subscribe.on-deployed.slack: deployments-channel
spec:
  # ... rest of spec
Now Slack gets notified on every deployment! 📢
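The `$slack-token` placeholder above resolves against the notifications secret. A sketch of that secret (the token value is obviously a placeholder to be replaced with a real Slack bot token, ideally injected by a secrets manager rather than committed to Git):

```yaml
# Sketch: secret backing the $slack-token reference in argocd-notifications-cm
apiVersion: v1
kind: Secret
metadata:
  name: argocd-notifications-secret
  namespace: argocd
stringData:
  slack-token: xoxb-replace-me   # placeholder — never commit a real token
```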
RBAC for ArgoCD
Control who can deploy what.
# infrastructure/argocd/rbac-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    # Developers can sync the dev app (application objects are <project>/<app>)
    p, role:developer, applications, sync, default/sspp-dev, allow
    p, role:developer, applications, get, */*, allow
    g, developers@example.com, role:developer

    # DevOps can manage all apps
    p, role:devops, applications, *, */*, allow
    g, devops@example.com, role:devops

    # Admins can do everything
    p, role:admin, *, *, *, allow
    g, admins@example.com, role:admin
Now:
- Developers can only deploy to dev
- DevOps can deploy to all environments
- Admins have full control
CI/CD Integration
Combine GitHub Actions + ArgoCD:
# .github/workflows/api.yml
name: API CI/CD

on:
  push:
    branches: [main]
    paths: ['services/api/**']

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push Docker image
        run: |
          # Assumes the runner has already logged in to the registry
          docker build -t davidbrown77/sspp-api:${{ github.sha }} services/api
          docker push davidbrown77/sspp-api:${{ github.sha }}

      - name: Update manifest
        run: |
          # Update image tag in Git
          sed -i "s|davidbrown77/sspp-api:.*|davidbrown77/sspp-api:${{ github.sha }}|" \
            infrastructure/k8s/overlays/prod/kustomization.yaml
          git config user.name "GitHub Actions"
          git config user.email "actions@github.com"
          git add infrastructure/k8s/overlays/prod/kustomization.yaml
          git commit -m "Update API image to ${{ github.sha }}"
          git push

# ArgoCD detects the change and deploys automatically!
Flow:
- Push code → GitHub Actions
- GitHub Actions builds Docker image
- GitHub Actions updates manifest in Git
- ArgoCD detects Git change
- ArgoCD deploys new image
Complete automation! 🚀
Disaster Recovery with ArgoCD
Backup ArgoCD Configuration
# Export all applications
argocd app list -o yaml > argocd-apps-backup.yaml
# Export projects
kubectl get appprojects -n argocd -o yaml > argocd-projects-backup.yaml
# Export RBAC
kubectl get cm argocd-rbac-cm -n argocd -o yaml > argocd-rbac-backup.yaml
Restore ArgoCD
# Reinstall ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Restore applications
kubectl apply -f argocd-apps-backup.yaml
kubectl apply -f argocd-projects-backup.yaml
kubectl apply -f argocd-rbac-backup.yaml
# ArgoCD syncs everything from Git!
Because everything is in Git, recovery is fast.
ArgoCD Best Practices
1. Use App of Apps Pattern
One root app manages all child apps.
2. Enable Auto-Sync + Self-Heal
syncPolicy:
  automated:
    prune: true
    selfHeal: true
Git is the source of truth. Period.
3. Use Kustomize or Helm
Don't duplicate manifests. Use overlays.
4. Set Up Notifications
Slack/Email alerts for deployments.
5. Implement RBAC
Restrict who can deploy to production.
6. Use Projects for Multi-Tenancy
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-backend
spec:
  sourceRepos:
    - https://github.com/daviesbrown/sspp.git
  destinations:
    - namespace: 'backend-*'
      server: https://kubernetes.default.svc
7. Monitor ArgoCD Metrics
kubectl port-forward svc/argocd-metrics -n argocd 8082:8082
curl http://localhost:8082/metrics
Integrate with Prometheus.
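If the Prometheus Operator is running in the cluster, the scrape can be declared instead of configured by hand. A sketch, assuming the operator's CRDs are installed and the stock `argocd-metrics` Service label/port names from the official manifests:

```yaml
# Sketch: scrape the application controller's metrics endpoint
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: argocd
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics  # label from the stock Service
  endpoints:
    - port: metrics
```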
What We Solved
✅ Git as source of truth - All configs in Git
✅ Automated deployments - Push → ArgoCD deploys
✅ Self-healing - Manual changes reverted automatically
✅ Audit trail - Git history = deployment history
✅ Easy rollback - One-click revert in UI
✅ Multi-environment - Dev, staging, prod from same repo
✅ Progressive delivery - Blue/green, canary deployments
✅ Notifications - Slack alerts for deployments
What's Next?
We have automated GitOps deployments! But production isn't just about deployment:
❌ No observability - Can't see what's happening in production
❌ Cost unknown - Don't know what we're spending
❌ Security gaps - Secrets in Git, no network policies
❌ No disaster recovery - What if everything crashes?
In Part 10 (final article), we'll tackle Production Operations.
You'll learn:
- Observability stack (Prometheus, Grafana, Loki)
- Cost optimization strategies (right-sizing, spot instances)
- Security hardening (network policies, RBAC, secrets management)
- Disaster recovery and backup strategies
- Scaling beyond one cluster (multi-region, multi-cloud)
This is where you transition from "it works" to "it's production-ready."
The Complete DevOps Journey
You've now mastered:
- Manual deployment (Part 1) - SSH, npm start
- Process management (Part 2) - PM2
- Containerization (Part 3) - Docker
- Orchestration (Part 4) - Docker Compose
- Why Kubernetes (Part 5) - Limitations of Compose
- Kubernetes (Part 6) - Self-healing, scaling
- Infrastructure as Code (Part 7) - Terraform
- Helm Packaging (Part 8) - YAML templating
- GitOps (Part 9) - ArgoCD
- Production Operations (Part 10) - Scaling, security, cost
Almost there. One more article to production-grade DevOps.
Final Architecture
┌─────────────────────────────────────────────────────┐
│                   Git Repository                    │
│ - Application code (services/api, services/worker)  │
│ - Kubernetes manifests (infrastructure/k8s)         │
│ - Terraform configs (infrastructure/terraform)      │
└───────┬─────────────────────────────────────┬───────┘
        │                                     │
        │ Push code                           │ Push manifests
        ▼                                     ▼
┌──────────────────┐                ┌─────────────────────┐
│  GitHub Actions  │                │       ArgoCD        │
│  - Test code     │                │  - Detects changes  │
│  - Build image   │                │  - Syncs cluster    │
│  - Push to GHCR  │                │  - Self-heals       │
│  - Update Git    │                └─────────┬───────────┘
└──────────────────┘                          │
                                              │ kubectl apply
                                              ▼
                            ┌────────────────────────────────┐
                            │    Kubernetes Cluster (LKE)    │
                            │  - API (2-10 replicas, HPA)    │
                            │  - Worker (3-15 replicas, HPA) │
                            │  - PostgreSQL (HA)             │
                            │  - Redis, Elasticsearch        │
                            │  - Observability stack         │
                            └───────────────┬────────────────┘
                                            │
                                   Managed by Terraform
                                            │
                                  ┌─────────▼──────────┐
                                  │    Linode Cloud    │
                                  │  - LKE cluster     │
                                  │  - NodeBalancer    │
                                  │  - Object storage  │
                                  │  - DNS             │
                                  └────────────────────┘
Everything automated. Everything in Git. Everything production-grade.
Try It Yourself
Final challenge: Set up complete GitOps workflow:
- Install ArgoCD in your cluster
- Create ArgoCD application pointing to your Git repo
- Enable auto-sync and self-heal
- Push a change to Git, watch ArgoCD deploy
- Manually change a deployment with kubectl, watch self-heal revert it
- Set up multi-environment (dev, staging, prod)
- Implement blue/green deployment with Argo Rollouts
- Add Slack notifications
- Configure RBAC
Bonus: Implement app of apps pattern for managing all applications.
Discussion
Do you use ArgoCD? Flux? Manual kubectl? What's your deployment strategy?
Share on GitHub Discussions.
Previous: Part 8: Helm - Packaging Kubernetes Applications
Next: Part 10: Scaling, Failure & Operating Like a Real Company
About the Author
I built this 12-part series to document my complete DevOps journey from manual server deployment to production-grade, GitOps-powered Kubernetes infrastructure.
This series demonstrates:
- Systems thinking - Understanding why tools exist, not just how to use them
- Production experience - Real code, real failures, real solutions
- Teaching ability - Complex topics explained clearly
- DevOps mastery - CI/CD, IaC, GitOps, observability, security
This is my Proton.ai application.
- GitHub: @daviesbrown
- LinkedIn: David Nwosu Brown
- Portfolio: Sales Signal Processing Platform
You started by SSH-ing into a server and running npm start.
You ended with a production-grade, cloud-native, GitOps-powered SaaS platform.
You're ready. Go build something amazing. 🚀