Building a Production-Grade DevSecOps Pipeline on AWS: A Complete Guide
Series Overview: This 10-part series walks you through building a real-world, production-grade DevSecOps platform on AWS from scratch — the same architecture used at mature engineering organizations. By the end, you will have six EKS clusters, a GitOps delivery model, a hardened CI/CD pipeline, runtime security, full-stack observability, and automated disaster recovery.
Live system: Everything in this series is running in production right now.
Check it: `curl https://www.matthewoladipupo.dev/health`
→ `{"status":"healthy","region":"us-east-1"}`
This is Part 1 of a 10-part series. You can follow the full series here on dev.to.
Part 1: Architecture Overview & What We Are Building
Introduction
Most DevSecOps tutorials show you one piece of the puzzle — a Kubernetes cluster here, a CI pipeline there. This series is different. We build the entire platform end to end: infrastructure-as-code, multi-environment clusters, GitOps, security policy enforcement, runtime threat detection, secrets management, observability, canary deployments, autoscaling, and backup — all wired together the way a production engineering team would actually build it.
Every component in this guide is running live. The screenshots you will see throughout this series come from the actual deployment at matthewoladipupo.dev.
The complete system: GitHub → CI/CD → ECR → ArgoCD hub → 6 EKS clusters
across 3 environments and 2 AWS regions. Every component is covered in this
10-part series.
What you will build:
| Layer | Tools |
|---|---|
| Infrastructure as Code | Terraform + Terragrunt |
| Cloud Platform | AWS (multi-account, multi-region) |
| Container Orchestration | Amazon EKS (6 clusters) |
| GitOps Delivery | ArgoCD (hub-spoke) |
| CI/CD Pipeline | GitHub Actions |
| Container Security | Trivy, Cosign, Distroless images |
| Policy Enforcement | Kyverno |
| Runtime Security | Falco |
| Secrets Management | AWS Secrets Manager + External Secrets Operator |
| Monitoring | Prometheus + Grafana (kube-prometheus-stack) |
| Logging | Fluent Bit → AWS CloudWatch |
| Canary Deployments | Argo Rollouts |
| Autoscaling | Karpenter + HPA |
| Backup & DR | Velero + S3 |
| Web Application Firewall | AWS WAF v2 |
| Threat Detection | AWS GuardDuty |
| DNS & TLS | Route53 + ACM (wildcard cert) |
| Load Balancing | AWS Load Balancer Controller |
High-Level Architecture
The platform follows a hub-spoke GitOps model across three environments and two AWS regions.
┌─────────────────────────────────────────────────────────────────────────────────┐
│ AWS ORGANIZATION │
│ │
│ ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────────────┐ │
│ │ Management Acct │ │ Dev Account │ │ Staging Account │ │
│ │ (ECR, CI Audit) │ │ 557702566877 │ │ │ │
│ │ │ │ ┌─────────────┐ │ │ ┌────────┐ ┌────────┐ │ │
│ │ ┌───────────┐ │ │ │ EKS use1 │ │ │ │EKS use1│ │EKS usw2│ │ │
│ │ │ECR: myapp │ │ │ │ (public ep) │ │ │ │ │ │ │ │ │
│ │ └───────────┘ │ │ └─────────────┘ │ │ └────────┘ └────────┘ │ │
│ └──────────────────┘ │ ┌─────────────┐ │ └──────────────────────────┘ │
│ │ │ EKS usw2 │ │ │
│ │ │ (public ep) │ │ ┌──────────────────────────┐ │
│ │ └─────────────┘ │ │ Production Account │ │
│ └──────────────────┘ │ 591120834781 │ │
│ │ │ │
│ │ ┌────────┐ ┌────────┐ │ │
│ │ │EKS use1│ │EKS usw2│ │ │
│ │ │ HUB │ │ Spoke │ │ │
│ │ │(ArgoCD)│ │ │ │ │
│ │ └────────┘ └────────┘ │ │
│ └──────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────────────┘
Detailed Architecture Diagram
DEVELOPER WORKFLOW
──────────────────
git push → GitHub
│
▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│ GITHUB ACTIONS CI PIPELINE │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────────────┐ │
│ │ Lint + │ │ Trivy │ │ Docker │ │ Cosign │ │ Push to ECR │ │
│ │ Test │→ │ Scan │→ │ Build │→ │ Sign │→ │ (multi-region) │ │
│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────────────┘ │
│ │ │
│ OIDC (no static keys) │ │
│ IAM Roles per cluster │ │
└────────────────────────────────────────────────────────────────────┼────────────┘
│
▼
myapp-gitops repo
(image tag updated)
│
▼
┌─────────────────────────────────────────────────────────────────────────────────┐
│ ARGOCD HUB (myapp-production-use1) │
│ │
│ ApplicationSets (list generators per cluster) │
│ ┌─────────────────────────────────────────────────────────────────────────┐ │
│ │ environments/ infrastructure/ argocd/ │ │
│ │ ├─ dev ├─ monitoring └─ project-*.yaml │ │
│ │ ├─ staging ├─ logging │ │
│ │ └─ production ├─ eso │ │
│ │ ├─ kyverno │ │
│ │ ├─ falco │ │
│ │ ├─ velero │ │
│ │ ├─ karpenter │ │
│ │ └─ argo-rollouts │ │
│ └─────────────────────────────────────────────────────────────────────────┘ │
│ │
│ Syncs to ──────────────────────────────────────────────────────────────────► │
└──────┬──────────────────────────────────────────────────────────────────────────┘
│ VPC Peering (private endpoints)
├────────────────────────────────► myapp-production-usw2
├────────────────────────────────► myapp-staging-use1
├────────────────────────────────► myapp-staging-usw2
├────────────────────────────────► myapp-dev-use1
└────────────────────────────────► myapp-dev-usw2
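Concretely, each environment is driven by an ApplicationSet with a list generator enumerating its clusters. The sketch below illustrates the production pair; the repo path, AppProject name, and spoke API endpoint are illustrative assumptions, not the series' actual manifests:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: myapp-production
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: myapp-production-use1
            url: https://kubernetes.default.svc   # the hub cluster itself
          - cluster: myapp-production-usw2
            url: https://<spoke-api-endpoint>     # spoke registered over VPC peering
  template:
    metadata:
      name: 'myapp-{{cluster}}'
    spec:
      project: production                          # assumed AppProject name
      source:
        repoURL: https://github.com/MatthewDipo/myapp-gitops
        targetRevision: main
        path: apps/myapp
        helm:
          valueFiles:
            - values-production.yaml
      destination:
        server: '{{url}}'
        namespace: myapp
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

Adding a cluster then becomes a one-line change to the generator's element list, which is exactly why the hub-spoke model scales to six clusters without six ArgoCD installations.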
Per-Cluster Component Stack
Every cluster runs the same security and observability baseline. The diagram below shows what runs on each cluster after bootstrapping:
┌──────────────────────────────────────────────────────────────────────┐
│ EKS CLUSTER (per-cluster stack) │
│ │
│ SYSTEM NAMESPACES │
│ ┌─────────────┐ ┌──────────────────┐ ┌────────────────────────┐ │
│ │ kube-system│ │ kyverno │ │ falco │ │
│ │ (aws-lbc, │ │ (policy engine) │ │ (runtime security) │ │
│ │ coreDNS) │ │ │ │ │ │
│ └─────────────┘ └──────────────────┘ └────────────────────────┘ │
│ │
│ ┌─────────────────────┐ ┌───────────────────────────────────────┐ │
│ │ external-secrets │ │ monitoring │ │
│ │ (ESO operator) │ │ Prometheus ── Grafana ── Alertmanager│ │
│ └─────────────────────┘ └───────────────────────────────────────┘ │
│ │
│ ┌───────────────────────┐ ┌──────────────────┐ ┌─────────────┐ │
│ │ logging │ │ velero │ │ karpenter │ │
│ │ (Fluent Bit DS) │ │ (backup) │ │ (prod only)│ │
│ └───────────────────────┘ └──────────────────┘ └─────────────┘ │
│ │
│ ┌───────────────────────┐ ┌──────────────────────────────────────┐ │
│ │ argo-rollouts │ │ myapp (prod) / myapp (dev/staging) │ │
│ │ (canary controller) │ │ Rollout ──► canary ──► stable │ │
│ └───────────────────────┘ └──────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────────┘
Network Architecture
AWS REGION: us-east-1
┌───────────────────────────────────────────────────────────┐
│ VPC: production-use1 (10.20.0.0/16) │
│ │
│ ┌──────────────────────────┐ ┌──────────────────────┐ │
│ │ Public Subnets │ │ Private Subnets │ │
│ │ 10.20.0.0/24 (us-east-1a) 10.20.8.0/21 (use1a) │ │
│ │ 10.20.1.0/24 (us-east-1b) 10.20.16.0/21 (use1b) │ │
│ │ 10.20.2.0/24 (us-east-1c) 10.20.24.0/21 (use1c) │ │
│ │ │ │ │ │
│ │ NAT Gateways │ │ EKS Node Groups │ │
│ │ Internet Gateway │ │ EKS API endpoint │ │
│ │ ALB (internet-facing) │ │ (private only) │ │
│ └──────────────────────────┘ └──────────────────────┘ │
└───────────────────────────────────────────────────────────┘
│ │
│ VPC Peering │
│ (private, encrypted) │
▼ ▼
┌────────────────────┐ ┌────────────────────────────────┐
│ VPC: prod-usw2 │ │ VPC: staging-use1 │
│ (10.21.0.0/16) │ │ (10.10.0.0/16) │
└────────────────────┘ └────────────────────────────────┘
Internet Traffic Flow:
User → Route53 (latency routing) → ALB → AWS WAF → ALB Target Group
→ Pod (via ALB target type: ip, directly to pod IP)
→ Response back to user
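In Kubernetes terms, that traffic flow is declared on the application's Ingress and realized by the AWS Load Balancer Controller. A minimal sketch, assuming the ACM certificate and WAF Web ACL ARNs shown as placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip        # register pod IPs directly, no NodePort hop
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80},{"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'    # HTTP → HTTPS redirect
    alb.ingress.kubernetes.io/certificate-arn: <acm-wildcard-cert-arn>   # placeholder
    alb.ingress.kubernetes.io/wafv2-acl-arn: <waf-web-acl-arn>           # placeholder
spec:
  ingressClassName: alb
  rules:
    - host: www.matthewoladipupo.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```

The `target-type: ip` annotation is what makes the ALB send traffic straight to pod IPs, which is also what lets Argo Rollouts shift traffic at pod granularity later.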
CI/CD Pipeline Flow
┌─────────────────────────────────────────────────────────────────────┐
│ GitHub: MatthewDipo/myapp │
│ │
│ Developer: git push origin main │
└──────────────────────────────┬──────────────────────────────────────┘
│ triggers
▼
┌─────────────────────────────────────────────────────────────────────┐
│ GitHub Actions: .github/workflows/ci.yaml │
│ │
│ Job 1: lint-and-test │
│ └─ npm ci && npm test │
│ │
│ Job 2: scan (needs: lint-and-test) │
│ └─ trivy image --severity HIGH,CRITICAL │
│ │
│ Job 3: build-push-sign (needs: scan) │
│ ├─ OIDC → assume IAM role (no static AWS keys) │
│ ├─ docker build (distroless:nonroot base) │
│ ├─ docker push → ECR us-east-1 (management account) │
│ ├─ docker push → ECR us-west-2 (management account) │
│ ├─ cosign sign --key awskms:// (KMS signing key) │
│ └─ Write S3 audit log (image digest + timestamp) │
│ │
│ Job 4: update-gitops (needs: build-push-sign) │
│ └─ Patch myapp-gitops/apps/myapp/values-*.yaml (image.tag) │
└──────────────────────────────┬──────────────────────────────────────┘
│ git push to myapp-gitops
▼
┌─────────────────────────────────────────────────────────────────────┐
│ GitHub: MatthewDipo/myapp-gitops │
│ ArgoCD detects diff → triggers sync per cluster │
│ │
│ Production: Argo Rollouts Canary │
│ ├─ Step 1: setWeight 20% (canary gets 20% traffic) │
│ ├─ Step 2: pause 5 minutes │
│ ├─ Step 3: AnalysisRun (check error rate < 1%) │
│ └─ Step 4: setWeight 100% (promote to stable) │
│ │
│ Dev/Staging: Rolling Update (immediate) │
└─────────────────────────────────────────────────────────────────────┘
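The four production canary steps above map onto an Argo Rollouts spec roughly like this — the Service and AnalysisTemplate names (`myapp-canary`, `myapp-stable`, `error-rate-check`) and the image reference are assumptions for illustration:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
  strategy:
    canary:
      canaryService: myapp-canary      # receives the canary-weighted traffic
      stableService: myapp-stable
      steps:
        - setWeight: 20                # Step 1: canary gets 20% of traffic
        - pause: {duration: 5m}        # Step 2: hold for observation
        - analysis:                    # Step 3: automated promotion gate
            templates:
              - templateName: error-rate-check
        - setWeight: 100               # Step 4: promote to stable
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: <ecr-registry>/myapp:<tag>   # placeholder; CI patches the real tag
```

If the AnalysisRun in Step 3 fails, the Rollout aborts and traffic returns to the stable ReplicaSet automatically — no human rollback required.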
Security Architecture
┌─────────────────────────────────────────────────────────────────────────┐
│ SECURITY LAYERS │
│ │
│ Layer 1: SUPPLY CHAIN SECURITY │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ Source: GitHub branch protection + required reviews │ │
│ │ Build: Trivy CVE scan (fails pipeline on HIGH/CRITICAL) │ │
│ │ Image: Distroless base (no shell, no package manager) │ │
│ │ Sign: Cosign + AWS KMS (cryptographic attestation) │ │
│ │ Verify: Kyverno policy blocks unsigned images at admission │ │
│ └────────────────────────────────────────────────────────────────────┘ │
│ │
│ Layer 2: INFRASTRUCTURE SECURITY │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ Network: Private EKS endpoints (staging/prod) │ │
│ │ VPC peering (no internet traversal between clusters) │ │
│ │ NetworkPolicies (deny-all default, allow explicitly) │ │
│ │ IAM: IRSA (pod-level IAM, no node-level credentials) │ │
│ │ OIDC for GitHub Actions (no static AWS keys in CI) │ │
│ │ Secrets: AWS Secrets Manager (never stored in Git) │ │
│ │ KMS: Envelope encryption for EKS secrets + ECR + S3 │ │
│ └────────────────────────────────────────────────────────────────────┘ │
│ │
│ Layer 3: WORKLOAD ADMISSION CONTROL (Kyverno) │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ ✗ Block privileged containers │ │
│ │ ✗ Block hostPath volume mounts │ │
│ │ ✗ Block containers running as root (uid=0) │ │
│ │ ✗ Block images without valid Cosign signature │ │
│ │ ✗ Block missing resource limits │ │
│ │ ✓ Allow myapp from ECR with valid signature │ │
│ └────────────────────────────────────────────────────────────────────┘ │
│ │
│ Layer 4: RUNTIME THREAT DETECTION (Falco) │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ Alert on: shell spawned in container │ │
│ │ Alert on: sensitive file read (/etc/shadow, /etc/passwd) │ │
│ │ Alert on: unexpected outbound network connection │ │
│ │ Alert on: privilege escalation attempts │ │
│ │ Output: CloudWatch Logs (via Fluent Bit) │ │
│ └────────────────────────────────────────────────────────────────────┘ │
│ │
│ Layer 5: PERIMETER SECURITY │
│ ┌────────────────────────────────────────────────────────────────────┐ │
│ │ AWS WAF v2: Managed rules (OWASP Top 10, SQL injection, XSS) │ │
│ │ AWS GuardDuty: Account-level threat intelligence │ │
│ │ ACM: TLS 1.2+ enforced, HTTP → HTTPS redirect │ │
│ └────────────────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
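Layer 3's signature gate, for example, can be expressed as a single Kyverno ClusterPolicy. This is a sketch assuming a KMS-backed Cosign key — the ECR image pattern and key ARN are placeholders, not the series' actual values:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: Enforce     # reject, don't just audit
  webhookTimeoutSeconds: 30
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "*.dkr.ecr.us-east-1.amazonaws.com/myapp*"   # placeholder registry pattern
          attestors:
            - entries:
                - keys:
                    kms: awskms:///arn:aws:kms:us-east-1:111122223333:key/EXAMPLE  # placeholder ARN
```

Because this runs at admission, an unsigned image never reaches a node — it complements Falco, which catches what admission control cannot see at runtime.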
GitOps Repository Structure
Two GitHub repositories drive everything after the CI pipeline builds the image:
MatthewDipo/myapp-gitops/
│
├── argocd/
│ ├── project-dev.yaml # AppProject: dev clusters
│ ├── project-staging.yaml # AppProject: staging clusters
│ └── project-production.yaml # AppProject: production clusters
│
├── environments/
│ ├── dev/
│ │ └── applicationset.yaml # Deploys myapp to dev-use1, dev-usw2
│ ├── staging/
│ │ └── applicationset.yaml # Deploys myapp to staging-use1, staging-usw2
│ └── production/
│ └── applicationset.yaml # Deploys myapp to prod-use1, prod-usw2
│
├── infrastructure/
│ ├── monitoring/
│ │ ├── applicationset.yaml # kube-prometheus-stack (4 clusters)
│ │ ├── prometheus-values.yaml
│ │ └── alert-rules/
│ │ └── applicationset.yaml # PrometheusRule CRDs
│ ├── logging/
│ │ └── applicationset.yaml # Fluent Bit DaemonSet (6 clusters)
│ ├── eso/
│ │ └── applicationset.yaml # External Secrets Operator (6 clusters)
│ ├── kyverno/
│ │ └── applicationset.yaml # Kyverno + policies (6 clusters)
│ ├── falco/
│ │ └── applicationset.yaml # Falco DaemonSet (6 clusters)
│ ├── velero/
│ │ └── applicationset.yaml # Velero (6 clusters)
│ ├── karpenter/
│ │ ├── applicationset.yaml # Karpenter controller (2 prod)
│ │ └── nodepools/
│ │ └── applicationset.yaml # NodePool + EC2NodeClass CRDs
│ └── argo-rollouts/
│ └── applicationset.yaml # Argo Rollouts controller (2 prod)
│
└── apps/
└── myapp/ # Helm chart for the application
├── Chart.yaml
├── values.yaml # Default values
├── values-dev.yaml
├── values-staging.yaml
├── values-production.yaml
└── templates/
├── deployment.yaml # Deployment OR Rollout (conditional)
├── service.yaml
├── service-canary.yaml # Canary service (prod only)
├── ingress.yaml
├── hpa.yaml
├── networkpolicy.yaml
├── serviceaccount.yaml
├── external-secret.yaml
├── servicemonitor.yaml # Prometheus scraping
└── analysis-template.yaml
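The "Deployment OR Rollout (conditional)" template deserves a closer look. One way to sketch that conditional in Helm — the `rollout.enabled` values key and the inline structure here are assumptions about how such a template could be written, not the chart's actual source:

```yaml
{{- /* templates/deployment.yaml — renders a Rollout in prod, a Deployment elsewhere */}}
{{- if .Values.rollout.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: Rollout
{{- else }}
apiVersion: apps/v1
kind: Deployment
{{- end }}
metadata:
  name: myapp
spec:
  replicas: {{ .Values.replicaCount }}
  {{- if .Values.rollout.enabled }}
  strategy:
    canary:
      canaryService: myapp-canary          # assumed service names
      stableService: myapp
      steps: {{- toYaml .Values.rollout.steps | nindent 8 }}
  {{- end }}
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Flipping one boolean in `values-production.yaml` versus `values-dev.yaml` then selects the delivery strategy per environment while keeping a single template.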
Infrastructure Repository Structure
MatthewDipo/myapp-infra/
│
├── _modules/ # Reusable Terraform modules
│ ├── vpc/
│ ├── eks/
│ ├── kms/
│ ├── iam/
│ ├── ecr/
│ ├── waf/
│ ├── guardduty/
│ ├── eso-irsa/
│ ├── fluent-bit-irsa/
│ ├── karpenter/
│ └── velero/
│
└── live/ # Terragrunt configurations (per env/region)
├── terragrunt.hcl # Root config (provider, remote state)
├── dev/
│ ├── us-east-1/
│ │ ├── vpc/terragrunt.hcl
│ │ ├── kms/terragrunt.hcl
│ │ ├── eks/terragrunt.hcl
│ │ ├── iam/terragrunt.hcl
│ │ └── fluent-bit-irsa/terragrunt.hcl
│ └── us-west-2/
│ └── ... (mirror of use1)
├── staging/
│ ├── us-east-1/
│ │ ├── vpc/ kms/ eks/ iam/
│ │ ├── waf/terragrunt.hcl
│ │ ├── guardduty/terragrunt.hcl
│ │ ├── eso-irsa/terragrunt.hcl
│ │ └── fluent-bit-irsa/terragrunt.hcl
│ └── us-west-2/
│ └── ...
└── production/
├── us-east-1/
│ ├── vpc/ kms/ eks/ iam/
│ ├── waf/
│ ├── guardduty/
│ ├── eso-irsa/
│ ├── fluent-bit-irsa/
│ ├── karpenter/
│ └── velero/
└── us-west-2/
└── ...
AWS Account Structure
┌──────────────────────────────────────────────┐
│ AWS Organizations Root │
│ │
│ ┌───────────────────────────────────────┐ │
│ │ Management / Root Account │ │
│ │ • AWS SSO (Identity Center) │ │
│ │ • ECR repositories (shared) │ │
│ │ • S3 CI audit bucket │ │
│ │ • GitHub OIDC provider │ │
│ └───────────────────────────────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ Dev │ │ Staging │ │Production│ │
│ │ Account │ │ Account │ │ Account │ │
│ │ │ │ │ │ │ │
│ │ 2x EKS │ │ 2x EKS │ │ 2x EKS │ │
│ │ VPCs │ │ VPCs │ │ VPCs │ │
│ │ KMS keys │ │ KMS keys │ │ KMS keys │ │
│ │ Secrets │ │ Secrets │ │ Secrets │ │
│ │ Manager │ │ Manager │ │ Manager │ │
│ └──────────┘ └──────────┘ └──────────┘ │
└──────────────────────────────────────────────┘
Technology Decision Rationale
Understanding WHY each tool was chosen matters as much as knowing HOW to configure it.
Terragrunt over plain Terraform
Terragrunt provides DRY (Don't Repeat Yourself) configuration. Without it, you would have near-identical provider, backend, and module blocks repeated across 18+ directories. Terragrunt's include and dependency blocks eliminate 90% of that duplication while keeping each environment's overrides explicit and auditable.
ArgoCD hub-spoke over fleet management per cluster
Running ArgoCD on every cluster is operationally expensive. The hub-spoke model means one ArgoCD installation manages all six clusters via VPC peering. This single pane of glass dramatically simplifies debugging — you see all cluster states in one place.
Kyverno over OPA/Gatekeeper
Kyverno policies are written in YAML and operate on the same resource schema as Kubernetes objects. OPA/Gatekeeper requires learning Rego, a purpose-built policy language. For Kubernetes-native teams, Kyverno is faster to adopt and maintain.
External Secrets Operator over Sealed Secrets
Sealed Secrets encrypts secrets and commits them to Git — meaning the encrypted value is your source of truth. ESO keeps secrets out of Git entirely: the secret lives in AWS Secrets Manager, and ESO fetches it at runtime with IRSA credentials. This is a fundamentally stronger posture because a compromised Git repo never exposes secret material.
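What lands in Git with ESO is only a pointer, never secret material. A sketch of such a manifest — the ClusterSecretStore name and the Secrets Manager path are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-secrets
  namespace: myapp
spec:
  refreshInterval: 1h                  # re-sync from Secrets Manager hourly
  secretStoreRef:
    name: aws-secrets-manager          # ClusterSecretStore authenticated via IRSA (assumed name)
    kind: ClusterSecretStore
  target:
    name: myapp-secrets                # Kubernetes Secret that ESO creates and owns
    creationPolicy: Owner
  data:
    - secretKey: DATABASE_URL
      remoteRef:
        key: production/myapp          # path in AWS Secrets Manager (placeholder)
        property: database_url
```

Rotating the value in Secrets Manager propagates to the cluster on the next refresh interval with no Git commit and no redeploy of the manifest.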
Argo Rollouts over plain Kubernetes rolling updates
Rolling updates are binary — you either roll forward or roll back. Argo Rollouts adds weighted traffic splitting between stable and canary versions, analysis runs (automated metric-based promotion gates), and pause steps for manual inspection. A canary deployment that automatically fails back on a rising error rate is far safer than a rolling update you monitor manually.
Distroless base images
The Google distroless nonroot image contains only the application runtime and its direct dependencies — no shell (sh, bash), no package manager (apt, apk), no curl or wget. If an attacker achieves code execution inside the container, they have almost no tools available to escalate or exfiltrate. Combined with Falco alerting on shell spawning, you get both prevention and detection.
Cost Estimate
These are rough estimates for running the full stack 24/7 in AWS us-east-1 + us-west-2. Production workloads should be sized to actual usage.
| Component | Approx. Monthly Cost |
|---|---|
| 6x EKS cluster control planes | ~$216 ($0.10/hr × 6) |
| 12x EC2 t3.medium nodes (2 per cluster) | ~$300 |
| 6x EBS volumes (gp2, 50–100GB each) | ~$60 |
| NAT Gateways (2 per VPC, 6 VPCs) | ~$250 |
| ALBs (production + monitoring) | ~$30 |
| Route53 hosted zone + queries | ~$5 |
| ACM (free) | $0 |
| ECR storage | ~$5 |
| CloudWatch logs (Fluent Bit) | ~$15 |
| S3 (Velero backups, CI audit) | ~$10 |
| AWS WAF | ~$15 |
| GuardDuty | ~$10 |
| Secrets Manager | ~$2 |
| Total | ~$918/month |
Cost reduction tip: For a demo/learning setup, use one node per cluster, t3.small instances, and skip the second region. This brings the cost down to roughly $200–300/month.
Prerequisites
Before starting Part 2, ensure you have the following:
Tools to install:
# AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip && sudo ./aws/install
# Terraform 1.6+
brew install terraform # or download from terraform.io
# Terragrunt 0.54+
brew install terragrunt # or download from terragrunt.gruntwork.io
# kubectl
curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Helm 3
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# ArgoCD CLI
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd /usr/local/bin/argocd
# Cosign
curl -sSfL https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64 \
-o cosign && sudo install -m 0755 cosign /usr/local/bin/cosign
# Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key \
  | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" \
  | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update && sudo apt-get install trivy
AWS accounts needed:
- AWS Organization root account (or a management account)
- Three member accounts: dev, staging, production
- AWS SSO (IAM Identity Center) configured
GitHub:
- Two repositories: `myapp` (application code) and `myapp-gitops` (manifests)
- A Personal Access Token for ArgoCD to pull from the GitOps repo
Domain name:
- A registered domain you control (we use `matthewoladipupo.dev`)
- Nameservers pointed to Route53
Series Roadmap
| Part | Title |
|---|---|
| Part 1 | Architecture Overview (this article) |
| Part 2 | AWS Foundation: Organizations, SSO, and Account Setup |
| Part 3 | Infrastructure as Code: Terraform Modules + Terragrunt |
| Part 4 | EKS Multi-Cluster: Six Clusters Across Two Regions |
| Part 5 | GitOps with ArgoCD: Hub-Spoke Model |
| Part 6 | CI/CD Pipeline: GitHub Actions, Trivy, Cosign, ECR |
| Part 7 | Secrets Management: AWS Secrets Manager + ESO + IRSA |
| Part 8 | Security Stack: Kyverno, Falco, WAF, GuardDuty |
| Part 9 | Observability: Prometheus, Grafana, Fluent Bit, CloudWatch |
| Part 10 | Resilience: Karpenter, HPA, Argo Rollouts, Velero |
| Appendix | Live Data |
| Runbook | Day-2 Operations Guide |
Screenshot Placeholders
SCREENSHOT: ArgoCD UI showing all 6 clusters registered and all ApplicationSets synced
SCREENSHOT: Grafana dashboard — Node CPU/Memory overview across production clusters
SCREENSHOT: GitHub Actions workflow showing all steps passing (lint → scan → build → sign → push → gitops update)
SCREENSHOT: AWS ECR showing signed image with cosign attestation tag
If this was useful, follow me on dev.to — I will publish Part 2 next Wednesday covering the AWS Organizations + IAM Identity Center setup.
GitHub: The companion infrastructure repo is at
MatthewDipo/myapp-infra
Questions? Drop them in the comments — I read and reply to every one.
Next up: Part 2 — AWS Foundation: Organizations, SSO, and Account Setup
Source code: myapp-infra | myapp-gitops | myapp
Runbook: Day-2 Operations Guide — every operational procedure for this pipeline.