jobayer ahmed

Managing Multi-Cluster Deployments with ArgoCD: Complete Setup Guide for Kubernetes

Introduction

Managing applications across multiple Kubernetes clusters can be challenging. Whether you're running development, staging, and production environments, or managing geographically distributed clusters, you need a consistent and reliable deployment strategy. ArgoCD, a declarative GitOps continuous delivery tool, provides an elegant solution to this problem.

In this guide, we'll explore how to set up ArgoCD in a single management cluster and use it to deploy applications across multiple Kubernetes clusters. This approach eliminates the need to install ArgoCD in every cluster, simplifying your infrastructure and reducing operational overhead.

Why Multi-Cluster ArgoCD?

The Challenge

Organizations often run multiple Kubernetes clusters for various reasons:

  • Environment Separation: Development, staging, and production clusters
  • Geographic Distribution: Clusters in different regions for low latency
  • Compliance Requirements: Data residency regulations requiring regional clusters
  • Resource Isolation: Separating workloads by team or business unit
  • High Availability: Active-active or active-passive setups across data centers

Managing deployments across these clusters traditionally requires either:

  • Installing separate CD tools in each cluster (expensive and complex)
  • Using custom scripts (error-prone and hard to maintain)
  • Manual deployments (slow and inconsistent)

The ArgoCD Solution

ArgoCD's multi-cluster capability offers significant advantages:

Centralized Management: Deploy and manage applications across all clusters from a single control plane. You get one dashboard to view the health of applications across your entire infrastructure.

Consistent GitOps Workflow: Use the same Git-based workflow for all clusters. Your Git repository remains the single source of truth, whether deploying to one cluster or ten.

Reduced Infrastructure Costs: Install ArgoCD once in a management cluster instead of maintaining separate instances. This reduces CPU, memory, and storage requirements significantly.

Simplified Operations: Fewer ArgoCD installations mean fewer upgrades, fewer security patches, and less monitoring overhead.

Disaster Recovery: If a workload cluster fails, your ArgoCD instance survives in the management cluster, making recovery easier.

RBAC and Security: Centralized access control and audit logging across all deployments. You can enforce organization-wide policies from one place.

Architecture Overview

In a multi-cluster ArgoCD setup:

  1. Management Cluster: Hosts the ArgoCD installation (API server, repository server, application controller)
  2. Workload Clusters: Run your applications but don't need ArgoCD installed
  3. Git Repository: Stores your application manifests and serves as the source of truth
  4. Communication: ArgoCD connects to workload clusters via Kubernetes API using service account tokens

Prerequisites

Before starting, ensure you have:

  • Multiple Kubernetes clusters (1.19+)
  • kubectl configured with access to all clusters
  • helm (v3+) installed
  • A wildcard TLS certificate (optional, for HTTPS)
  • Administrative access to all clusters

Part 1: Setting Up ArgoCD in the Management Cluster

Step 1: Create Namespace and Prepare TLS

First, create the ArgoCD namespace in your management cluster:

kubectl create namespace argocd

If you have a wildcard TLS certificate, copy it to the ArgoCD namespace:

kubectl create secret tls wildcard-tls \
  --cert=tls.pem \
  --key=tls.key \
  -n argocd

Step 2: Add Helm Repository

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

Step 3: Configure ArgoCD Values

Create a values file, argocd-values.yaml, configured for high availability and security:

# argocd-values.yaml
global:
  domain: argocd.yourdomain.com

# API Server Configuration
server:
  replicas: 3  # High availability

  service:
    type: ClusterIP

  ingress:
    enabled: true
    ingressClassName: nginx

    hosts:
      - argocd.yourdomain.com

    tls:
      - secretName: wildcard-tls
        hosts:
          - argocd.yourdomain.com

    annotations:
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"

  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 2
      memory: 2Gi

# Application Controller
controller:
  replicas: 3  # High availability
  resources:
    requests:
      cpu: 500m
      memory: 512Mi
    limits:
      cpu: 2
      memory: 2Gi

# Repository Server
repoServer:
  replicas: 3  # High availability
  resources:
    requests:
      cpu: 300m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1Gi

# Redis for caching
redis:
  enabled: true
  cluster:
    enabled: false

# Disable Dex (using admin user for simplicity)
dex:
  enabled: false

# Configuration
configs:
  cm:
    admin.enabled: true
    url: https://argocd.yourdomain.com

  rbac:
    policy.csv: |
      g, system:cluster-admins, role:admin

Step 4: Install ArgoCD

helm upgrade --install argocd argo/argo-cd \
  -n argocd \
  -f argocd-values.yaml

Wait for all pods to be ready:

kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=argocd-server -n argocd --timeout=300s

Step 5: Access ArgoCD

Get the initial admin password:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Access the UI at https://argocd.yourdomain.com and log in with:

  • Username: admin
  • Password: (from the command above)

Important: Change the admin password immediately after first login.
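
If you have the ArgoCD CLI installed, one way to do this from the terminal is shown below (argocd.yourdomain.com is the ingress hostname configured earlier):

# Log in with the initial admin credentials
argocd login argocd.yourdomain.com --username admin

# Set a new admin password (prompts for the current and new password)
argocd account update-password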

Part 2: Adding Workload Clusters

Now let's add other clusters to ArgoCD. You'll need to repeat these steps for each cluster you want to manage.

Step 1: Create ArgoCD Namespace in Workload Cluster

Connect to the workload cluster (via SSH or by switching your kubectl context to it).

Create the namespace:

kubectl create namespace argocd

Step 2: Create Service Account and RBAC

This service account allows ArgoCD to manage resources in the workload cluster:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: argocd-manager
  namespace: argocd
---
apiVersion: v1
kind: Secret
metadata:
  name: argocd-manager-token
  namespace: argocd
  annotations:
    kubernetes.io/service-account.name: argocd-manager
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-manager-role
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-manager-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-manager-role
subjects:
- kind: ServiceAccount
  name: argocd-manager
  namespace: argocd
EOF

Security Note: The above RBAC gives full cluster access. In production, you should limit permissions to specific namespaces and resources based on your security requirements.
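
As a rough sketch of a tighter alternative (illustrative only; the my-app namespace and API group list are placeholders to adapt), you could bind the service account to a namespace-scoped Role instead of a ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-manager-role
  namespace: my-app          # Namespace ArgoCD is allowed to manage
rules:
- apiGroups: ["", "apps", "networking.k8s.io"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-manager-role-binding
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: argocd-manager-role
subjects:
- kind: ServiceAccount
  name: argocd-manager
  namespace: argocd

With namespace-scoped access, also set the namespaces field (and optionally clusterResources: "false") in the cluster secret created in Step 4 so ArgoCD limits itself to those namespaces.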

Step 3: Extract Cluster Credentials

Retrieve the service account token, CA certificate, and API server address:

# Get the service account token
TOKEN=$(kubectl get secret argocd-manager-token -n argocd -o jsonpath='{.data.token}' | base64 -d)

# Try to get the cluster CA certificate from ca.crt
CA_CERT=$(kubectl get secret argocd-manager-token -n argocd -o jsonpath='{.data.ca\.crt}')

# Fallback to tls.crt if ca.crt is empty
if [ -z "$CA_CERT" ]; then
  CA_CERT=$(kubectl get secret argocd-manager-token -n argocd -o jsonpath='{.data.tls\.crt}')
fi

# Get the API server address (adjust to your cluster's control-plane node IP, e.g. 192.168.1.123)
API_SERVER="https://YOUR_CLUSTER_MASTER_NODE_IP:6443"

# Display values for verification
echo "Token: $TOKEN"
echo "CA Cert: $CA_CERT"
echo "API Server: $API_SERVER"

Important: API_SERVER should be the cluster's control-plane node IP, a VIP (if you have multiple control-plane nodes), or a load balancer address, and it must be reachable from the management cluster.
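
If you're unsure of the address, one way to read it from the workload cluster's kubeconfig (while your kubectl context points at that cluster) is:

# Print the API server URL of the current kubectl context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

If this prints a localhost or otherwise private address, substitute an address that the management cluster can actually reach.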

Step 4: Register Cluster in Management Cluster

Switch back to the management cluster.

Create a cluster secret in ArgoCD. Replace the placeholder values with the actual values from Step 3:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cluster-dev  # Use descriptive names: cluster-dev, cluster-staging, cluster-prod
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: dev-cluster  # Display name in ArgoCD UI
  server: "$API_SERVER"
  config: |
    {
      "bearerToken": "$TOKEN",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "$CA_CERT"
      }
    }
EOF

Step 5: Verify Cluster Registration

Check that the cluster appears in ArgoCD:

# Using kubectl
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster

# Or via ArgoCD CLI (if installed)
argocd cluster list

# Or check the UI
# Navigate to Settings > Clusters in the ArgoCD web interface

Step 6: Repeat for Additional Clusters

Repeat Steps 1-5 for each additional cluster (staging, production, etc.), using different secret names like cluster-staging, cluster-prod, etc.
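
Alternatively, if you have the ArgoCD CLI and kubeconfig contexts for the workload clusters on one machine, argocd cluster add automates Steps 2-4 (it creates the service account, RBAC, and cluster secret for you). A minimal example, assuming a kubeconfig context named staging-context:

# Log in to the ArgoCD API server first
argocd login argocd.yourdomain.com --username admin

# Register the cluster behind the "staging-context" kubeconfig context
argocd cluster add staging-context --name staging-cluster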

Part 3: Deploying Applications

Now that your clusters are registered, you can deploy applications across them.

Example: Deploy to Specific Cluster

Create an Application manifest:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app-dev
  namespace: argocd
spec:
  project: default

  source:
    repoURL: https://github.com/your-org/your-repo
    targetRevision: HEAD
    path: apps/my-app

  destination:
    server: https://YOUR_CLUSTER_MASTER_NODE_IP:6443  # Dev cluster API server
    namespace: my-app

  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Apply it:

kubectl apply -f my-app-dev.yaml
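
If you want the same application on every registered cluster without writing one Application manifest per cluster, an ApplicationSet with the cluster generator can template them for you. A sketch, reusing the placeholder repository and path from the example above:

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app
  namespace: argocd
spec:
  generators:
    - clusters: {}            # Iterates over every cluster registered in ArgoCD
  template:
    metadata:
      name: 'my-app-{{name}}' # e.g. my-app-dev-cluster
    spec:
      project: default
      source:
        repoURL: https://github.com/your-org/your-repo
        targetRevision: HEAD
        path: apps/my-app
      destination:
        server: '{{server}}'  # Filled in from each cluster secret
        namespace: my-app
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true

The cluster generator also accepts a label selector, so you can target only the clusters whose secrets carry a given label (for example env: dev).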

Best Practices

Security

  1. Limit RBAC Permissions: Use namespace-scoped roles instead of cluster-admin where possible
  2. Rotate Tokens: Regularly rotate service account tokens
  3. Use Private Repos: Store manifests in private Git repositories with SSH keys or access tokens
  4. Enable SSO: Configure Dex with your identity provider for production
  5. Network Policies: Restrict network access between clusters

High Availability

  1. Run Multiple Replicas: Use at least 3 replicas for ArgoCD components
  2. Resource Limits: Set appropriate CPU and memory limits
  3. Redis HA: Enable the Redis high-availability option so cache loss doesn't degrade performance
  4. Backup ArgoCD: Regularly back up ArgoCD's declarative state (Application, AppProject, and cluster secret resources), for example with argocd admin export

Operations

  1. Monitor Health: Set up alerts for application sync failures
  2. Use Projects: Organize applications into ArgoCD projects for better access control (a minimal example follows this list)
  3. Implement Progressive Delivery: Use Argo Rollouts for canary and blue-green deployments
  4. Tag Releases: Use Git tags for production deployments instead of branches
  5. Documentation: Document cluster naming conventions and deployment workflows
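
As a minimal illustration of item 2 (the project name, repository pattern, and destination are placeholders), an AppProject restricts which repositories and destinations a group of applications may use:

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-payments
  namespace: argocd
spec:
  description: Applications owned by the payments team
  sourceRepos:
    - https://github.com/your-org/payments-*     # Only these repos are allowed
  destinations:
    - server: https://YOUR_CLUSTER_MASTER_NODE_IP:6443
      namespace: 'payments-*'                    # Only these namespaces on this cluster
  clusterResourceWhitelist: []                    # Deny cluster-scoped resources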

Scaling

  1. Resource Tuning: Adjust replica counts based on the number of applications
  2. Sharding: Use application controller sharding for 1000+ applications (see the sketch after this list)
  3. Cache Optimization: Configure Redis appropriately for large deployments
  4. Cluster Separation: Consider separate ArgoCD instances for dev and prod environments
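
A rough sketch of what item 2 can look like (exact keys may differ between chart versions, so check the values reference for your release): run multiple controller replicas so the registered clusters are spread across shards.

# argocd-values.yaml excerpt (illustrative)
controller:
  replicas: 3   # Each replica processes a subset (shard) of the registered clusters

In addition, a cluster secret can carry a shard field (for example shard: "2") to pin that cluster to a particular controller replica.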

Troubleshooting

Cluster Connection Issues

If ArgoCD cannot connect to a workload cluster:

# Test connectivity from management cluster
kubectl run -it --rm debug --image=alpine --restart=Never -- sh
apk add curl
curl -k https://<cluster-api-server>:6443

Check if the API server address is reachable from the management cluster and verify firewall rules.

Application Sync Failures

Check application status:

kubectl get application -n argocd
kubectl describe application <app-name> -n argocd

View detailed sync errors in the ArgoCD UI under the application's sync status.
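
With the ArgoCD CLI you can also inspect and retry a sync from the terminal:

# Show sync status, health, and recent conditions for the application
argocd app get my-app-dev

# Trigger a sync manually and wait for the result
argocd app sync my-app-dev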

Performance Issues

If ArgoCD is slow:

  1. Increase replica counts
  2. Add more resources to controllers
  3. Enable the Redis high-availability option
  4. Implement application controller sharding

Conclusion

Multi-cluster ArgoCD provides a powerful, centralized approach to managing Kubernetes deployments across your entire infrastructure. By installing ArgoCD once and connecting multiple clusters, you achieve:

  • Consistent deployment workflows across all environments
  • Reduced operational overhead and costs
  • Centralized visibility and control
  • Simplified disaster recovery
  • Better resource utilization

This architecture scales from small teams with a few clusters to enterprises managing hundreds of clusters across multiple regions. Start with the basic setup described in this guide, then gradually adopt advanced features like ApplicationSets, Argo Rollouts, and progressive delivery as your needs grow.

The key to success is maintaining Git as your single source of truth and leveraging ArgoCD's declarative approach to ensure your desired state matches your actual state across all clusters.
