Muhammad Ahmad Khan

Integrating Multiple EKS Clusters with ArgoCD for Simplifying Kubernetes Operations

In the rapidly evolving landscape of cloud-native technologies, Kubernetes has emerged as the de facto standard for container orchestration. As organizations scale their infrastructure, managing multiple Kubernetes clusters becomes inevitable. With this growth comes the challenge of ensuring consistency, reliability, and efficiency across all clusters. Enter ArgoCD, a powerful tool for continuous delivery and GitOps workflows in Kubernetes. In this blog post, we'll explore why integrating multiple clusters with ArgoCD is essential and how to connect multiple AWS EKS clusters to a single ArgoCD installation.

Why Integration is Required

  • Centralized Management: Managing multiple Kubernetes clusters manually can be daunting and error-prone. Integrating them with ArgoCD provides a centralized platform for managing and deploying applications across all clusters, streamlining operations and reducing complexity.
  • Consistency and Standardization: Different clusters may have varying configurations, making it difficult to maintain consistency in deployments. ArgoCD ensures that configurations and deployments are standardized across all clusters, promoting best practices and ensuring uniformity.
  • Scalability: As organizations grow, they often adopt a multi-cluster strategy to distribute workloads and improve fault tolerance. Integrating these clusters with ArgoCD enables seamless scaling of applications across clusters, allowing organizations to leverage resources efficiently.
  • Monitoring (Health Checks and Logging): ArgoCD offers insights into the deployment status and health of applications across clusters through its user interface and API. By integrating multiple clusters, organizations gain centralized visibility and can monitor applications across all clusters from a single dashboard.


How to Integrate Multiple AWS EKS Clusters in ArgoCD

Let's say we have AWS accounts as follows:

  • Account A with account id: 111111111111
  • Account B with account id: 222222222222
  • Account C with account id: 333333333333

Account A is where ArgoCD runs.

To authenticate to and access the external clusters, we need to add the following configuration:

In Account A:

  • Create an IAM role named argocd-manager using the assume role (trust) policy given below.
  • Create a role policy named argocd-role-policy that allows sts:AssumeRole and attach it to the argocd-manager role. A CLI sketch for both steps follows the two policy documents below.

RolePolicyDocument

cat >argocd-role-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "*"
        }
    ]
}
EOF

AssumeRolePolicyDocument

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111111111111:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": [
                        "system:serviceaccount:argocd:argocd-server",
                        "system:serviceaccount:argocd:argocd-application-controller"
                    ],
                    "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
                }
            }
        }
    ]
}
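
For reference, both steps can also be done with the AWS CLI. A minimal sketch, assuming the two documents above are saved as argocd-role-policy.json and argocd-trust-policy.json:

# Create the argocd-manager role with the IRSA trust policy above
aws iam create-role \
    --role-name argocd-manager \
    --assume-role-policy-document file://argocd-trust-policy.json

# Attach the inline policy that lets the role assume the deployer roles
aws iam put-role-policy \
    --role-name argocd-manager \
    --policy-name argocd-role-policy \
    --policy-document file://argocd-role-policy.json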

Now, in Account B:

  • Create an IAM role named deployer with a trust relationship as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::11111111111:role/argocd-manager"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
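
To create this role via the AWS CLI, a minimal sketch (assuming the trust policy above is saved as deployer-trust-policy.json):

# Create the deployer role that argocd-manager in Account A can assume
aws iam create-role \
    --role-name deployer \
    --assume-role-policy-document file://deployer-trust-policy.json
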
  • Map this role in the aws-auth ConfigMap in the Account B EKS cluster:
kubectl edit -n kube-system configmap/aws-auth


# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::222222222222:role/my-role
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::222222222222:role/deployer # deployer role arn
      username: deployer
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::222222222222:user/admin
      username: admin
    - groups:
      - system:masters
      userarn: arn:aws:iam::222222222222:user/alpha-user
      username: my-user
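
If you use eksctl, the same mapping can be added without hand-editing the ConfigMap. A sketch, assuming the Account B cluster is named eks-development:

eksctl create iamidentitymapping \
    --cluster eks-development \
    --region region-code \
    --arn arn:aws:iam::222222222222:role/deployer \
    --username deployer \
    --group system:masters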

Follow the same procedure in Account C as we have followed in Account B.

In Account A (where ArgoCD is installed), add the following configuration to the ArgoCD Helm chart values.

Note: The deployer IAM roles must be created first in Accounts B and C:

  • arn:aws:iam::222222222222:role/deployer
  • arn:aws:iam::333333333333:role/deployer

global:
  # Set the deployments' securityContext/fsGroup to 999 so that the user of the
  # docker image can use the IAM Authenticator. The IAM Authenticator mounts a
  # secret at /var/run/secrets/eks.amazonaws.com/serviceaccount/token; if the
  # correct fsGroup (999 corresponds to the argocd user) isn't set, this fails.
  securityContext:
    runAsGroup: 999
    fsGroup: 999

controller:
  serviceAccount:
    create: true
    name: argocd-application-controller
    annotations: {eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/argocd-manager} # Account A - IAM role service account
    automountServiceAccountToken: true

server:
  serviceAccount:
    create: true
    name: argocd-server
    annotations: {eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/argocd-manager} # Account A - IAM role service account
    automountServiceAccountToken: true

configs:
  # -- Provide one or multiple [external cluster credentials]
  # @default -- `[]` (See [values.yaml])
  ## Ref:
  ## - https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#clusters
  ## - https://argo-cd.readthedocs.io/en/stable/operator-manual/security/#external-cluster-credentials
  ## - https://argo-cd.readthedocs.io/en/stable/user-guide/projects/#project-scoped-repositories-and-clusters
  clusterCredentials:
    - name: development
      server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.abc.region.eks.amazonaws.com # EKS cluster API server endpoint of Account B
      config:
        awsAuthConfig:
          clusterName: eks-development
          roleARN: arn:aws:iam::222222222222:role/deployer # Deployer role arn of Account B
        tlsClientConfig:
          # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
          caData: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx........==" # EKS cluster certificate authority
    - name: staging
      server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.abc.region.eks.amazonaws.com # EKS cluster API server endpoint of Account C
      config:
        awsAuthConfig:
          clusterName: eks-staging
          roleARN: arn:aws:iam::333333333333:role/deployer # Deployer role arn of Account C
        tlsClientConfig:
          # Base64 encoded PEM-encoded bytes (typically read from a client certificate file).
          caData: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx........==" # EKS cluster certificate authority
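
With the values in place, deploy or upgrade ArgoCD using the community Helm chart; the release name and namespace below are assumptions:

helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argocd argo/argo-cd \
    --namespace argocd --create-namespace \
    --values values.yaml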

Obtain the certificate authority data of the respective cluster using the AWS CLI. Note that certificateAuthority.data is already base64-encoded, which is exactly what the caData field expects; pipe it through base64 -d (base64 -D on macOS) only if you want to inspect the decoded PEM certificate:

aws eks describe-cluster \
        --region=${AWS_DEFAULT_REGION} \
        --name=${CLUSTER_NAME} \
        --output=text \
        --query 'cluster.certificateAuthority.data'
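
Similarly, the cluster API server endpoint for the server field can be fetched with the same command:

aws eks describe-cluster \
        --region=${AWS_DEFAULT_REGION} \
        --name=${CLUSTER_NAME} \
        --output=text \
        --query 'cluster.endpoint'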

The important thing to note is that we need to set the deployments' securityContext/fsGroup to 999 so that the user of the Docker image can use the IAM Authenticator. This is required because the IAM Authenticator mounts a secret at /var/run/secrets/eks.amazonaws.com/serviceaccount/token; if the correct fsGroup (999 corresponds to the argocd user) isn't set, the mount will fail.
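
Once the chart is rolled out with these values, the external clusters should appear in ArgoCD. A quick way to verify, assuming the argocd CLI is logged in and the chart was installed in the argocd namespace:

# List the clusters registered with ArgoCD
argocd cluster list

# Or inspect the declaratively created cluster secrets directly
kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=cluster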
