Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. In this blog post we will see how to install, configure and manage ArgoCD on Amazon Elastic Kubernetes Service (Amazon EKS).
Prerequisites
- Installing and configuring AWS CLI
- eksctl
- Kubectl
- ArgoCD CLI
- Create a public hosted zone in Route 53. See tutorial
- Request a public certificate with AWS Certificate Manager. See tutorial
EKS configuration
We start by creating the EKS cluster.
export AWS_PROFILE=<AWS_PROFILE>
export AWS_REGION=eu-west-1
export EKS_CLUSTER_NAME=devops
export EKS_VERSION=1.19
eksctl create cluster \
--name $EKS_CLUSTER_NAME \
--version $EKS_VERSION \
--region $AWS_REGION \
--managed
ArgoCD installation
Before installing ArgoCD in Kubernetes, we need to authenticate on Amazon EKS:
aws eks --region $AWS_REGION update-kubeconfig --name $EKS_CLUSTER_NAME
Install ArgoCD using the official manifest:
kubectl create namespace argocd
kubectl config set-context --current --namespace=argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
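Before moving on, it is worth checking that everything came up. A small sketch (the deployment names below are taken from the stock install.yaml of this era; newer releases run the application controller as a StatefulSet, so adjust to your version):

```shell
# Wait until the core ArgoCD deployments have rolled out in the argocd
# namespace. Fails fast if any rollout does not finish within the timeout.
argocd_wait_ready() {
  local deploy
  for deploy in argocd-server argocd-repo-server argocd-dex-server argocd-redis; do
    kubectl -n argocd rollout status deployment "$deploy" --timeout=120s || return 1
  done
}
# Usage: argocd_wait_ready && kubectl get pods -n argocd
```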
ArgoCD configuration
Associate the public certificate that you created earlier to the ArgoCD server.
cat > argocd-server.patch.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<ACM_ARGOCD_ARN>"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - "<LOCAL_IP_RANGES>"
EOF
ACM_ARGOCD_ARN=<ACM_ARGOCD_ARN>
sed -i "s,<ACM_ARGOCD_ARN>,${ACM_ARGOCD_ARN},g; s/<LOCAL_IP_RANGES>/$(curl -s http://checkip.amazonaws.com/)\/32/g; " argocd-server.patch.yaml
kubectl patch svc argocd-server -p "$(cat argocd-server.patch.yaml)"
Create a Record Set in your hosted zone that you created earlier. The CNAME record points to the ingress hostname of the ArgoCD server.
PUBLIC_DNS_NAME=<PUBLIC_DNS_NAME>
R53_HOSTED_ZONE_ID=<R53_HOSTED_ZONE_ID>
cat > argocd-recordset.json << EOF
{
  "Changes": [{
    "Action": "CREATE",
    "ResourceRecordSet": {
      "Name": "argocd.${PUBLIC_DNS_NAME}.",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "$(kubectl get services argocd-server --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')" }]
    }
  }]
}
EOF
aws route53 change-resource-record-sets --hosted-zone-id $R53_HOSTED_ZONE_ID --change-batch file://argocd-recordset.json
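DNS changes are not instantaneous. If you want to block until Route 53 has propagated the record, a small helper like this (the function name is ours) can poll the change status returned by the command above:

```shell
# Poll a Route 53 change until it reaches INSYNC (usually under a minute).
# Pass the ChangeInfo.Id returned by change-resource-record-sets.
wait_route53_insync() {
  local change_id="$1" status
  while true; do
    status=$(aws route53 get-change --id "$change_id" \
      --query 'ChangeInfo.Status' --output text)
    [ "$status" = "INSYNC" ] && break
    sleep 10
  done
}
# Usage: wait_route53_insync /change/C0123456789EXAMPLE
```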
As the AWS Elastic Load Balancer terminates SSL, we need to run Argo CD in insecure (HTTP) mode by disabling TLS. Edit the argocd-server deployment to add the --insecure flag to the argocd-server command:
cat > argocd-deployment-server.patch.yaml << EOF
spec:
  template:
    spec:
      containers:
        - command:
            - argocd-server
            - --staticassets
            - /shared/app
            - --insecure
          name: argocd-server
EOF
kubectl patch deployment argocd-server -p "$(cat argocd-deployment-server.patch.yaml)"
Note that Amazon EKS creates a Classic Load Balancer by default. An AWS Application Load Balancer can also be used as the load balancer for both UI and gRPC traffic; see the ArgoCD Ingress Configuration documentation.
ArgoCD Web UI configuration
The initial admin password is autogenerated to be the pod name of the Argo CD API server. Let's change it.
Edit the argocd-secret secret and update the admin.password field with a new bcrypt hash. You can use a site like https://www.browserling.com/tools/bcrypt to generate a new hash.
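If you would rather not paste a password into a website, the hash can also be generated locally. This sketch relies on htpasswd from apache2-utils / httpd-tools; the $2y-to-$2a rewrite is needed because Argo CD expects the $2a bcrypt prefix:

```shell
# Generate a bcrypt hash suitable for the admin.password field.
# Requires htpasswd (apache2-utils on Debian, httpd-tools on RHEL).
gen_bcrypt() {
  htpasswd -nbBC 10 "" "$1" | tr -d ':\n' | sed 's/$2y/$2a/'
}
# Usage: BCRYPT_HASH=$(gen_bcrypt '<NEW_PASSWORD>')
```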
ARGOCD_ADDR="argocd.${PUBLIC_DNS_NAME}"
kubectl patch secret argocd-secret \
  -p '{"stringData": {
    "admin.password": "<BCRYPT_HASH>",
    "admin.passwordMtime": "'$(date +%FT%T%Z)'"
  }}'
Log in now using the username admin and the new password.
argocd login $ARGOCD_ADDR
The admin user is a superuser with unrestricted access to the system. ArgoCD recommends not using the admin account for daily work.
Let's add two new users, demo and ci:
- User demo will have read-only access to the Web UI,
- User ci will have write privileges and will be used to generate access tokens to execute argocd commands in CI / CD pipelines.
If you have a Git repository, you can specify it in the repositories attribute.
cat > argocd-configmap.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  repositories: |
    - url: <GIT_REPOSITORY_URL>
      passwordSecret:
        name: demo
        key: password
      usernameSecret:
        name: demo
        key: username
  admin.enabled: "true"
  accounts.demo.enabled: "true"
  accounts.demo: login
  accounts.ci.enabled: "true"
  accounts.ci: apiKey
EOF
GIT_USERNAME=<GIT_USERNAME>
GIT_TOKEN=<GIT_TOKEN>
kubectl create secret generic demo \
  --from-literal=username=$GIT_USERNAME \
  --from-literal=password=$GIT_TOKEN
Let's create the users:
sed -i "s,<GIT_REPOSITORY_URL>,$GIT_REPOSITORY_URL,g" argocd-configmap.yaml
kubectl apply -f argocd-configmap.yaml
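A quick way to confirm the new accounts were picked up is the argocd CLI, run after logging in as admin (this is a sketch; argocd account get takes the account name via --account):

```shell
# List local accounts and inspect the two new ones.
check_accounts() {
  argocd account list
  argocd account get --account demo
  argocd account get --account ci
}
# Usage (after argocd login): check_accounts
```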
Add a password to the demo user:
argocd account update-password --account demo --current-password "${ADMIN_PASSWORD}" --new-password "<DEMO_PASSWORD>"
We can now assign roles to the users:
- the demo user will have read-only access,
- the ci user will manage projects, repositories, clusters and applications.
cat > argocd-rbac-configmap.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-rbac-cm
    app.kubernetes.io/part-of: argocd
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:ci, applications, sync, *, allow
    p, role:ci, applications, update, *, allow
    p, role:ci, applications, override, *, allow
    p, role:ci, applications, create, *, allow
    p, role:ci, applications, get, *, allow
    p, role:ci, applications, list, *, allow
    p, role:ci, clusters, create, *, allow
    p, role:ci, clusters, get, *, allow
    p, role:ci, clusters, list, *, allow
    p, role:ci, projects, create, *, allow
    p, role:ci, projects, get, *, allow
    p, role:ci, projects, list, *, allow
    p, role:ci, repositories, create, *, allow
    p, role:ci, repositories, get, *, allow
    p, role:ci, repositories, list, *, allow
    g, ci, role:ci
EOF
kubectl apply -f argocd-rbac-configmap.yaml
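You can spot-check the policy with argocd account can-i, which evaluates RBAC for the currently logged-in account, so log in as the user you want to test first. A minimal sketch:

```shell
# After `argocd login "$ARGOCD_ADDR" --username demo`, writes should answer
# "no" and reads "yes" under the policy above; each call prints its verdict.
check_rbac() {
  argocd account can-i sync applications '*'
  argocd account can-i get applications '*'
}
# Usage: check_rbac
```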
CI / CD integration
To create a token using the ci user, you can run the command:
ARGOCD_TOKEN=$(argocd account generate-token --account ci)
The token can be stored in AWS Secrets Manager and used in a CI / CD pipeline:
aws secretsmanager create-secret --name argocd-token \
  --description "ArgoCD Token" \
  --secret-string "${ARGOCD_TOKEN}"
The following Gitlab example demonstrates the use of this token to create a cluster, a project, and synchronize an application in ArgoCD.
If you want to understand how an IAM role can be attached to a Gitlab runner, please refer to my previous post on Securing access to AWS IAM Roles from Gitlab CI
stages:
  - init
  - deploy

variables:
  KUBECTL_VERSION: 1.20.5
  ARGOCD_VERSION: 1.7.4
  ARGOCD_ADDR: argocd.example.com

before_script:
  # Get the ArgoCD token from Secrets Manager
  - export ARGOCD_TOKEN="$(aws secretsmanager get-secret-value --secret-id argocd-token --version-stage AWSCURRENT --query SecretString --output text)"
  # Install kubectl
  - curl -L "https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl" -o /usr/bin/kubectl
  - chmod +x /usr/bin/kubectl
  # Install argocd
  - curl -sSL -o /usr/local/bin/argocd "https://github.com/argoproj/argo-cd/releases/download/v${ARGOCD_VERSION}/argocd-linux-amd64"
  - chmod +x /usr/local/bin/argocd

init demo project 🔬:
  stage: init
  when: manual
  image:
    name: amazon/aws-cli
  script:
    - argocd cluster add $BUSINESS_K8S_CONTEXT --name business-cluster-dev --kubeconfig $KUBE_CONFIG --auth-token=${ARGOCD_TOKEN} --server ${ARGOCD_ADDR} || echo 'cluster already added'
  tags:
    - k8s-dev-runner
  only:
    - master

deploy demo project 🚀:
  stage: init
  when: manual
  image:
    name: amazon/aws-cli
  script:
    - sed -i "s,<KUBERNETES_CLUSTER_URL>,$BUSINESS_K8S_CLUSTER_URL,g;s,<GIT_REPOSITORY_URL>,$CI_PROJECT_URL.git,g" application.yaml
    # Connect to the aws eks devops cluster
    - aws eks update-kubeconfig --region $AWS_REGION --name $EKS_CLUSTER_NAME
    # Create the ArgoCD project
    - argocd proj create demo-dev -d $BUSINESS_K8S_CLUSTER_URL,app-dev -s $CI_PROJECT_URL.git --auth-token=${ARGOCD_TOKEN} --server ${ARGOCD_ADDR} || echo 'project already created'
    # Create the ArgoCD application
    - kubectl apply -n argocd -f application.yaml
  tags:
    - k8s-dev-runner
  only:
    - master

deploy demo app 🌐:
  stage: deploy
  image:
    name: amazon/aws-cli
  script:
    - cd envs/dev
    - argocd app sync demo-dev --auth-token=${ARGOCD_TOKEN} --server ${ARGOCD_ADDR}
  tags:
    - k8s-dev-runner
  only:
    - tags
And here is the configuration of the ArgoCD application:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-dev
  namespace: argocd
spec:
  project: demo-dev
  source:
    repoURL: <GIT_REPOSITORY_URL>
    targetRevision: HEAD
    path: envs/dev
  destination:
    server: <KUBERNETES_CLUSTER_URL>
    namespace: app-dev
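Once the application exists, the pipeline's sync step can be followed by a wait so the job fails fast on a bad rollout. A hedged sketch, assuming the token is exported as ARGOCD_TOKEN as in the pipeline above:

```shell
# Block until demo-dev reports Synced and Healthy, or fail after 300s.
# argocd app wait is part of the CLI; --health/--sync/--timeout are its flags.
wait_app_healthy() {
  argocd app wait demo-dev --health --sync --timeout 300 \
    --auth-token "${ARGOCD_TOKEN}" --server "${ARGOCD_ADDR}"
}
# Usage (in the deploy job): wait_app_healthy
```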
Before running such a pipeline, the Gitlab runner must have access to the argoproj.io API group. Create the RBAC resources:
cat - <<EOF | kubectl apply -f - --namespace "argocd"
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: $GITLAB_RUNNER_IAM_ROLE_NAME
  namespace: argocd
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["clusters", "projects", "applications", "repositories", "certificates", "accounts", "gpgkeys"]
    verbs: ["get", "create", "update", "delete", "sync", "override", "action"]
EOF
cat - <<EOF | kubectl apply -f - --namespace "argocd"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: $GITLAB_RUNNER_IAM_ROLE_NAME
  namespace: argocd
subjects:
  - kind: User
    name: $GITLAB_RUNNER_IAM_ROLE_NAME
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: $GITLAB_RUNNER_IAM_ROLE_NAME
  apiGroup: rbac.authorization.k8s.io
EOF
GITLAB_RUNNER_IAM_ROLE_NAME is the name of the IAM role linked to the Kubernetes service account attached to the runner.
Update the aws-auth configmap:
kubectl get configmap -n kube-system aws-auth -o yaml > aws-auth.yaml
Complete the config map with the following role:
mapRoles: |
  - rolearn: $GITLAB_RUNNER_IAM_ROLE_ARN
    username: $GITLAB_RUNNER_IAM_ROLE_NAME
Apply the change:
kubectl apply -f aws-auth.yaml
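kubectl can impersonate the mapped user to confirm the Role and RoleBinding behave as intended, without waiting for a pipeline run. A minimal sketch:

```shell
# Check a few verbs as the runner's mapped user; each call prints yes or no.
check_runner_access() {
  local user="$1"
  kubectl auth can-i create applications.argoproj.io -n argocd --as "$user"
  kubectl auth can-i get applications.argoproj.io -n argocd --as "$user"
}
# Usage: check_runner_access "$GITLAB_RUNNER_IAM_ROLE_NAME"
```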
That's it!
Conclusion
In this blog post we configured the ArgoCD server and the ArgoCD Web UI, and we ended up integrating ArgoCD into a CI / CD pipeline.
Hope you enjoyed reading this blog post.
If you have any questions or feedback, please feel free to leave a comment.
Thanks for reading!