Argo CD and AWS CodeConnections: The Upside, the Redeploy Pain, and How I Fixed It
I run Argo CD on Amazon EKS using the managed Argo CD capability and AWS CodeConnections for Git. CodeConnections has been a clear win for day-to-day operations. Then I had to recreate the connection (new resource, new identity in the URL). Every Application went to Sync: Unknown until I updated URLs in two places—Git and the live cluster—and fixed ApplicationSets so they stopped writing the old URL back. This article covers, in order: why I still choose CodeConnections, what breaks on redeploy, and what I did when it inevitably happened.
1. Why CodeConnections is worth it for Argo CD on EKS
No SSH keys or personal tokens in the cluster. Argo pulls Git using IAM: the capability role is allowed to use the connection (`UseConnection`, `GetConnection`). You are not copying PATs into Secrets or rotating leaked keys because someone printed `kubectl get secret`.
One connection, many repos. The HTTPS URL includes your account, region, a connection UUID, and then owner/repo. Same connection, different path segment for each repository. Setup details and Terraform patterns are in Argo CD on EKS: Git repo access with AWS CodeConnections and Terraform.
Fits how enterprises already govern access. Connections are AWS resources; approval and auditing live next to the rest of your cloud controls.
That does not mean zero tradeoffs. Managed Argo CD often applies one Git credential broadly (for example every github.com fetch through Kustomize can inherit it). If that bites you, vendoring or URL strategy fixes it—see Why Your Kustomize Remote Bases Break on Managed Argo CD (and How to Fix It). The tradeoff this article focuses on is different: replacing the connection.
2. What actually hurts when you update or redeploy the connection
The Git URL Argo uses is not abstract. It embeds a connection UUID in the path. A new connection is a new UUID. Anything that still points at the old path keeps asking AWS to authorize the wrong resource, so repo fetch fails and sync never reconciles.
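For concreteness, the clone URL has roughly this shape (placeholders mine; check your own connection for the exact host and path):

```
https://codeconnections.eu-west-1.amazonaws.com/git-http/111122223333/eu-west-1/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/my-org/my-repo.git
```

Everything up to the UUID is fixed per connection; only the trailing `owner/repo` varies per repository. Recreate the connection and that one path segment changes, invalidating every stored copy of the URL.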
Pain point one — Git only is not enough. Your GitOps repo is the source of truth, but Kubernetes already has Application and ApplicationSet objects applied yesterday. Their `spec.source.repoURL` (and the generator URLs on sets) stays on the old string until something updates it. Pushing Git does not retroactively patch those CRs.
Pain point two — ApplicationSets fight you. Many sets declare `repoURL` twice: on the git generator and again in the template. If you patch child Applications but leave the set on the old URL, the controller reconciles and puts the old URL back on the apps.
Pain point three — Terraform layout. If CodeConnections and the EKS cluster share one tangled module and state, recreating the connection can feel like you are planning half the platform when you only wanted a new Git pipe. I now prefer the connection (and its IAM attachment) in something standalone, with outputs the cluster stack consumes—so the next rotation is a smaller blast radius.
What this is not. If Argo shows forbidden listing some API group (for example heavy CRD surfaces from controllers like ACK), that is usually cluster RBAC / EKS access policies for the Argo identity, not the CodeConnections URL. Fix that on its own; do not confuse it with a UUID swap.
Do not mass-delete Applications to “fix” a bad URL. Workloads may still be fine; you risk prune tearing down real resources. Fix the URLs.
3. Prerequisites
- `kubectl` against the cluster, with permission to read and patch `applications` and `applicationsets` in the Argo CD namespace (below I use `argocd`, the usual default)
- Rights to commit and push every Git repo that hardcodes the CodeConnections URL
- The new clone URL, or at least the new UUID, from the AWS Console or Terraform output
4. If it happens to you: what worked for me
These steps assume you already know the old and new connection UUID (search your shell history, Terraform state, or an old Application YAML). The host pattern is documented in the CodeConnections and EKS guides linked at the end.
Step A — Fix Git first, everywhere
Search across all repos that participate in GitOps—not only the “main” infra repo:
```shell
rg 'codeconnections\.' --glob '*.yaml' --glob '*.yml' --glob '*.tf' --glob '*.md'
```
Update bootstrap Application manifests, every ApplicationSet generator and template repoURL, any child Application checked in with a literal URL, and docs or scripts that build repository secrets. Commit and push.
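When the UUID appears in many files, a scripted swap beats hand-editing. A minimal sketch with GNU sed (the temp directory, demo file, and both UUIDs here are placeholders; point it at your real repos and review `git diff` before pushing):

```shell
set -eu
OLD_UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
NEW_UUID=ffffffff-eeee-dddd-cccc-bbbbbbbbbbbb

# Demo file standing in for a real Application manifest in your repo.
dir=$(mktemp -d)
cat > "$dir/app.yaml" <<EOF
spec:
  source:
    repoURL: https://codeconnections.eu-west-1.amazonaws.com/git-http/111122223333/eu-west-1/$OLD_UUID/my-org/my-repo
EOF

# -F: match the UUID literally; -l: list matching files only.
grep -rlF "$OLD_UUID" "$dir" | while read -r f; do
  sed -i "s/$OLD_UUID/$NEW_UUID/g" "$f"   # GNU sed; BSD sed needs -i ''
done

grep -c "$NEW_UUID" "$dir/app.yaml"   # prints 1
```

The full UUID is specific enough that a literal match will not touch unrelated strings.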
Step B — Patch Applications in the cluster
While things are broken, the cluster cannot always sync from Git, so you patch the live objects once. Set shell variables to your real values (namespace if not `argocd`, old UUID, new UUID):

```shell
NS=argocd
OLD_UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
NEW_UUID=ffffffff-eeee-dddd-cccc-bbbbbbbbbbbb
```
List apps whose single `spec.source.repoURL` still contains the old UUID:

```shell
kubectl get applications -n "$NS" -o json | jq -r --arg u "$OLD_UUID" \
  '.items[] | select(.spec.source.repoURL // "" | contains($u)) | .metadata.name'
```
For each name, the safest approach is to take the existing URL from the object, swap the UUID in the shell, then merge-patch:

```shell
app=my-app
oldurl=$(kubectl get application "$app" -n "$NS" -o jsonpath='{.spec.source.repoURL}')
newurl="${oldurl//$OLD_UUID/$NEW_UUID}"
kubectl patch application "$app" -n "$NS" --type merge -p "{\"spec\":{\"source\":{\"repoURL\":\"$newurl\"}}}"
```
If you use `spec.sources` (multi-source), repeat the idea per entry that points at CodeConnections.
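For multi-source apps, the same substitution can run over the whole `spec.sources` array with jq before replacing the object. A sketch against a stand-in for what `kubectl get application -o json` returns (the JSON below is invented for the demo and assumes every entry has a `repoURL`):

```shell
set -eu
OLD_UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
NEW_UUID=ffffffff-eeee-dddd-cccc-bbbbbbbbbbbb

# Stand-in for: kubectl get application my-app -n "$NS" -o json
patched=$(jq --arg o "$OLD_UUID" --arg n "$NEW_UUID" \
  '.spec.sources |= map(.repoURL |= gsub($o; $n))' <<EOF
{"spec":{"sources":[
  {"repoURL":"https://codeconnections.eu-west-1.amazonaws.com/git-http/111122223333/eu-west-1/$OLD_UUID/my-org/my-repo"},
  {"repoURL":"https://github.com/my-org/other-repo"}
]}}
EOF
)
echo "$patched"
# Against a live cluster you would pipe the patched JSON into: kubectl replace -f -
```

Only entries containing the old UUID change; the github.com source passes through untouched.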
Step C — Patch ApplicationSets (do not skip this)
Confirm the old UUID still appears in `spec` (ignore `status` for a moment):

```shell
kubectl get applicationsets -n "$NS" -o yaml | rg "$OLD_UUID"
```
To replace the UUID everywhere under each set's JSON (review before running this in production; I exported one set first and eyeballed it):

```shell
for name in $(kubectl get applicationsets -n "$NS" -o json | jq -r --arg u "$OLD_UUID" \
    '.items[] | select(.. | strings? | contains($u)) | .metadata.name' | sort -u); do
  kubectl get applicationset "$name" -n "$NS" -o json | \
    jq --arg o "$OLD_UUID" --arg n "$NEW_UUID" \
      'del(.status) | walk(if type == "string" then gsub($o; $n) else . end)' | \
    kubectl replace -f -
done
```
The `walk` replaces every occurrence of the old UUID string in that JSON. Pick a UUID (or substring) unique to the connection so you do not hit unrelated fields.
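To see what that filter does without touching a cluster, you can run it over a trimmed stand-in for an ApplicationSet (all fields below are invented for the demo):

```shell
set -eu
OLD_UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
NEW_UUID=ffffffff-eeee-dddd-cccc-bbbbbbbbbbbb

# del(.status) drops controller-owned state; walk rewrites every string.
out=$(jq --arg o "$OLD_UUID" --arg n "$NEW_UUID" \
  'del(.status) | walk(if type == "string" then gsub($o; $n) else . end)' <<EOF
{
  "spec": {
    "generators": [{"git": {"repoURL": "https://example/git-http/$OLD_UUID/my-org/my-repo"}}],
    "template": {"spec": {"source": {"repoURL": "https://example/git-http/$OLD_UUID/my-org/my-repo"}}}
  },
  "status": {"conditions": [{"message": "stale $OLD_UUID"}]}
}
EOF
)
echo "$out"
```

Both the generator and the template URL come out rewritten, and the stale `status` block is gone, which is exactly why the loop above catches what per-Application patches miss.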
Step D — Refresh and converge
Hard refresh apps in the UI or CLI, then let GitOps catch up. If Git still cannot be fetched until one app is fixed, patch your bootstrap Application (the one that applies the rest of the tree) to the new URL first, then repeat the rest—classic chicken-and-egg.
5. Troubleshooting
Application stuck deleting

```shell
kubectl patch application my-app -n argocd -p '{"metadata":{"finalizers":null}}' --type=merge
```
I deleted an app and it came back
An ApplicationSet owns it (ownerReferences). Fix the set and Git; deleting the app alone will not stick.