Python-T Point

Posted on • Originally published at pythontpoint.in

🚀 argocd install kubernetes cluster aws — common mistakes and how to avoid them

💥 The First Time I Tried ArgoCD on EKS, I Broke Production at 2 AM

argocd install kubernetes cluster aws

Chai in hand. Midnight deploy. Everything was green.

📑 Table of Contents

  • 💥 The First Time I Tried ArgoCD on EKS, I Broke Production at 2 AM
  • ☁️ Prerequisites — What You Actually Need
  • 🚀 Installing ArgoCD — The Right Way
  • 📦 Using Helm to Deploy ArgoCD
  • 🔧 Exposing ArgoCD — Ingress or LoadBalancer?
  • 🔐 Securing Access — Because 'admin' Shouldn’t Be the Password
  • 🧠 First Sync — Your Cluster Meets Git
  • 🟩 Final Thoughts
  • ❓ Frequently Asked Questions
  • Can I use Terraform to install ArgoCD on EKS?
  • Is ArgoCD free to use on AWS EKS?
  • How do I upgrade ArgoCD after installing it on EKS?

And then — boom. Staging goes sideways. Deployments rolling back on their own. Pods evicting. Slack pinging like a war siren.

I remember the time I ran helm install argo-cd without reading the IAM implications. Rookie move. Thought it was just another kubectl apply fest. Spoiler: it’s not.

Yeah, I learned this the hard way — ArgoCD doesn’t just connect to your cluster. It needs permission to watch, modify, and reconcile. And if your IAM roles aren’t tight? Your cluster starts doing interpretive dance.

That night, I’d forgotten two things: proper IRSA setup and Ingress config. And because EKS uses control plane logging that’s… well, not exactly verbose — tracking the root cause took me till 2:37 AM. (Fun fact: the error trace had something about AccessDenied: User: anonymous — classic.)

But look — you’re here because you’re tired. Tired of copy-pasting kubectl commands. Tired of “who pushed what?” Slack threads. You want GitOps. You want ArgoCD. And you want it right on EKS — no fire drills.

Honestly? I’ve been there. Let me walk you through this the way I wish someone had shown me. No fluff. Just what works.


☁️ Prerequisites — What You Actually Need

Here’s the thing: GitOps fails before it starts if your foundation is shaky.

I once watched a junior I was mentoring spend six hours debugging ArgoCD sync issues — only to realize helm was pointing at a stale kubeconfig. The cluster name was prod-eu-west, but he was connected to dev-ap-south-1. (Hint: kubectl config current-context saves lives.)

So before you even think about helm install, double-check this list. Not theoretical. Real-talk essentials:

  • A running EKS cluster (1.24 or later — yes, version matters)
  • kubectl configured and pointing to your EKS cluster (run kubectl cluster-info — trust me)
  • Helm installed locally (v3+, and I’d recommend 3.12+)
  • AWS CLI with IAM perms — at least: eks:DescribeCluster, ec2:Describe*, iam:PassRole
  • Your kubeconfig synced: aws eks update-kubeconfig --name your-cluster --region ap-south-1
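That kubeconfig check is worth scripting before every session. Here’s a minimal sketch — the cluster names and the sample EKS context ARN below are placeholders, not anything from a real account — that fails loudly when your current context doesn’t mention the cluster you intend to touch:

```shell
#!/bin/sh
# Pre-flight: refuse to proceed if the kube context doesn't match the
# cluster you *think* you're on (the prod-eu-west vs dev-ap-south-1 trap).
context_matches() {
  # $1 = current kube context, $2 = expected cluster name
  case "$1" in
    *"$2"*) echo "ok: context looks right" ;;
    *)      echo "MISMATCH: you are pointed at $1" ;;
  esac
}

# Real usage would be:
#   context_matches "$(kubectl config current-context)" "prod-eu-west"
# Demo with a stand-in EKS context ARN:
context_matches "arn:aws:eks:ap-south-1:123456789012:cluster/dev-ap-south-1" "prod-eu-west"
```

Wire it into your shell profile or CI job and the six-hour debugging story above becomes a one-line error instead.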

And — don’t laugh — a namespace. Just one.

```shell
kubectl create namespace argocd
```

Why bother? Because I once deployed ArgoCD into default during a Friday release (rationale: “it’s just a test”). But then we added Helm hooks, sidecars, RBAC policies — and suddenly, kubectl get all returned 80 lines. Chaos. Namespace isolation isn’t optional. It’s breathing room.

Like this. Keep things clean.


🚀 Installing ArgoCD — The Right Way

Alright. Let’s do this.

Now, there are a million ways to install ArgoCD — raw YAML, Terraform (more on that later), Kustomize, even manual CRD implants (don’t). But what’s worked for me on real projects? Helm.

Why? Because Helm gives me version pinning, values overrides, and — critically — rollback. Last thing you want is to be kubectl apply-ing a broken operator and yelling “why is the repo server crashing?!”

📦 Using Helm to Deploy ArgoCD

First, add the official Argo Helm repo:

```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
```

Then — and this part matters — install with CRDs. Not as a separate step. Inline. Because nothing sucks more than hitting a CustomResourceDefinition not found during sync.

```shell
helm upgrade --install argocd argo/argo-cd \
  --namespace argocd \
  --version 5.10.0 \
  --set crds.install=true
```

Let’s break that down — it’s not just syntax.

  • upgrade --install: Makes it idempotent. Safe for CI. No “is it there?” checks needed.
  • --version 5.10.0: Pin your version. Don’t use latest — that’s how you wake up to broken CRDs after a weekend.
  • crds.install=true: Ensures the Application, ApplicationSet, and AppProject CRDs exist before ArgoCD tries using them. (In chart 5.x this replaced the older installCRDs flag and defaults to true — being explicit documents intent.)
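Once you outgrow --set flags, move the overrides into a values file. This is a minimal sketch — the field names come from the argo-cd 5.x chart, but treat them as assumptions and verify against helm show values argo/argo-cd for your pinned version:

```yaml
# values.yaml — illustrative overrides for a PoC, not a production profile
server:
  extraArgs:
    - --insecure          # let the ALB terminate TLS instead of argocd-server
redis-ha:
  enabled: false          # a single Redis is fine for a PoC
controller:
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
```

Pass it with helm upgrade --install argocd argo/argo-cd -n argocd -f values.yaml, and commit the file to Git so the install itself is reproducible.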

So now — step back. Grab coffee. Wait 90 seconds.

Then check the pods:

```shell
kubectl get pods -n argocd
```

You’ll see something like this:

```
NAME                                     READY   STATUS    RESTARTS   AGE
argocd-application-controller-0          1/1     Running   0          90s
argocd-dex-server-7c89d9665c-2x9z1       1/1     Running   0          90s
argocd-redis-5d4df8f4b-8s2k3             1/1     Running   0          90s
argocd-repo-server-84f8f5b59-2p3l1       1/1     Running   0          90s
argocd-server-7f6f8d9c84-k2m3n           1/1     Running   0          90s
```

All green? Good.

Any CrashLoopBackOff? Probably a permission issue. Check the logs: kubectl logs -n argocd <pod-name> (grab the pod name from the output above). 90% of the time, it’s either RBAC or IRSA.

🔧 Exposing ArgoCD — Ingress or LoadBalancer?

ArgoCD installs by default with a ClusterIP service. Which means — no outside access.

Now you’ve got two paths:

  1. LoadBalancer : Fast. Gets you up in 60 seconds. Perfect for PoCs.
  2. Ingress : Cleaner for production. Pairs with ALB, supports TLS, integrates with your domain.
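When you do take the Ingress path, a sketch with the AWS Load Balancer Controller looks roughly like this. The host, annotations, and class name are assumptions for illustration — it presumes the controller is already installed in your cluster, and you’ll want an ACM certificate annotation for real TLS:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server
  namespace: argocd
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/backend-protocol: HTTPS   # argocd-server serves TLS itself by default
spec:
  ingressClassName: alb
  rules:
    - host: argocd.example.com     # placeholder domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 443
```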

For now, let’s go quick. We’ll patch the service — this is what I do on day one of a POC.

```shell
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
```

Wait a sec. Then:

```shell
kubectl get svc argocd-server -n argocd
```

You’ll get output like:

```
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)
argocd-server   LoadBalancer   10.100.20.150   54.123.45.67    80:30014/TCP,443:30015/TCP
```

Bingo. That EXTERNAL-IP? That’s your dashboard. Open it in the browser — but don’t log in yet.

"GitOps isn’t about tools — it’s about making your cluster a reflection of your repo. ArgoCD is just the mirror."


🔐 Securing Access — Because 'admin' Shouldn’t Be the Password

Here’s a fun detail: older ArgoCD releases (pre-v2.0) set the default admin password to the pod name of argocd-server. Yes — if your pod was named argocd-server-abc123, the password was abc123. I’m not making this up. (This feels like leaving your bank PIN on a sticky note — but hey, it worked for bootstrapping.) Since v2.0, a random initial password is generated and stored in the argocd-initial-admin-secret Secret instead.

So first — get that initial password:

```shell
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
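If you’re curious what that one-liner is doing: Secrets store values base64-encoded, the jsonpath pulls out .data.password, and base64 -d turns it back into plaintext. Same thing with a stand-in value (the encoded string below is made up — your real one comes out of the Secret):

```shell
# A made-up example of what .data.password looks like inside the Secret
encoded="czNjcmV0cGFzcw=="
# Decode it exactly the way the kubectl pipeline does
printf '%s' "$encoded" | base64 -d
echo
```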

Now, log in via CLI:

```shell
argocd login 54.123.45.67 --username admin --password <password>
```

And then — lock it down. Fast.

Because on one project (team of 8, microservices everywhere), a dev left the dashboard open during lunch. Another junior — curious — clicked “Delete” on the prod-orders-app Application. Poof. Sync broke. Chaos.

So in production, do this:

  • Set up OIDC — AWS Cognito, Google Workspace, GitHub Auth — anything that ties to your SSO
  • Rotate the default password ASAP. Use argocd account update-password.
  • Use ArgoCD RBAC to restrict access — devs don’t need cluster-admin, okay?
  • Enable TLS. Even if you’re behind an ALB. (Yes, I’ve seen MITM attacks in internal VPCs — scary.)
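For the RBAC bullet above: restrictions live in the argocd-rbac-cm ConfigMap. A sketch — the role and group names here are invented for illustration, and the group has to come from whatever SSO provider you wired up:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly   # everyone else: look, don't touch
  policy.csv: |
    # devs can view and sync applications, but not delete them
    p, role:dev, applications, get,  */*, allow
    p, role:dev, applications, sync, */*, allow
    g, dev-team, role:dev
```

Commit this file to Git with the rest of your ArgoCD config — it’s exactly the kind of thing you want rebuilt automatically if the instance dies.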

Pro tip: commit your ArgoCD config (including RBAC policies) to Git. Because if your ArgoCD instance dies — you want it to rebuild itself. Just like everything else.

(Like this — full circle GitOps. Kinda beautiful, right?)


🧠 First Sync — Your Cluster Meets Git

Alright — time for the magic.

ArgoCD’s entire reason to exist? Sync your cluster state with a Git repo.

So let’s test it with something simple — the classic guestbook.

Make a file: guestbook-app.yaml

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: 'https://github.com/argoproj/argocd-example-apps.git'
    path: guestbook
    targetRevision: HEAD
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Apply it:

```shell
kubectl apply -f guestbook-app.yaml
```

Now go to https://54.123.45.67 — log in — and watch the show.

Within seconds: ArgoCD spins up a guestbook namespace. Deploys Redis. Fires up the frontend. All from a Git repo.

Now — delete the frontend deployment manually:

```shell
kubectl delete deploy guestbook-ui -n guestbook
```

Wait 30 seconds. Refresh the ArgoCD UI.

Boom — it’s back.

That’s self-healing. That’s GitOps.

This is where the real power of an ArgoCD install on a Kubernetes cluster in AWS clicks. Not just deployment — state enforcement. No more drift. No more “works on my machine”. The repo is the source of truth.

And if someone messes with prod? ArgoCD notices. And fixes it.

Peace of mind — automated.


🟩 Final Thoughts

I used to think GitOps was overengineering. Especially for small teams.

Then I joined a 4-person startup. Three environments. Five services. Manual kubectl apply on merge. Rollbacks? “Let’s dig through Slack and pray.”

ArgoCD changed that. In two weeks, we had everything — infra, services, configs — versioned in Git. Sync automated. Self-healing enabled.

Was that first ArgoCD install on an AWS Kubernetes cluster a pain? A little. But six months in? We haven’t had a single “who changed what?” incident.

So yeah — it’s not just for enterprises. It’s for anyone who’s ever panicked at 2 AM because a config got overwritten.

And honestly? That post-midnight chai tastes way better when you’re not debugging your own mistakes.


❓ Frequently Asked Questions

Can I use Terraform to install ArgoCD on EKS?

Yes, absolutely. I use the Helm provider in Terraform on my current project — cluster bootstrapping, ArgoCD install, and even syncing the first app — all in one terraform apply. Fully version-controlled. No manual steps.
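A sketch of that helm_release resource — the names, pinned version, and values file path are illustrative, and you’d still need to configure the helm provider against your EKS cluster’s endpoint and auth:

```hcl
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  version          = "5.10.0"
  namespace        = "argocd"
  create_namespace = true

  # Equivalent of `-f values.yaml` on the CLI
  values = [file("${path.module}/argocd-values.yaml")]
}
```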

Is ArgoCD free to use on AWS EKS?

Yep — completely open-source. No licensing. You only pay for the AWS resources: nodes, ALB, storage. The ArgoCD control plane runs on your EKS cluster — so it’s already in your bill.

How do I upgrade ArgoCD after installing it on EKS?

Stick to Helm: helm upgrade argocd argo/argo-cd --namespace argocd --version [new-version]. But — and this is important — always test in staging. I’ve seen CRD changes break sync controllers. (On one project, v2.7 to v2.8 nuked our app projects — thank god for backups.)
