GitOps with ArgoCD on Amazon EKS using Terraform: A Complete Implementation Guide

In the rapidly evolving world of DevOps and cloud-native applications, GitOps has emerged as a revolutionary approach to continuous deployment and infrastructure management. This blog post explores how to implement a complete GitOps workflow using ArgoCD on Amazon Elastic Kubernetes Service (EKS), providing you with a production-ready setup that follows industry best practices.

GitOps represents a paradigm shift where Git repositories serve as the single source of truth for both application code and infrastructure configuration. By leveraging ArgoCD as our GitOps operator, we can achieve automated, reliable, and auditable deployments while maintaining the declarative nature of Kubernetes.

What is GitOps?

GitOps is a modern approach to continuous deployment that uses Git as the single source of truth for declarative infrastructure and applications. The core principles include:

  • Declarative Configuration: Everything is described declaratively in Git (see the example below)
  • Version Control: All changes are tracked and auditable
  • Automated Deployment: Changes in Git trigger automatic deployments
  • Continuous Monitoring: The system continuously ensures the desired state matches the actual state
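
To make the first principle concrete, here is a minimal sketch of the kind of manifest a GitOps repository would hold as desired state; the name and image are illustrative:

# example/deployment.yaml (illustrative desired state tracked in Git)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: nginx:1.27
          ports:
            - containerPort: 80

Changing replicas here via a Git commit, rather than with kubectl, is the GitOps workflow in miniature: the operator notices the new commit and reconciles the cluster to match.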

Why ArgoCD?

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that offers:

  • Application Management: Centralized management of multiple applications
  • Multi-Cluster Support: Deploy to multiple Kubernetes clusters
  • Rich UI: Intuitive web interface for monitoring deployments
  • RBAC Integration: Fine-grained access control
  • Rollback Capabilities: Easy rollback to previous versions (illustrated below)
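
As a quick illustration of the last capability, rollbacks can be driven from the argocd CLI (the application name here is a placeholder):

# list the deployment history of an application
argocd app history my-app

# roll back to a prior revision by its history ID
# (automated sync must be disabled on the app first)
argocd app rollback my-app 5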

Architecture Overview

Our implementation creates a robust, scalable architecture that includes:

┌──────────────────────────────────────────────────────────────┐
│                        AWS Cloud                             │
│  ┌──────────────────────────────────────────────────────────┐│
│  │                    VPC                                   ││
│  │  ┌─────────────────┐    ┌──────────────────────────────┐ ││
│  │  │  Public Subnets │    │      Private Subnets         │ ││
│  │  │                 │    │  ┌─────────────────────────┐ │ ││
│  │  │  ┌───────────┐  │    │  │      EKS Cluster        │ │ ││
│  │  │  │    NAT    │  │    │  │  ┌─────────────────────┐│ │ ││
│  │  │  │  Gateway  │  │    │  │  │   NGINX Ingress     ││ │ ││
│  │  │  └───────────┘  │    │  │  │    Controller       ││ │ ││
│  │  │                 │    │  │  └─────────────────────┘│ │ ││
│  │  └─────────────────┘    │  │  ┌─────────────────────┐│ │ ││
│  │                         │  │  │     ArgoCD          ││ │ ││
│  │                         │  │  │     Server          ││ │ ││
│  │                         │  │  └─────────────────────┘│ │ ││
│  │                         │  │  ┌─────────────────────┐│ │ ││
│  │                         │  │  │   Application       ││ │ ││
│  │                         │  │  │   Workloads         ││ │ ││
│  │                         │  │  └─────────────────────┘│ │ ││
│  │                         │  └─────────────────────────┘ │ ││
│  │                         └──────────────────────────────┘ ││
│  └──────────────────────────────────────────────────────────┘│
│                                                              │
│  ┌──────────────────────────────────────────────────────────┐│
│  │                   Route53                                ││
│  │  argocd.chinmayto.com → NGINX Ingress NLB                ││
│  │  app.chinmayto.com → NGINX Ingress NLB                   ││
│  └──────────────────────────────────────────────────────────┘│
└──────────────────────────────────────────────────────────────┘

Prerequisites

Before starting, ensure you have:

  • AWS CLI configured with appropriate permissions
  • Terraform installed (version >= 1.0)
  • kubectl installed
  • A registered domain name in Route53
  • Helm installed (version >= 3.0)
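
A quick sanity check that the tools above are installed and meet the minimum versions:

# verify tool versions before proceeding
aws --version
terraform version
kubectl version --client
helm version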

Implementation Steps

Step 1: Create VPC and EKS Cluster

We start by creating the foundational infrastructure using the community-maintained terraform-aws-modules for VPC and EKS. First, let's define our variables:

# infrastructure/variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
  default     = "CT-EKS-Cluster"
}

variable "cluster_version" {
  description = "Kubernetes version for the EKS cluster"
  type        = string
  default     = "1.33"
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidrs" {
  description = "CIDR blocks for public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for private subnets"
  type        = list(string)
  default     = ["10.0.10.0/24", "10.0.20.0/24"]
}

Now, let's create the VPC and EKS cluster:

# infrastructure/main.tf
####################################################################################
# Data source for availability zones
####################################################################################
data "aws_availability_zones" "available" {
  state = "available"
}

####################################################################################
### VPC Module Configuration
####################################################################################
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${var.cluster_name}-VPC"
  cidr = var.vpc_cidr

  azs             = slice(data.aws_availability_zones.available.names, 0, 2)
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway = true
  enable_vpn_gateway = false
  single_nat_gateway = true

  enable_dns_hostnames = true
  enable_dns_support   = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }

  tags = {
    Name      = "${var.cluster_name}-VPC"
    Terraform = "true"
  }
}

####################################################################################
###  EKS Cluster Module Configuration
####################################################################################
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  vpc_id                                   = module.vpc.vpc_id
  subnet_ids                               = module.vpc.private_subnets
  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  # EKS Managed Node Groups
  eks_managed_node_groups = {
    EKS_Node_Group = {
      min_size     = 1
      max_size     = 3
      desired_size = 2

      instance_types = ["t3.medium"]
      capacity_type  = "ON_DEMAND"

      subnet_ids = module.vpc.private_subnets
    }
  }

  # EKS Add-ons
  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    eks-pod-identity-agent = {
      most_recent = true
    }
  }

  tags = {
    Name      = var.cluster_name
    Terraform = "true"
  }
}

####################################################################################
###  Null Resource to update the kubeconfig file
####################################################################################
resource "null_resource" "update_kubeconfig" {
  provisioner "local-exec" {
    command = "aws eks --region ${var.aws_region} update-kubeconfig --name ${var.cluster_name}"
  }

  depends_on = [module.eks]
}

Step 2: Deploy NGINX Ingress Controller

The NGINX Ingress Controller provides external access to our services through an AWS Network Load Balancer:

# infrastructure/nginx-ingress.tf
################################################################################
# Create ingress-nginx namespace
################################################################################
resource "kubernetes_namespace" "ingress_nginx" {
  metadata {
    name = "ingress-nginx"
    labels = {
      name = "ingress-nginx"
    }
  }
  depends_on = [module.eks]
}

################################################################################
# Install NGINX Ingress Controller using Helm
################################################################################
resource "helm_release" "nginx_ingress" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = kubernetes_namespace.ingress_nginx.metadata[0].name
  version    = "4.8.3"

  values = [
    yamlencode({
      controller = {
        service = {
          type = "LoadBalancer"
          annotations = {
            "service.beta.kubernetes.io/aws-load-balancer-type"                              = "nlb"
            "service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled" = "true"
          }
        }
        metrics = {
          enabled = true
          serviceMonitor = {
            enabled = false
          }
        }
      }
    })
  ]

  depends_on = [kubernetes_namespace.ingress_nginx]
}

################################################################################
# Get the NLB hostname from nginx ingress controller
################################################################################
data "kubernetes_service" "nginx_ingress_controller" {
  metadata {
    name      = "ingress-nginx-controller"
    namespace = kubernetes_namespace.ingress_nginx.metadata[0].name
  }
  depends_on = [helm_release.nginx_ingress]
}
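
Once the chart is installed, AWS provisions an NLB for the controller Service; you can confirm the hostname with:

# the EXTERNAL-IP column shows the NLB DNS name once provisioning completes
kubectl get svc ingress-nginx-controller -n ingress-nginx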

Step 3: Deploy ArgoCD Server

First, let's define the ArgoCD-specific variables:

# infrastructure/variables.tf (ArgoCD Variables)
variable "argocd_namespace" {
  description = "Kubernetes namespace for ArgoCD"
  type        = string
  default     = "argocd"
}

variable "argocd_chart_version" {
  description = "ArgoCD Helm chart version"
  type        = string
  default     = "5.51.6"
}

variable "argocd_hostname" {
  description = "Hostname for ArgoCD ingress"
  type        = string
  default     = "argocd.chinmayto.com"
}

variable "argocd_admin_password" {
  description = "Custom admin password for ArgoCD (leave empty for auto-generated)"
  type        = string
  default     = ""
  sensitive   = true
}

variable "domain_name" {
  description = "Domain name for the hosted zone"
  type        = string
  default     = "chinmayto.com"
}

variable "argocd_subdomain" {
  description = "Subdomain for ArgoCD"
  type        = string
  default     = "argocd"
}

Now, deploy ArgoCD using Helm with custom configuration for ingress access:

# infrastructure/argocd.tf
####################################################################################
### Route53 Hosted Zone
####################################################################################
data "aws_route53_zone" "main" {
  name         = var.domain_name
  private_zone = false
}

####################################################################################
### ArgoCD Namespace
####################################################################################
resource "kubernetes_namespace" "argocd" {
  metadata {
    name = var.argocd_namespace
  }

  depends_on = [module.eks]
}

####################################################################################
### ArgoCD Helm Release
####################################################################################
resource "helm_release" "argocd" {
  name       = "argocd"
  repository = "https://argoproj.github.io/argo-helm"
  chart      = "argo-cd"
  version    = var.argocd_chart_version
  namespace  = kubernetes_namespace.argocd.metadata[0].name

  values = [
    yamlencode({
      server = {
        service = {
          type = "ClusterIP"
        }
        ingress = {
          enabled          = true
          ingressClassName = "nginx"
          hosts            = [var.argocd_hostname]
          annotations = {
            "nginx.ingress.kubernetes.io/ssl-redirect"       = "false"
            "nginx.ingress.kubernetes.io/force-ssl-redirect" = "false"
            "nginx.ingress.kubernetes.io/backend-protocol"   = "HTTP"
          }
        }
      }
      configs = {
        params = {
          "server.insecure" = true
        }
      }
    })
  ]

  depends_on = [kubernetes_namespace.argocd]
}

####################################################################################
### ArgoCD Admin Password Secret (Optional - for custom password)
####################################################################################
# Note: ArgoCD authenticates against the bcrypt hash stored in the argocd-secret
# Secret (key "admin.password"); argocd-initial-admin-secret only holds the
# auto-generated initial password for retrieval. The resource below therefore
# overwrites the retrievable secret but does not change the effective login
# credential -- see the Helm-values sketch after this block for an alternative.
resource "kubernetes_secret" "argocd_admin_password" {
  count = var.argocd_admin_password != "" ? 1 : 0

  metadata {
    name      = "argocd-initial-admin-secret"
    namespace = kubernetes_namespace.argocd.metadata[0].name
  }

  data = {
    password = bcrypt(var.argocd_admin_password)
  }

  depends_on = [kubernetes_namespace.argocd]
}
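
Because ArgoCD reads the admin hash from argocd-secret rather than argocd-initial-admin-secret, the argo-cd chart's own value is the more reliable route for a custom password. A sketch of the fragment you would merge into the yamlencode block of the helm_release above (note that bcrypt() re-salts on every plan, so expect a persistent diff unless you supply a pre-computed hash):

# fragment to merge into the helm_release "argocd" values above (sketch)
configs = {
  params = {
    "server.insecure" = true
  }
  secret = {
    # argocdServerAdminPassword expects a bcrypt hash
    argocdServerAdminPassword = bcrypt(var.argocd_admin_password)
  }
}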

Step 4: Configure Route53 DNS

Set up DNS records to access ArgoCD via a custom subdomain:

# infrastructure/argocd.tf (continued)
####################################################################################
### Route53 DNS Record for ArgoCD (pointing to NGINX Ingress NLB)
####################################################################################
resource "aws_route53_record" "argocd" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = var.argocd_subdomain
  type    = "A"

  alias {
    name                   = data.kubernetes_service.nginx_ingress_controller.status[0].load_balancer[0].ingress[0].hostname
    zone_id                = "Z26RNL4JYFTOTI" # NLB zone ID for us-east-1
    evaluate_target_health = true
  }

  depends_on = [helm_release.argocd, data.kubernetes_service.nginx_ingress_controller]
}
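
Hardcoding the NLB hosted zone ID works but is region-specific; the AWS provider exposes a data source that resolves it for the current region, sketched below:

# infrastructure/argocd.tf (alternative to the hardcoded zone ID)
data "aws_lb_hosted_zone_id" "nlb" {
  load_balancer_type = "network"
}

# then, in the alias blocks:
#   zone_id = data.aws_lb_hosted_zone_id.nlb.id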

For application DNS records, create a separate file:

# infrastructure/app-dns.tf
####################################################################################
### Route53 DNS Record for Node.js App (pointing to NGINX Ingress NLB)
####################################################################################
resource "aws_route53_record" "nodejs_app" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = var.app_subdomain
  type    = "A"

  alias {
    name                   = data.kubernetes_service.nginx_ingress_controller.status[0].load_balancer[0].ingress[0].hostname
    zone_id                = "Z26RNL4JYFTOTI" # NLB zone ID for us-east-1
    evaluate_target_health = true
  }

  depends_on = [helm_release.nginx_ingress]
}
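
Note that this file references var.app_subdomain, which wasn't declared with the earlier variables; a matching declaration (the default of "app" is an assumption based on the architecture diagram) would be:

# infrastructure/variables.tf (App Variables)
variable "app_subdomain" {
  description = "Subdomain for the Node.js application"
  type        = string
  default     = "app"
}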

Step 5: Create ArgoCD Project

Define an ArgoCD project to manage application deployments:

# argocd/project.yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: chinmayto-apps
  namespace: argocd
spec:
  description: Project for chinmayto applications
  sourceRepos:
    - 'https://github.com/chinmayto/terraform-aws-eks-argocd.git'
  destinations:
    - namespace: '*'
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'

Step 6: Implement App-of-Apps Pattern

Create a root application that manages other applications:

# argocd/app-of-apps.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/chinmayto/terraform-aws-eks-argocd.git
    targetRevision: HEAD
    path: argocd
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Create individual application definitions:

# argocd/nodejs-app-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nodejs-app
  namespace: argocd
spec:
  project: chinmayto-apps
  source:
    repoURL: https://github.com/chinmayto/terraform-aws-eks-argocd.git
    targetRevision: HEAD
    path: k8s-manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: simple-nodejs-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
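
The k8s-manifests path referenced above isn't shown in this post; a minimal sketch of what it could contain (the image, command, and replica count are placeholders, and ArgoCD injects the destination namespace) might be:

# k8s-manifests/nodejs-app.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      containers:
        - name: nodejs-app
          image: node:20-alpine  # placeholder image
          command: ["node", "-e", "require('http').createServer((q,s)=>s.end('ok')).listen(3000)"]
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-app
spec:
  selector:
    app: nodejs-app
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-app
spec:
  ingressClassName: nginx
  rules:
    - host: app.chinmayto.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nodejs-app
                port:
                  number: 80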

Deployment Instructions

1. Deploy Infrastructure

cd infrastructure
terraform init
terraform plan
terraform apply -auto-approve

2. Verify Deployment

# Check EKS cluster
kubectl get nodes

# Check ArgoCD pods
kubectl get pods -n argocd

# Check NGINX Ingress
kubectl get svc -n ingress-nginx

# Get ArgoCD admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

3. Access ArgoCD UI

Navigate to https://argocd.yourdomain.com (substitute your own domain; without a TLS certificate on the ingress, expect a browser certificate warning) and log in with:

  • Username: admin
  • Password: (retrieved by the kubectl command in step 2 above)
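
If the DNS record hasn't propagated yet, port-forwarding to the ArgoCD server Service is a quick fallback; with server.insecure set above, the UI is served over plain HTTP at http://localhost:8080:

# temporary local access to the ArgoCD UI
kubectl port-forward svc/argocd-server -n argocd 8080:80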

4. Deploy Applications

Apply the ArgoCD application manifests:

kubectl apply -f argocd/project.yaml
kubectl apply -f argocd/app-of-apps.yaml
kubectl apply -f argocd/nodejs-app-application.yaml
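
After applying the manifests, confirm that ArgoCD has picked the applications up and reconciled them:

# sync status and health of all Applications
kubectl get applications -n argocd

# detailed view of a single application (requires argocd login first)
argocd app get nodejs-app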

Cleanup Steps

To avoid unnecessary AWS charges, clean up resources in the following order:

# 1. Delete ArgoCD applications first
kubectl delete applications --all -n argocd

# 2. Delete ArgoCD projects
kubectl delete appprojects --all -n argocd

# 3. Destroy Terraform infrastructure (run from the infrastructure directory)
cd infrastructure
terraform destroy -auto-approve


Conclusion

This implementation demonstrates a production-ready GitOps workflow using ArgoCD on Amazon EKS. By following this guide, you've created:

  • A secure, scalable Kubernetes cluster on AWS
  • Automated application deployment pipeline
  • Centralized application management through ArgoCD
  • DNS-based access to your GitOps platform

The GitOps approach with ArgoCD provides numerous benefits including improved deployment reliability, enhanced security through Git-based workflows, and simplified application lifecycle management. This foundation can be extended to support multiple environments, advanced deployment strategies, and comprehensive monitoring solutions.
