Tristan Habert
ArgoCD on GKE Autopilot

In this article, I want to share my recent experience using GKE Autopilot to host a CD tool, ArgoCD. I will use Terraform to target the GKE cluster and deploy the ArgoCD Helm chart into it.

Here's what the architecture will look like:

*(diagram: cluster-schema)*

Prerequisites

  • Terraform v1.0.0+
  • gcloud cli authenticated
  • gcloud plugin gke-gcloud-auth-plugin
  • kubectl binary
  • A service account with the required permissions (following the principle of least privilege) and its access key
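With those in place, you can authenticate and point kubectl at the cluster. A quick sketch, assuming a key file named key.json and a cluster named autopilot-cluster-1 in europe-west1 (adjust names to your setup):

```shell
# Authenticate with the service account key (hypothetical file name)
gcloud auth activate-service-account --key-file=key.json

# Install the auth plugin if you haven't already
gcloud components install gke-gcloud-auth-plugin

# Fetch kubeconfig credentials for the Autopilot cluster
gcloud container clusters get-credentials autopilot-cluster-1 \
  --region europe-west1 \
  --project <your-project-id>
```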

Setup

First, we need to add the Terraform providers for Helm and Kubernetes. Create the file providers.tf and add these lines to tell Terraform which cluster the Kubernetes and Helm resources must apply to:

# providers.tf
provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}

provider "helm" {
  kubernetes = {
    host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
    token                  = data.google_client_config.default.access_token
    cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
  }
}
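These providers rely on the Google provider being configured as well, which the snippets above don't show. Here is a minimal sketch (the version constraints are illustrative; pin the versions you actually use — note the `kubernetes = { ... }` attribute syntax above corresponds to Helm provider 3.x):

```hcl
# versions.tf (illustrative)
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 3.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}
```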

As you can see, the provider configuration references data sources. So let's create the file data.tf and add them:

# data.tf
data "google_container_cluster" "my_cluster" {
  name     = "autopilot-cluster-1"
  location = var.region
  project  = var.project_id
}

data "google_client_config" "default" {}

The client config data source fetches a short-lived access token used to authenticate against your cluster.

Then we create the variables.tf.

# variables.tf
variable "project_id" {
  type = string
}

variable "region" {
  type = string
}
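These variables can be supplied through a terraform.tfvars file (the values below are illustrative):

```hcl
# terraform.tfvars (illustrative values)
project_id = "my-gcp-project"
region     = "europe-west1"
```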

Terraform state

Terraform is stateful: it stores the state of every resource it deploys. Each time you run terraform plan or terraform apply, it checks the state file to determine which resources need to be created or changed. There are multiple ways to store the state. In my case, I chose to store it on GitLab. It is easy to set up: you just need a GitLab personal access token with the api scope. Create a backend.tf file.

# backend.tf
terraform {
  backend "http" {
  }
}

When you run terraform init, you will need to pass the backend configuration. GitLab provides a template command:

export GITLAB_ACCESS_TOKEN=<YOUR-ACCESS-TOKEN>
export TF_STATE_NAME=argo-state
terraform init \
    -backend-config="address=https://gitlab.example.com/api/v4/projects/<id>/terraform/state/$TF_STATE_NAME" \
    -backend-config="lock_address=https://gitlab.example.com/api/v4/projects/<id>/terraform/state/$TF_STATE_NAME/lock" \
    -backend-config="unlock_address=https://gitlab.example.com/api/v4/projects/<id>/terraform/state/$TF_STATE_NAME/lock" \
    -backend-config="username=<your-user>" \
    -backend-config="password=$GITLAB_ACCESS_TOKEN" \
    -backend-config="lock_method=POST" \
    -backend-config="unlock_method=DELETE" \
    -backend-config="retry_wait_min=5"

Part 1: Helm chart

In the first part, we will focus on the ArgoCD deployment. I am going to use the official chart. Let's create our first file, main.tf, which will hold all the Kubernetes resources. Declare the following resource:

# main.tf
resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = kubernetes_namespace_v1.argocd_ns.metadata[0].name
  create_namespace = false
  version          = "9.4.6"

  values     = [file("values/argocd-values.yaml")]
  depends_on = [kubernetes_manifest.argocd_backend_config]
}

You can either set create_namespace = true or, as here, create the namespace with a Kubernetes resource like this:

resource "kubernetes_namespace_v1" "argocd_ns" {
  metadata {
    name = "argocd"
    labels = {
      "name"       = "argocd"
    }
  }
}

Now, let's create the values file. It contains the basic setup of the deployment. Inside, you can configure resource requests and limits, which is useful given that GKE Autopilot bills per requested pod resources. Also consider using Spot VMs, which can save 60-90% on compute costs.
Here is the values file I use:

# argocd-values.yaml
global:
  domain: argo.example.com
  nodeSelector:
    cloud.google.com/gke-spot: "true"
  tolerations:
    - key: "cloud.google.com/gke-spot"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
server:
  # Increase replicas for high availability
  replicas: 1
  service:
    type: NodePort
    annotations:
      cloud.google.com/backend-config: '{"default": "argocd-backend-config"}'

  # Resource limits prevent runaway memory consumption
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi

  # Enable metrics for Prometheus scraping
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true

controller:
  # Controller handles sync operations - scale for large deployments
  replicas: 1
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 512Mi

repoServer:
  # Repo server renders Helm templates - critical for performance
  replicas: 1
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 512Mi

configs:
  params:
    # Disable TLS for internal communication (use ingress for external TLS)
    server.insecure: true
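Once the release is up, you can sanity-check that the pods actually landed on Spot nodes (the node label below is the one GKE applies to Spot VMs):

```shell
# List ArgoCD pods and the nodes they were scheduled on
kubectl get pods -n argocd -o wide

# Confirm those nodes carry the Spot label
kubectl get nodes -l cloud.google.com/gke-spot=true
```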

Part 2: Networking

Now that our ArgoCD is ready, we need to expose the argocd-server service so it can be reached from outside the cluster. For that, we will create an Ingress that triggers the creation of an external Application Load Balancer (L7) by GKE. We also need to reserve a static external IP address for the ALB. To secure our ArgoCD, we let GCP provision an SSL certificate for our domain name using a ManagedCertificate.

Let's start with the ingress config:

# main.tf
resource "kubernetes_ingress_v1" "argo_ingress" {
  metadata {
    name      = "argocd-ingress"
    namespace = kubernetes_namespace_v1.argocd_ns.metadata[0].name
    annotations = {
      "kubernetes.io/ingress.class"                 = "gce"
      "kubernetes.io/ingress.global-static-ip-name" = "argocd-static-ip"
      "networking.gke.io/managed-certificates"      = "argo-cert"
      # Without this annotation the FrontendConfig (HTTP->HTTPS redirect) is never applied
      "networking.gke.io/v1beta1.FrontendConfig"    = "argocd-frontend-config"
      "cloud.google.com/backend-config"             = jsonencode({ "default" = "argocd-backend-config" })
    }
  }

  spec {
    rule {
      host = "argo.example.com"
      http {
        path {
          path      = "/"
          path_type = "Prefix"
          backend {
            service {
              name = "argocd-server" 
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}

I created a new file for the load balancer resources named lb-gke.tf. Inside, you reserve the IP address for the ALB. Then you create the BackendConfig the ALB needs to perform health checks on your service, as well as the FrontendConfig that automatically redirects HTTP requests to HTTPS. Lastly, you create a ManagedCertificate so that Google provisions an SSL certificate for your domain.

# lb-gke.tf
resource "google_compute_global_address" "argocd_static_ip" {
  name = "argocd-static-ip"
  project = var.project_id
}

resource "kubernetes_manifest" "argocd_backend_config" {
  manifest = {
    apiVersion = "cloud.google.com/v1"
    kind       = "BackendConfig"
    metadata = {
      name      = "argocd-backend-config"
      namespace = "argocd"
    }
    spec = {
      healthCheck = {
        checkIntervalSec = 30
        timeoutSec       = 5
        healthyThreshold = 1
        unhealthyThreshold = 2
        type             = "HTTP"
        requestPath      = "/healthz"
        port             = 8080
      }
    }
  }
  depends_on = [ kubernetes_namespace_v1.argocd_ns ]
}

resource "kubernetes_manifest" "argocd_frontend_config" {
  manifest = {
    apiVersion = "networking.gke.io/v1beta1"
    kind = "FrontendConfig"
    metadata = {
      name = "argocd-frontend-config"
      namespace = "argocd"
    }
    spec = {
      redirectToHttps = {
        enabled = true
      }
    }
  }
}

resource "kubernetes_manifest" "argocd_managed_cert" {
  manifest = {
    apiVersion = "networking.gke.io/v1"
    kind       = "ManagedCertificate"
    metadata = {
      name      = "argo-cert"
      namespace = kubernetes_namespace_v1.argocd_ns.metadata[0].name
    }
    spec = {
      domains = ["argo.example.com"]
    }
  }
}
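Note that certificate provisioning can take a while (often 15-60 minutes) once your DNS points at the ALB. You can watch progress with these commands (assuming the resource names above):

```shell
# Status should eventually move from Provisioning to Active
kubectl get managedcertificate argo-cert -n argocd

# Check that the Ingress picked up the reserved static IP
kubectl get ingress argocd-ingress -n argocd
```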

Part 3: Deploy your ArgoCD

We are done with the config. Now it is time to deploy!

Create the file outputs.tf and add these lines:

output "argocd_lb_ip" {
  value = google_compute_global_address.argocd_static_ip.address
}

Once Terraform has finished applying, it will output the reserved IP address for your ALB.

Then run terraform init (with the backend configuration shown earlier), followed by terraform plan to check that the right resources will be created. Finally, run terraform apply.
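In practice, the sequence looks like this (the secret name below is the chart's default for the initial admin password):

```shell
terraform init   # pass the -backend-config flags shown earlier
terraform plan   # review the planned resources
terraform apply

# Retrieve the initial admin password generated by ArgoCD
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```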

There you go! Point a DNS A record for your domain at the IP address from the output, and once the certificate turns Active you have your ArgoCD deployed.
