Patrice Gauthier

Importing kubernetes manifests with terraform for cert-manager

How to import many Kubernetes manifests with Terraform.

This tutorial focuses on creating Kubernetes resources for cert-manager on Google Cloud (GKE), though the same approach applies to AWS EKS.

Cert-manager needs its CRDs (Custom Resource Definitions) to be installed in K8S first, since they don't exist out of the box; Kubernetes will reject the cert-manager resources if you install the cert-manager helm chart without them.

This has to happen in two steps, but Terraform applies everything in one sweep. One way to solve this is to use Terragrunt to manage several Terraform folders, each with its own state. With this setup we can apply the CRD configuration first, then apply the cert-manager helm chart.
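The layout I'll assume for the rest of this article looks roughly like this (folder names match the examples below; adjust to your own conventions):

```
.
├── terragrunt.hcl       # root file: provider + remote state
├── 1-k8s-cluster/
│   ├── main.tf
│   └── terragrunt.hcl
├── 2-K8S-Crds/
│   ├── main.tf
│   └── terragrunt.hcl
└── cert-manager/        # applied separately, post-deployment
    └── main.tf
```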

Steps

The steps are as follows:

  1. Create the kubernetes cluster
  2. Apply CRDs for Cert-manager
  3. Install the helm release of your application (it should include the ingress, or at least have the ingress defined, before installing cert-manager)
  4. Install the cert-manager helm release

Create the cluster

Define and create your K8S cluster to be managed by Terraform in a folder. (ex: 1-k8s-cluster/main.tf)

terraform {
  backend "gcs" {}
}

variable "cluster_location" {
  type = string
}
variable "cluster_name" {
  type = string
}
variable "environment" {
  type = string
}

data "google_project" "project" {}

resource "google_artifact_registry_repository" "docker_registry" {
  repository_id = "us.gcr.io"
  format        = "DOCKER"
}

resource "google_container_cluster" "k8s_cluster" {
  name                     = var.cluster_name
  location                 = var.cluster_location
  initial_node_count       = 1
  enable_shielded_nodes    = false
  deletion_protection      = true
  remove_default_node_pool = true
  networking_mode          = "VPC_NATIVE"
}

resource "google_container_node_pool" "pool-1" {
  name               = "pool-1"
  location           = var.cluster_location
  initial_node_count = 1
  cluster            = google_container_cluster.k8s_cluster.name

  node_config {
    machine_type = "e2-standard-2"
    image_type   = "COS_CONTAINERD"
  }

  management {
    auto_repair  = true
    auto_upgrade = true
  }

  network_config {
    create_pod_range     = false
    enable_private_nodes = false
  }

  depends_on = [google_container_cluster.k8s_cluster]
}

output "cluster_name" {
  value = google_container_cluster.k8s_cluster.name
}

output "cluster_endpoint" {
  value = google_container_cluster.k8s_cluster.endpoint
}

output "cluster_ca_certificate" {
  value = google_container_cluster.k8s_cluster.master_auth[0].cluster_ca_certificate
}

Create a Terragrunt file in the same folder that refers to the root file:

include "root" {
  path = find_in_parent_folders()
}

Then define the root file (terragrunt.hcl) in the parent folder:

inputs = {
  envvars...
}

locals {
  project = "..."
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "google" {
  project = "${local.project}"
  region  = "us-east1"
}
  EOF
}

remote_state {
  backend = "gcs"
  config = {
    project              = local.project
    bucket               = "unique-bucket-name"
    prefix               = "${path_relative_to_include()}"
    location             = "us-east1"
    skip_bucket_creation = true
  }
}

The K8S CRDs folder

In one folder (ex: 2-K8S-Crds) you define the K8S manifests that must be applied before anything else.

In main.tf:

terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}

# The reference to the current project or an AWS account
data "google_client_config" "provider" {}

# The reference to the current cluster or EKS
data "google_container_cluster" "my_cluster" {
  name     = var.cluster_name
  location = var.cluster_location
}

# We configure the kubectl provider to use those values for authenticating
provider "kubectl" {
  host                   = "https://${data.google_container_cluster.my_cluster.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.my_cluster.master_auth[0].cluster_ca_certificate)
}

# Download the file containing the multiple manifests.
data "http" "cert_manager_crds" {
  url = "https://github.com/cert-manager/cert-manager/releases/download/v${var.cert_manager_version}/cert-manager.crds.yaml"
}

data "kubectl_file_documents" "cert_manager_crds" {
  content = data.http.cert_manager_crds.response_body
  lifecycle {
    precondition {
      condition     = 200 == data.http.cert_manager_crds.status_code
      error_message = "Status code invalid"
    }
  }
}

# Use for_each; otherwise this kubectl_manifest would only import the first manifest in the file.
resource "kubectl_manifest" "cert_manager_crds" {
  for_each  = data.kubectl_file_documents.cert_manager_crds.manifests
  yaml_body = each.value
}
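The main.tf above references a few variables that aren't shown. A minimal sketch of the matching declarations (e.g. in a variables.tf next to it; the names come from the code above, the default is only an example):

```hcl
variable "cluster_name" {
  type = string
}

variable "cluster_location" {
  type = string
}

# Version without the leading "v"; it is prefixed in the download URL.
variable "cert_manager_version" {
  type    = string
  default = "1.14.4" # example only; pick the version you target
}
```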

Define a Terragrunt file to declare the dependency on the cluster folder:

include "root" {
  path = find_in_parent_folders()
}

dependency "cluster" {
  config_path = "../1-k8s-cluster"
}
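The dependency block makes Terragrunt apply the cluster folder first. If you also want to reuse the cluster outputs instead of hard-coding values, you can pass them in as inputs. A sketch, assuming the outputs defined in 1-k8s-cluster above (the location value is an example, since the cluster folder doesn't output it):

```hcl
include "root" {
  path = find_in_parent_folders()
}

dependency "cluster" {
  config_path = "../1-k8s-cluster"
}

inputs = {
  cluster_name     = dependency.cluster.outputs.cluster_name
  cluster_location = "us-east1" # example; not an output of the cluster folder
}
```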

Deploy

Define the helm chart for your application. It may or may not include the ingress, but the ingress should exist before installing cert-manager. When cert-manager is deployed, it will see the annotations on the ingress and do its thing (make a certificate request, etc.).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: 'true'
    nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, PATCH, POST, DELETE, OPTIONS'
    cert-manager.io/cluster-issuer: 'letsencrypt-staging' 
    certmanager.k8s.io/acme-challenge-type: http01
spec:
  #tls:
  #  - hosts:
  #      - 'your-domain'
  #    secretName: app-backend-cert-tls
  rules:
    - host: 'your-domain'
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 8080

The cert-manager folder

In my case this folder lives in the same root folder as the other Terraform folders, but it doesn't have a terragrunt file: I want to install cert-manager after the application is deployed (this is for CI).
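The snippets below use a few names that are defined elsewhere in this folder. A minimal sketch of those supporting definitions, with placeholder values:

```hcl
terraform {
  required_providers {
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}

locals {
  cluster_name       = "my-cluster"      # placeholder
  cluster_location   = "us-east1"        # placeholder
  cert_contact_email = "you@example.com" # placeholder
}

variable "cert_manager_version" {
  type = string
}
```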

# Referenced by the providers below
data "google_client_config" "provider" {}

data "google_container_cluster" "cluster" {
  name     = local.cluster_name
  location = local.cluster_location
}

# Needed for inlining Kubernetes manifests as Terraform code (kubernetes_manifest resource).
provider "kubernetes" {
  host                   = "https://${data.google_container_cluster.cluster.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
}


provider "helm" {
  kubernetes {
    host                   = "https://${data.google_container_cluster.cluster.endpoint}"
    token                  = data.google_client_config.provider.access_token
    cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
  }
}

# In my case I use it to create the TLS secret with only the keys defined
provider "kubectl" {
  host                   = "https://${data.google_container_cluster.cluster.endpoint}"
  token                  = data.google_client_config.provider.access_token
  cluster_ca_certificate = base64decode(data.google_container_cluster.cluster.master_auth[0].cluster_ca_certificate)
  load_config_file       = false
}

resource "helm_release" "cert_manager" {
  name             = "cert-manager"
  namespace        = "cert-manager"
  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  version          = var.cert_manager_version
  create_namespace = true
}
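Since the CRDs were already applied in the 2-K8S-Crds folder, you may want to be explicit that the chart itself should not manage them. The jetstack chart exposes an installCRDs value for this (false is the chart default in the versions I've used; newer chart versions rename this option to crds.enabled). A fragment that would go inside the helm_release above:

```hcl
  # Inside the helm_release "cert_manager" resource:
  # the CRDs are managed by the 2-K8S-Crds folder, not by the chart.
  set {
    name  = "installCRDs"
    value = "false"
  }
```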

resource "kubernetes_manifest" "staging_cluster_issuer" {
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt-staging"
    }

    spec = {
      acme = {
        email = local.cert_contact_email
        privateKeySecretRef = {
          name = "letsencrypt-staging"
        }
        server = "https://acme-staging-v02.api.letsencrypt.org/directory"
        solvers = [
          {
            http01 = {
              ingress = {
                class = "nginx"
              }
            }
          }
        ]
      }
    }
  }
  depends_on = [helm_release.cert_manager]
}


// Ignore these fields so that when the issuer populates the secret, it doesn't trigger a diff
resource "kubectl_manifest" "app_backend_cert_tls" {
  apply_only    = true
  ignore_fields = ["data", "annotations"]
  yaml_body     = <<YAML
apiVersion: v1
kind: Secret
type: kubernetes.io/tls
metadata:
  name: app-backend-cert-tls
  namespace: my-app
data:
  tls.crt: ""
  tls.key: ""
YAML
}



resource "kubernetes_manifest" "prod_cluster_issuer" {
  manifest = {
    apiVersion = "cert-manager.io/v1"
    kind       = "ClusterIssuer"
    metadata = {
      name = "letsencrypt-prod"
    }

    spec = {
      acme = {
        email = local.cert_contact_email
        privateKeySecretRef = {
          name = "letsencrypt-prod"
        }
        server = "https://acme-v02.api.letsencrypt.org/directory"
        solvers = [
          {
            http01 = {
              ingress = {
                class = "nginx"
              }
            }
          }
        ]
      }
    }
  }
  depends_on = [kubernetes_manifest.staging_cluster_issuer]
}

Then, in your ingress, uncomment the tls section.
You should now be able to connect to your application. If you use a browser it will show a warning, because the staging certificate comes from an untrusted CA. Accept it and it works!

To get a real certificate, you now just need to change the annotation to:
cert-manager.io/cluster-issuer: 'letsencrypt-prod'

With this you now have a Kubernetes cluster running cert-manager, and changing its version is easy.
