Terraform stories.

SAP Kyma with dynamic OIDC credentials with HCP Terraform

HCP Terraform already supports dynamic credentials for the Kubernetes provider on the AWS and GCP platforms.

I have extended this support to SAP BTP, Kyma runtime clusters on SAP Business Technology Platform.

Let's see how...


1. Configure Kubernetes

Configure the HCP Terraform OIDC identity provider in the SAP Kyma cluster.

SAP Kyma supports Gardener's OIDC shoot extension (shoot-oidc-service), effectively allowing an arbitrary number of OIDC providers in a single shoot cluster.

The following has to be done upfront during the kyma cluster bootstrapping phase.

OpenIDConnect_HCP
locals {
  OpenIDConnect_HCP = jsonencode({
        "apiVersion": "authentication.gardener.cloud/v1alpha1",
        "kind": "OpenIDConnect",
        "metadata": {
            "name": "terraform-cloud"
        },
        "spec": {
            "issuerURL": "https://app.terraform.io",
            "clientID": "terraform-cloud",
            "usernameClaim": "sub",
            "usernamePrefix": "-",
            "groupsClaim": "terraform_organization_name",
            "groupsPrefix": ""
        }
  })
}
resource "terraform_data" "bootstrap-tfc-oidc" {
  triggers_replace = {
    always_run = "${timestamp()}"
  }

  # the input becomes a definition of an OpenIDConnect provider as a non-sensitive json encoded string 
  #
  input = [ 
      nonsensitive(local.OpenIDConnect_HCP) 
      ]

 provisioner "local-exec" {
   interpreter = ["/bin/bash", "-c"]
   command = <<EOF
     (
    KUBECONFIG=kubeconfig-headless.yaml
    NAMESPACE=quovadis-btp
    set -e -o pipefail ;\
    curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl
    chmod +x kubectl

    while ! ./kubectl get crd openidconnects.authentication.gardener.cloud --kubeconfig $KUBECONFIG; 
    do 
      echo "Waiting for OpenIDConnect CRD..."; sleep 1; 
    done
    ./kubectl wait --for condition=established crd openidconnects.authentication.gardener.cloud --timeout=480s --kubeconfig $KUBECONFIG
    crd=$(./kubectl get crd openidconnects.authentication.gardener.cloud --kubeconfig $KUBECONFIG -ojsonpath='{.metadata.name}' --ignore-not-found)
    if [ "$crd" = "openidconnects.authentication.gardener.cloud" ]
    then
      OpenIDConnect='${self.input[0]}'
      echo $(jq -r '.' <<< $OpenIDConnect)
      echo $OpenIDConnect

      echo | ./kubectl get nodes --kubeconfig $KUBECONFIG
      ./kubectl create ns $NAMESPACE --kubeconfig $KUBECONFIG --dry-run=client -o yaml | ./kubectl apply --kubeconfig $KUBECONFIG -f -
      ./kubectl label namespace $NAMESPACE istio-injection=enabled --kubeconfig $KUBECONFIG

      echo $OpenIDConnect | ./kubectl apply --kubeconfig $KUBECONFIG -n $NAMESPACE -f - 

    else
      echo $crd
    fi

     )
   EOF
 }
}

As a result, the OpenIDConnect CR defined above becomes available in your kyma cluster.

The OIDC identity resolves authentication requests to the Kubernetes API. However, it must first be authorised to interact with the cluster API.

In order to do so, one must bind custom cluster roles to the Terraform OIDC identity in the kyma cluster, using either "User" and/or "Group" subjects.

For OIDC identities coming from TFC (HCP Terraform), the role binding "User" value is formatted as follows:

organization:<MY-ORG-NAME>:project:<MY-PROJECT-NAME>:workspace:<MY-WORKSPACE-NAME>:run_phase:<plan|apply>.

I have opted for generating these RBAC identities in the initial kyma cluster Terraform configuration, adding both the plan and apply phase identities to the kyma runtime environment configuration as administrators (see the sketch below).

User identities
// https://developer.hashicorp.com/terraform/cloud-docs/run/run-environment#environment-variables
//
variable "TFC_WORKSPACE_NAME" {
  // HCP Terraform automatically injects the following environment variables for each run. 
  description = "The name of the workspace used in this run."
  type        = string
}

variable "TFC_PROJECT_NAME" {
  // HCP Terraform automatically injects the following environment variables for each run. 
  description = "The name of the project used in this run."
  type        = string
}

variable "TFC_WORKSPACE_SLUG" {
  // HCP Terraform automatically injects the following environment variables for each run. 
  description = "The slug consists of the organization name and workspace name, joined with a slash."
  type        = string
}

// organization:<MY-ORG-NAME>:project:<MY-PROJECT-NAME>:workspace:<MY-WORKSPACE-NAME>:run_phase:<plan|apply>.
locals {
  organization_name = split("/", var.TFC_WORKSPACE_SLUG)[0]
  user_plan = "organization:${local.organization_name}:project:${var.TFC_PROJECT_NAME}:workspace:${var.TFC_WORKSPACE_NAME}:run_phase:plan"
  user_apply = "organization:${local.organization_name}:project:${var.TFC_PROJECT_NAME}:workspace:${var.TFC_WORKSPACE_NAME}:run_phase:apply"
}
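For illustration, here is a minimal sketch of how these identities could be passed to the kyma environment instance as administrators. The resource name, subaccount variable, plan and region are assumptions and will differ in your configuration.

kyma environment instance (sketch)
resource "btp_subaccount_environment_instance" "kyma" {
  subaccount_id    = var.subaccount_id   # assumption: subaccount hosting the kyma runtime
  name             = var.cluster_name    # assumption: kyma instance name
  environment_type = "kyma"
  service_name     = "kymaruntime"
  plan_name        = "azure"             # assumption: provider plan
  parameters = jsonencode({
    name   = var.cluster_name
    region = "westeurope"
    # the HCP Terraform plan and apply run-phase identities become kyma cluster administrators
    administrators = [
      local.user_plan,
      local.user_apply
    ]
  })
}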

This way, as soon as the kyma runtime environment has been provisioned, the required identities are in place in the kyma cluster.

After the kyma cluster has been bootstrapped with the HCP Terraform OIDC provider in place, one can bind RBAC roles to groups.

Group identity
resource "kubernetes_cluster_role_binding_v1" "oidc_role" {
  //depends_on = [ <list of dependencies> ] 

  metadata {
    name = "terraform-identity-admin"
  }
  //
  // Groups are extracted from the token claim designated by 'groupsClaim' in the OpenIDConnect CR above ('terraform_organization_name')
  //
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = var.tfc_organization_name
    namespace = ""
  }  
}

Role bindings

2. Configure HCP Terraform

Required Environment Variables

HCP Terraform requires the following two environment variables to enable Kubernetes dynamic credentials:

Variable: TFC_KUBERNETES_PROVIDER_AUTH (or TFC_KUBERNETES_PROVIDER_AUTH[_TAG])
Value: true
Notes: Must be present and set to true, or HCP Terraform will not attempt to authenticate to Kubernetes.

Variable: TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE (or TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE[_TAG], TFC_DEFAULT_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE)
Value: The audience name in your cluster's OIDC configuration, such as kubernetes.

You can set these as workspace variables, or if you’d like to share one Kubernetes role across multiple workspaces, you can use a variable set.
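For instance, a minimal sketch managing such a variable set with the tfe provider could look like this. The variable set name is an assumption, and the audience value is assumed to match the clientID (terraform-cloud) of the OpenIDConnect CR configured above.

variable set (sketch)
resource "tfe_variable_set" "kubernetes_dynamic_credentials" {
  name         = "kubernetes-dynamic-credentials"   # assumption: variable set name
  organization = var.tfc_organization_name
}

resource "tfe_variable" "provider_auth" {
  key             = "TFC_KUBERNETES_PROVIDER_AUTH"
  value           = "true"
  category        = "env"
  variable_set_id = tfe_variable_set.kubernetes_dynamic_credentials.id
}

resource "tfe_variable" "workload_identity_audience" {
  key             = "TFC_KUBERNETES_WORKLOAD_IDENTITY_AUDIENCE"
  value           = "terraform-cloud"   # assumption: matches the clientID of the OpenIDConnect CR
  category        = "env"
  variable_set_id = tfe_variable_set.kubernetes_dynamic_credentials.id
}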

3. Configure the provider

HCP Terraform populates the tfc_kubernetes_dynamic_credentials variable with the path to a workload identity token that is valid for 90 minutes.

tfc_kubernetes_dynamic_credentials
 variable "tfc_kubernetes_dynamic_credentials" {
  description = "Object containing Kubernetes dynamic credentials configuration"
  type = object({
    default = object({
      token_path = string
    })
    aliases = map(object({
      token_path = string
    }))
  })
}

output "kube_token" {
  sensitive = true
  value = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
}

provider configuration
terraform {
/**/ 
  cloud {
    organization = "<organization>"


    workspaces {
      project = "terraform-stories"
      tags = ["runtime-context"]      
    }
  }
/**/ 
  required_providers {
    btp = {
      source  = "SAP/btp"
    }    
    # https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs
    kubernetes = {
      source  = "hashicorp/kubernetes"
    }
    # https://registry.terraform.io/providers/alekc/kubectl/latest/docs
    kubectl = {
      source  = "alekc/kubectl"
      //version = "~> 2.0"
    }
  }
}
provider "kubernetes" {
 host                   = var.cluster-endpoint-url
 cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)
 token                  = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
}

provider "kubectl" {
 host                   = var.cluster-endpoint-url
 cluster_ca_certificate = base64decode(var.cluster-endpoint-ca)
 token                  = file(var.tfc_kubernetes_dynamic_credentials.default.token_path)
 load_config_file       = false

}
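With both providers wired to the dynamic credentials, any Kubernetes resource in the workspace authenticates with the short-lived token. As a quick smoke test, here is a hedged sketch that applies a trivial manifest through the kubectl provider; the ConfigMap name is arbitrary and the namespace reuses the one created during bootstrapping.

smoke test (sketch)
resource "kubectl_manifest" "smoke_test" {
  # assumption: arbitrary ConfigMap used only to verify the dynamic credentials work
  yaml_body = <<YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: tfc-dynamic-credentials-check
  namespace: quovadis-btp
data:
  managedBy: hcp-terraform
YAML
}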

One can retrieve both host and cluster_ca_certificate from the kyma cluster kubeconfig as follows:

kyma cluster kubeconfig
locals {
  labels = btp_subaccount_environment_instance.kyma.labels
}

data "http" "kubeconfig" {

  depends_on = [btp_subaccount_environment_instance.kyma]

  url = jsondecode(local.labels)["KubeconfigURL"]

  lifecycle {
    postcondition {
      condition     = can(regex("kind: Config",self.response_body))
      error_message = "Invalid content of downloaded kubeconfig"
    }
    postcondition {
      condition     = contains([200], self.status_code)
      error_message = self.response_body
    }
  } 

}

# yaml formatted default (oidc-based) kyma kubeconfig
locals {
  kubeconfig = yamldecode(data.http.kubeconfig.response_body)

  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  host                   = local.kubeconfig.clusters[0].cluster.server
}
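The bootstrapping configuration can then expose these values to the workspace that configures the providers above, for instance as outputs. The output names below simply mirror the cluster-endpoint-url and cluster-endpoint-ca variables used earlier and are an assumption.

cluster endpoint outputs (sketch)
output "cluster-endpoint-url" {
  value = local.host
}

output "cluster-endpoint-ca" {
  # kept base64-encoded, since the provider configuration calls base64decode() on it
  value = local.kubeconfig.clusters[0].cluster["certificate-authority-data"]
}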

4. Retrieve kyma cluster configuration

Examples

kyma cluster shoot_info
data "kubernetes_config_map_v1" "shoot_info" {
  metadata {
    name = "shoot-info"
    namespace = "kube-system"
  }
}

output "shoot_info" {
  value =  jsondecode(jsonencode(data.kubernetes_config_map_v1.shoot_info.data))
}
shoot_info = {
        domain            = "<shootName>.kyma.ondemand.com"
        extensions        = "shoot-auditlog-service,shoot-cert-service,shoot-dns-service,shoot-lakom-service,shoot-networking-filter,shoot-networking-problemdetector,shoot-oidc-service"
        kubernetesVersion = "1.30.6"
        maintenanceBegin  = "200000+0000"
        maintenanceEnd    = "000000+0000"
        nodeNetwork       = "10.250.0.0/16"
        nodeNetworks      = "10.250.0.0/16"
        podNetwork        = "100.64.0.0/12"
        podNetworks       = "100.64.0.0/12"
        projectName       = "kyma"
        provider          = "azure"
        region            = "westeurope"
        serviceNetwork    = "100.104.0.0/13"
        serviceNetworks   = "100.104.0.0/13"
        shootName         = "<shootName>"
    }

kyma cluster availability zones
data "kubernetes_nodes" "k8s_nodes" {}

locals {
  k8s_nodes = { for node in data.kubernetes_nodes.k8s_nodes.nodes : node.metadata.0.name => node }
}

data "jq_query" "k8s_nodes" {

  data =  jsonencode(local.k8s_nodes)
  query = "[ .[].metadata[] | { NAME: .name, ZONE: .labels.\"topology.kubernetes.io/zone\", REGION: .labels.\"topology.kubernetes.io/region\" } ]"
}

output "k8s_zones" { 
  value = jsondecode(data.jq_query.k8s_nodes.result)
}
k8s_zones = [
        {
            NAME   = "shoot--kyma--<shootName>-cpu-worker-0-z1-5759f-j6tsf"
            REGION = "westeurope"
            ZONE   = "westeurope-1"
        },
        {
            NAME   = "shoot--kyma--<shootName>-cpu-worker-0-z2-76d84-br7v6"
            REGION = "westeurope"
            ZONE   = "westeurope-2"
        },
        {
            NAME   = "shoot--kyma--<shootName>-cpu-worker-0-z3-5b77f-scbpv"
            REGION = "westeurope"
            ZONE   = "westeurope-3"
        },
    ]

kyma cluster list of modules
data "kubernetes_resource" "KymaModules" {
  api_version    = "operator.kyma-project.io/v1beta2"
  kind           = "Kyma"

  metadata {
    name      = "default"
    namespace = "kyma-system"
  }  
} 

locals {
  KymaModules = data.kubernetes_resource.KymaModules.object.status.modules
}

data "jq_query" "KymaModules" {
  depends_on = [
        data.kubernetes_resource.KymaModules
  ] 
  data =  jsonencode(local.KymaModules)
  query = "[ .[] | { channel, name, version, state, api: .resource.apiVersion, fqdn } ]"
}


output "KymaModules" {
  value =  jsondecode(data.jq_query.KymaModules.result)
}

KymaModules = [
        {
            api     = "operator.kyma-project.io/v1alpha1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/btp-operator"
            name    = "btp-operator"
            state   = "Ready"
            version = "1.1.18"
        },
        {
            api     = "operator.kyma-project.io/v1alpha1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/serverless"
            name    = "serverless"
            state   = "Ready"
            version = "1.5.1"
        },
        {
            api     = "connectivityproxy.sap.com/v1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/connectivity-proxy"
            name    = "connectivity-proxy"
            state   = "Ready"
            version = "1.0.4"
        },
        {
            api     = "operator.kyma-project.io/v1alpha1"
            channel = "regular"
            fqdn    = "kyma-project.io/module/api-gateway"
            name    = "api-gateway"
            state   = "Ready"
            version = "2.10.1"
        },
        {
            api     = "operator.kyma-project.io/v1alpha2"
            channel = "regular"
            fqdn    = "kyma-project.io/module/istio"
            name    = "istio"
            state   = "Ready"
            version = "1.11.1"
        },
    ]
