How many service account keys are stored every day as variables in Gitlab CI configurations? When a Google Service Account (GSA) key is saved in Gitlab, we face all the security issues of storing credentials outside of the cloud infrastructure: key rotation, age, destruction, location, etc.
There are 2 common reasons for developers to store GCP credentials in Gitlab CI:
- They use shared runners hosted outside of Google Cloud.
- They use specific runners deployed in a Google Kubernetes Engine cluster but do not use (or do not know about) the Workload Identity add-on.
You can continue to use GSA keys in Gitlab CI and secure them with external tools like Vault and Forseti, but that means more tools to manage.
The alternative that Google Cloud proposes is to enable the Workload Identity add-on. Workload Identity in Google Kubernetes Engine allows you to bind the Kubernetes service account (KSA) associated with a specific runner to a Google service account.
Note: At the time of writing this post, when you enable Workload Identity, some GKE add-ons will not work on the default node pool, because they depend on the Compute Engine metadata server while Workload Identity uses the GKE metadata server.
For this reason, I often recommend having a dedicated GKE cluster for Gitlab runners to avoid any errors for your business workload.
The first step is to create and configure our GKE devops cluster.
- We start by creating our GKE cluster :
gcloud projects create mycompany-core-devops
gcloud config set project mycompany-core-devops
gcloud services enable containerregistry.googleapis.com
gcloud container clusters create devops \
  --workload-pool=mycompany-core-devops.svc.id.goog
- Let's create a node pool for the runner jobs:
gcloud container node-pools create gitlab-runner-jobs-dev \
  --cluster=devops \
  --node-taints=gitlab-runner-jobs-dev-reserved-pool=true:NoSchedule \
  --node-labels=nodepool=dev \
  --enable-autoscaling --min-nodes=0 --max-nodes=3
- Configure kubectl to communicate with the cluster:
gcloud container clusters get-credentials devops
- Create the namespace to use for the Kubernetes service account.
kubectl create namespace dev
- Create the Kubernetes service account to use for the specific runner:
kubectl create serviceaccount --namespace dev app-deployer
- Create a Google service account for the specific runner:
gcloud projects create mycompany-core-security
gcloud config set project mycompany-core-security
gcloud iam service-accounts create app-dev-deployer
Note: For easier visibility and auditing, I recommend creating service accounts centrally in dedicated projects.
- Allow the Kubernetes service account to impersonate the Google service account by creating an IAM policy binding between the two. This binding allows the Kubernetes Service account to act as the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:mycompany-core-devops.svc.id.goog[dev/app-deployer]" \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
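To confirm the binding took effect, you can inspect the IAM policy attached to the Google service account (the project and account names below follow the ones used in this post):

```shell
# Print the GSA's IAM policy; the output should contain a binding for
# roles/iam.workloadIdentityUser with the KSA member added above.
gcloud iam service-accounts get-iam-policy \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com \
  --format=json
```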
- Add the iam.gke.io/gcp-service-account annotation to the Kubernetes service account, using the email address of the Google service account:
kubectl annotate serviceaccount \
  --namespace dev \
  app-deployer \
  iam.gke.io/gcp-service-account=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
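At this point you can verify the whole chain from inside the cluster. A common check (assuming the KSA is annotated with the GSA email as above) is to launch a throwaway pod that uses the KSA and ask which identity the metadata server hands out:

```shell
# Run a one-off pod with the app-deployer KSA and list the active
# credentials; with Workload Identity wired up correctly, gcloud should
# report the GSA email rather than a downloaded key.
kubectl run wi-test -it --rm --restart=Never \
  --image=google/cloud-sdk:slim \
  --namespace=dev \
  --overrides='{"spec": {"serviceAccountName": "app-deployer"}}' \
  -- gcloud auth list
```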
The next step is to assign the KSA to our Gitlab runner.
- Start by installing Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Add the Gitlab Helm repository:
helm repo add gitlab https://charts.gitlab.io
- Configure the runner:
Create the file values.yaml:
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<>"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
  create: true
metrics:
  enabled: true
runners:
  image: ubuntu:18.04
  locked: true
  pollTimeout: 360
  protected: true
  serviceAccountName: app-deployer
  privileged: false
  namespace: dev
  builds:
    cpuRequests: 100m
    memoryRequests: 128Mi
  services:
    cpuRequests: 100m
    memoryRequests: 128Mi
  helpers:
    cpuRequests: 100m
    memoryRequests: 128Mi
  tags: "k8s-dev-runner"
  nodeSelector:
    nodepool: dev
  nodeTolerations:
    - key: "gitlab-runner-jobs-dev-reserved-pool"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
You can find the description of each attribute in the Gitlab runner chart repository.
Get the Gitlab registration token in Project -> Settings -> CI/CD -> Runners, in the Setup a specific Runner manually section.
- Install the runner:
helm install -n dev app-dev-runner -f values.yaml gitlab/gitlab-runner
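A quick sanity check after the install (release and namespace names follow the ones used above): the chart should deploy a runner manager pod in the dev namespace, and once it registers, the runner appears as connected in the Gitlab UI under Settings -> CI/CD -> Runners.

```shell
# The runner manager pod should reach Running state.
kubectl get pods -n dev

# Helm should report the release as deployed.
helm status app-dev-runner -n dev
```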
Before running our first pipeline in Gitlab CI, let's create a new business project and add the Kubernetes cluster administrator permission to the GSA we created earlier.
gcloud projects create mycompany-business-dev
gcloud config set project mycompany-business-dev
gcloud projects add-iam-policy-binding mycompany-business-dev \
  --role roles/container.clusterAdmin \
  --member "serviceAccount:app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com"
Now we can run our pipeline:
stages:
  - dev

infra:
  stage: dev
  image:
    name: google/cloud-sdk
  script:
    - gcloud config set project mycompany-business-dev
    - gcloud services enable containerregistry.googleapis.com
    - gcloud container clusters create business
  tags:
    - k8s-dev-runner
The job will create a GKE cluster in the mycompany-business-dev project. We can follow the same steps for other environments, such as staging or production.
We can go further and only allow our GSA to create Kubernetes manifests in a specific namespace of our business cluster.
Create a file rbac-dev.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: app
  name: devops-app
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "secrets"]
    verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devops-app-binding
  namespace: app
subjects:
  - kind: User
    name: app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
roleRef:
  kind: Role
  name: devops-app
  apiGroup: rbac.authorization.k8s.io
Create the RBAC resources:
gcloud config set project mycompany-business-dev
gcloud container clusters get-credentials business
kubectl create namespace app
kubectl apply -f rbac-dev.yaml
And don't forget to assign permissions to create Kubernetes resources:
gcloud projects add-iam-policy-binding mycompany-business-dev \
  --role roles/container.developer \
  --member "serviceAccount:app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com"
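Before pushing a pipeline, you can dry-run the authorization with kubectl's impersonation support (this assumes your own account is allowed to impersonate users on the cluster, and that the GSA email follows the names used above):

```shell
# Ask the API server what the GSA identity may do in each namespace.
# The answer reflects the cluster's combined view of the impersonated
# user (RBAC bindings plus GKE's IAM integration).
kubectl auth can-i create pods -n app \
  --as=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
kubectl auth can-i create pods -n default \
  --as=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
```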
Let's create a new pod in the business cluster:
manifests:
  stage: dev
  image:
    name: google/cloud-sdk
  script:
    - gcloud config set project mycompany-business-dev
    - gcloud container clusters get-credentials business
    - kubectl run nginx --image=nginx -n app
  tags:
    - k8s-dev-runner
If you try to create the nginx pod in the default namespace, it will fail with an unauthorized access error.
In this post, we created a devops cluster, centralized our GSAs in a dedicated GCP project, and ended up deploying our GCP and Kubernetes resources in a business project.
This mechanism keeps your GSA credentials entirely inside Google Cloud. You can even create a cron job that disables the GSA in the evening and re-enables it in the morning of a working day.
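As a sketch of that idea, two cron entries on an admin machine (or an equivalent Cloud Scheduler job) can toggle the account outside working hours; gcloud provides service-accounts disable and enable commands for exactly this. The schedule below is an assumption, adjust it to your working day:

```shell
# crontab fragment: disable the GSA at 20:00 and re-enable it at 07:00,
# Monday to Friday. Assumes the machine running cron has gcloud
# authenticated with permissions on the mycompany-core-security project.
0 20 * * 1-5 gcloud iam service-accounts disable app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com --project=mycompany-core-security
0 7  * * 1-5 gcloud iam service-accounts enable app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com --project=mycompany-core-security
```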
If you have any questions or feedback, please feel free to leave a comment.
Otherwise, I hope I've convinced you to remove your GSA keys from Gitlab CI variables and use specific runners in a GKE cluster with Workload Identity enabled.
By the way, do not hesitate to share with peers 😊
Thanks for reading!