How many Google service account keys are stored, day after day, as variables in Gitlab CI configurations?
When a Google service account key is saved in Gitlab, we face all the security issues of storing credentials outside of the cloud infrastructure: access, authorization, key rotation, age, destruction, location, and so on.
There are 2 common reasons for developers to store GCP credentials in Gitlab CI:
- They use shared runners.
- They use specific runners deployed in a Google Kubernetes Engine cluster but do not use (or do not know about) the Workload Identity add-on.
You can continue to use GSA keys in Gitlab CI and secure them with external tools like Vault and Forseti, but that means more tools to manage.
The alternative that Google Cloud proposes is to enable the Workload Identity add-on. Workload Identity, provided in Google Kubernetes Engine, allows you to bind the Kubernetes service account (KSA) associated with the specific runner to a Google service account (GSA).
Note: At the time of writing this post, when you enable Workload Identity, you will not be able to use some GKE add-ons like Istio, Config Connector, or Application Manager on the default nodepool, because they depend on the Compute Engine metadata server while Workload Identity uses the GKE metadata server.
For this reason, I often recommend a dedicated GKE cluster for Gitlab runners, so that enabling Workload Identity cannot break your business workloads.
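If you want to check which metadata server a pod actually talks to, a quick sanity check is to query the standard metadata endpoint from inside the pod (this is the regular GCE/GKE metadata path, nothing specific to this setup):

# Run from inside a pod; on a Workload Identity-enabled nodepool this returns
# the bound GSA email, on a classic nodepool it returns the node's service account.
curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email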
Working with Workload Identity
The first step is to create and configure our GKE devops cluster.
- We start by creating our GKE cluster [1]:
gcloud projects create mycompany-core-devops
gcloud config set project mycompany-core-devops
gcloud services enable container.googleapis.com containerregistry.googleapis.com
gcloud container clusters create devops \
--workload-pool=mycompany-core-devops.svc.id.goog
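You can verify that Workload Identity is enabled on the new cluster with a standard describe command (the field path below comes from the GKE API; adjust if your gcloud version differs):

gcloud container clusters describe devops \
  --format="value(workloadIdentityConfig.workloadPool)"
# Expected output: mycompany-core-devops.svc.id.goog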
Let's create a nodepool for the runner jobs:
gcloud container node-pools create gitlab-runner-jobs-dev \
  --cluster=devops \
  --node-taints=gitlab-runner-jobs-dev-reserved=true:NoSchedule \
  --node-labels=nodepool=dev \
  --enable-autoscaling --min-nodes=0 --max-nodes=3
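To double-check the taint and label before wiring up the runner, you can describe the nodepool (again, the field paths follow the GKE API):

gcloud container node-pools describe gitlab-runner-jobs-dev \
  --cluster=devops \
  --format="value(config.taints, config.labels)"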
- Configure kubectl to communicate with the cluster:
gcloud container clusters get-credentials devops
- Create the namespace to use for the Kubernetes service account:
kubectl create namespace dev
- Create the Kubernetes service account to use for the specific runner:
kubectl create serviceaccount --namespace dev app-deployer
- Create a Google service account for the specific runner:
gcloud projects create mycompany-core-security
gcloud config set project mycompany-core-security
gcloud iam service-accounts create app-dev-deployer
Note: For easier visibility and auditing, I recommend creating service accounts centrally in a dedicated project.
- Allow the Kubernetes service account to impersonate the Google service account by creating an IAM policy binding between the two. This binding allows the Kubernetes Service account to act as the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:mycompany-core-devops.svc.id.goog[dev/app-deployer]" \
app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
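You can confirm the binding took effect by inspecting the GSA's IAM policy (a standard gcloud command, using only the names created above):

gcloud iam service-accounts get-iam-policy \
  app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
# Look for roles/iam.workloadIdentityUser bound to
# serviceAccount:mycompany-core-devops.svc.id.goog[dev/app-deployer]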
- Add the iam.gke.io/gcp-service-account=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com annotation to the Kubernetes service account, using the email address of the Google service account:
kubectl annotate serviceaccount \
--namespace dev \
app-deployer \
iam.gke.io/gcp-service-account=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
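At this point you can test the whole KSA-to-GSA chain with a throwaway pod, before installing the runner. This is a minimal sketch: it assumes your kubectl version supports --overrides and that the pod lands on a Workload Identity-enabled nodepool:

kubectl run -it --rm --restart=Never wi-test \
  --image=google/cloud-sdk:slim \
  --namespace=dev \
  --overrides='{"apiVersion": "v1", "spec": {"serviceAccountName": "app-deployer"}}' \
  -- gcloud auth list
# The active account should be app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com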
Assign KSA to Gitlab runner
The next step is to assign the KSA to our Gitlab runner.
- Start by installing Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Add the Gitlab Helm repository:
helm repo add gitlab https://charts.gitlab.io
- Configure the runner by creating the file values.yaml:
imagePullPolicy: IfNotPresent
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<>"
unregisterRunners: true
terminationGracePeriodSeconds: 3600
concurrent: 10
checkInterval: 30
rbac:
  create: true
metrics:
  enabled: true
runners:
  image: ubuntu:18.04
  locked: true
  pollTimeout: 360
  protected: true
  serviceAccountName: app-deployer
  privileged: false
  namespace: dev
  builds:
    cpuRequests: 100m
    memoryRequests: 128Mi
  services:
    cpuRequests: 100m
    memoryRequests: 128Mi
  helpers:
    cpuRequests: 100m
    memoryRequests: 128Mi
  tags: "k8s-dev-runner"
  nodeSelector:
    nodepool: dev
  nodeTolerations:
    - key: "gitlab-runner-jobs-dev-reserved"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
You can find the description of each attribute in the Gitlab runner chart repository [2].
- Get the Gitlab registration token from Project -> Settings -> CI/CD -> Runners, in the Setup a specific Runner manually section.
- Install the runner:
helm install -n dev app-dev-runner -f values.yaml gitlab/gitlab-runner
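If everything went well, the runner pod should come up in the dev namespace, and the runner should appear under your project's runners in Gitlab:

helm status app-dev-runner -n dev
kubectl get pods -n dev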
Using the specific runner in Gitlab CI
Before running our first pipeline in Gitlab CI, let's create a new business project and add the Kubernetes cluster administrator permission to the GSA we created earlier.
gcloud projects create mycompany-business-dev
gcloud config set project mycompany-business-dev
gcloud projects add-iam-policy-binding mycompany-business-dev \
--role roles/container.clusterAdmin \
--member "serviceAccount:app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com"
Now we can run our pipeline with the following .gitlab-ci.yml:
stages:
  - dev

infra:
  stage: dev
  image:
    name: google/cloud-sdk
  script:
    - gcloud config set project mycompany-business-dev
    - gcloud services enable container.googleapis.com
    - gcloud container clusters create business
  tags:
    - k8s-dev-runner
The job will create a GKE cluster in the mycompany-business-dev project. We can follow the same steps for a prod environment.
Go further
We can go further and only allow our GSA to create Kubernetes manifests in a specific namespace of our business cluster.
Create a file rbac-dev.yaml:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: app
  name: devops-app
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "secrets"]
    verbs: ["get", "list", "watch", "create", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: devops-app-binding
  namespace: app
subjects:
  - kind: User
    name: app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
roleRef:
  kind: Role
  name: devops-app
  apiGroup: rbac.authorization.k8s.io
Create the RBAC resources:
gcloud config set project mycompany-business-dev
gcloud container clusters get-credentials business
kubectl create namespace app
kubectl apply -f rbac-dev.yaml
And don't forget to grant the GSA permission to access Kubernetes resources in the cluster:
gcloud projects add-iam-policy-binding mycompany-business-dev \
--role roles/container.developer \
--member "serviceAccount:app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com"
Let's create a new pod in the business cluster:
manifests:
  stage: dev
  image:
    name: google/cloud-sdk
  script:
    - gcloud config set project mycompany-business-dev
    - gcloud container clusters get-credentials business
    - kubectl run nginx --image=nginx -n app
  tags:
    - k8s-dev-runner
If you try to create the nginx pod in the default namespace, it will fail with an unauthorized access error.
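You can check both cases without running a pipeline by impersonating the GSA with kubectl (this assumes your own account has enough rights on the cluster to impersonate users):

# Should answer "yes":
kubectl auth can-i create pods --namespace app \
  --as=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com
# Should answer "no":
kubectl auth can-i create pods --namespace default \
  --as=app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com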
Conclusion
In this post, we created a devops cluster, centralized our GSAs in a dedicated GCP project, and deployed our GCP and Kubernetes resources in a business project.
This mechanism secures your GSAs end to end. You can even create a cron job that disables the GSAs in the evening and re-enables them in the morning of each working day, as sketched below.
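Here is a minimal sketch of what such a schedule could look like in a crontab, assuming it runs somewhere with gcloud authenticated as an account allowed to administer the GSA (the schedule values are just examples):

# Disable the GSA every weekday evening at 8pm:
0 20 * * 1-5 gcloud iam service-accounts disable app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com --project=mycompany-core-security
# Re-enable it every weekday morning at 7am:
0 7 * * 1-5 gcloud iam service-accounts enable app-dev-deployer@mycompany-core-security.iam.gserviceaccount.com --project=mycompany-core-security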
If you have any questions or feedback, please feel free to leave a comment.
Otherwise, I hope I've convinced you to remove your GSA keys from Gitlab CI variables and use specific runners in a GKE cluster with Workload Identity enabled.
By the way, do not hesitate to share with peers 😊
Thanks for reading!
Documentation
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#authenticating_to
[2] https://gitlab.com/gitlab-org/charts/gitlab-runner/-/blob/main/values.yaml
Top comments (2)
Hey! Great article!
I just implemented it the same way.
I have a few questions regarding "gcloud container clusters get-credentials business":
How long are these credentials valid?
Could they be stolen and used for a long period, or are they short-lived tokens since GCP knows the call comes from a Cloud Identity account?
Is this the only way to auth kubectl?
Thanks a lot!
Hi Tim!
Thanks for your contribution!
The credentials live only as long as the Gitlab runner job, so they expire right after the stage completes.
For a Kubernetes cluster shared between different teams or departments, I would recommend using Kubernetes RBAC or Kubernetes Agents (Premium tiers). It helps respect the principle of least privilege.