Introduction
As organizations pursue greater scalability and operational efficiency, microservices have become a preferred architectural approach. This shift often leads to development teams being organized around individual microservices, with each team owning and maintaining its specific service. These microservices are typically deployed within a shared Kubernetes cluster.
However, this setup can introduce logistical challenges for cluster administrators. Team members often have varying levels of familiarity with Kubernetes concepts, and developer experience can differ significantly across teams. As a result, there is a growing need to isolate each team within its own partition of the cluster while still providing API access (via kubectl or k9s) so each team can manage its workloads independently.
In this article, you’ll learn how to partition a Kubernetes cluster into separate tenants and provide tenant administrators and users with API
access to their specific environments. This will be achieved using Capsule for multi-tenancy, Keycloak for user management, and kubelogin for dynamic context creation.
Setting up a development environment
Before setting up the development environment, ensure that kubectl and helm are installed on your local machine.
To test our solution, we’ll need a local development environment that simulates a Kubernetes cluster. There are several options available, but one of the most popular and user-friendly tools is Minikube.
You can follow this guide to install Minikube:
Optional: We recommend using k9s to easily view, edit, and delete cluster resources without typing kubectl commands. You will find installation instructions here.
Installing Dependencies to our Minikube cluster
1. Installing Keycloak
We can now start by deploying Keycloak, which will serve as our identity provider for managing users and authentication.
We’ll use Bitnami’s Helm chart for Keycloak, which makes the installation and configuration process straightforward.
kubectl create ns keycloak
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install keycloak bitnami/keycloak -n keycloak \
  --set auth.adminUser=admin \
  --set auth.adminPassword=admin123 \
  --set postgresql.enabled=true \
  --set postgresql.auth.postgresPassword=admin123 \
  --set postgresql.auth.username=keycloak \
  --set postgresql.auth.password=keycloak123 \
  --set postgresql.auth.database=keycloak
After a short wait, verify the installation:
kubectl get pods -n keycloak
2. Installing Capsule
Capsule is a Kubernetes multi-tenancy operator that helps isolate workloads between teams while sharing the same cluster. In this step, we'll install Capsule and configure it to recognize three user groups: the default capsule.clastix.io group, plus group-a and group-b.
Save the following as capsule-values.yaml
This file contains the full configuration for Capsule. It defines security contexts, CRD behaviour, user group access, and more.
global:
  jobs:
    kubectl:
      ttlSecondsAfterFinished: 60
manager:
  options:
    forceTenantPrefix: true
    capsuleUserGroups: ["capsule.clastix.io", "group-a", "group-b"]
Install Capsule with configuration:
helm repo add projectcapsule https://projectcapsule.github.io/charts
helm repo update
kubectl create ns capsule-system
helm install capsule projectcapsule/capsule -n capsule-system --version 0.7.4 -f capsule-values.yaml
After a short wait, verify the installation:
kubectl get pods -n capsule-system
You should see the Capsule manager pod running in the capsule-system namespace.
3. Install kubelogin
To create kubectl contexts dynamically when authenticating via OIDC, we will need kubelogin, which works as a plugin for the kubectl tool. The installation instructions can be found here.
Setting up OIDC configuration with Minikube + Keycloak + Kube OIDC Login
1. Install Ingress Controller in Minikube
In order to provide an HTTPS-secured OIDC issuer URL to our Minikube cluster's API server, we first need to enable an ingress controller in our Minikube installation.
With the Minikube cluster up and running, run:
minikube addons enable ingress
After a short wait, an ingress controller will be installed in our Minikube cluster.
2. Install mkcert and create a local certificate
mkcert is a zero-config tool that allows us to create locally trusted certificates.
After installing it, run mkcert -install once to add its local CA to your system trust store, then create a certificate for keycloak.local:
mkcert -cert-file tls.crt -key-file tls.key keycloak.local
3. Reconfigure Keycloak to include Ingress configuration
With the certificate at hand, we will update our Keycloak installation to include the ingress configuration.
First, create a TLS secret from the certificate:
kubectl create secret tls keycloak-tls --cert=tls.crt --key=tls.key --namespace=keycloak
And afterwards update the existing keycloak configuration.
helm upgrade keycloak bitnami/keycloak -n keycloak \
  --set auth.adminUser=admin \
  --set auth.adminPassword=admin123 \
  --set postgresql.enabled=true \
  --set postgresql.auth.postgresPassword=admin123 \
  --set postgresql.auth.username=keycloak \
  --set postgresql.auth.password=keycloak123 \
  --set postgresql.auth.database=keycloak \
  --set ingress.enabled=true \
  --set ingress.ingressClassName=nginx \
  --set ingress.tls=true \
  --set ingress.extraTls[0].hosts[0]=keycloak.local \
  --set ingress.extraTls[0].secretName=keycloak-tls
Now our Keycloak server is exposed, but your browser still needs to resolve keycloak.local to the Minikube IP. To achieve that, edit your hosts file (/etc/hosts on Linux and macOS, C:\Windows\System32\drivers\etc\hosts on Windows) and add a line in the format "<Minikube IP> keycloak.local". You can get the Minikube IP with the following command.
minikube ip
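If you prefer to script this step, a small helper can build the hosts entry. This is only a sketch; the hosts file path and the use of sudo are assumptions about your setup:

```shell
# Build the "<ip> keycloak.local" hosts-file line for a given IP address
hosts_entry() {
  printf '%s keycloak.local\n' "$1"
}

# Example (requires root privileges to write /etc/hosts):
#   hosts_entry "$(minikube ip)" | sudo tee -a /etc/hosts
```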
After a brief moment you should be able to see the Keycloak login page in your browser at https://keycloak.local.
4. Create our test realm and user
Since the Keycloak front-end is now reachable, we will use it to create our first test user (remember: the username is admin and the password is admin123). First we need to create a test realm; to do that, navigate to Manage realms > Create Realm and fill out the form:
Afterwards we will navigate to Users > Add User and submit the creation form as follows:
Having done that, we need to configure a password for our user by navigating to Users > Our user > Credentials > Set Password, where we will add our password as follows:
Important Notice: Keycloak is a very active project and these instructions may be outdated at time of reading.
5. Create a Kubernetes client
In Keycloak, a client represents an application or service that wants to authenticate users or access protected resources.
Clients can be web applications, mobile apps, APIs, or any system that needs to integrate with Keycloak for authentication and authorization. Each client is configured with specific settings like redirect URIs, authentication flows, and access permissions that define how it can interact with Keycloak's identity and access management features.
So we will create a client named kubernetes. Click Clients > Create Client and fill in the creation wizard page by page as follows.
6. Create a Kubernetes client dedicated mapper
A Keycloak mapper dedicated to one client is a configuration that defines how user data (like roles, attributes, or groups) is included in tokens only for a specific client. It customizes the token content that the client receives, without affecting others.
First of all we need to navigate to Clients > kubernetes > Client scopes > kubernetes-dedicated > Configure a new mapper. There we will select Group Membership and fill it out as follows:
Afterwards we will repeat the process and select audience and fill it out as follows:
7. Test our user and client setup
In order to execute this step we will need first to export some variables.
export KEYCLOAK=keycloak.local
export REALM=demo
export OIDC_ISSUER=${KEYCLOAK}/realms/${REALM}
And then execute the command below. You can find your client secret by navigating to Clients > kubernetes > Credentials; copy it and export it as OIDC_CLIENT_SECRET before running the command.
curl -k -s https://${OIDC_ISSUER}/protocol/openid-connect/token \
-d grant_type=password \
-d response_type=id_token \
-d scope=openid \
-d client_id=kubernetes \
-d client_secret=${OIDC_CLIENT_SECRET} \
-d username=test \
-d password=test | jq
The expected result is like the one below:
{"access_token":"**token gibberish**","not-before-policy":0,"session_state":"e9cfe1a8-5d84-41db-a2ef-0cac8aa7787d","scope":"openid email audience groups profile"}
Important: It is critical that you see groups and audience in the request's response. We will leverage this info later for Capsule integration.
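Note that the groups and audience claims live inside the base64url-encoded token itself, not in the surrounding JSON. If you want to inspect them, a small helper can decode a JWT payload. This is a sketch assuming a POSIX shell with base64 available; pipe the result through jq to filter claims:

```shell
# Decode the payload (second dot-separated segment) of a JWT.
# JWTs use unpadded base64url, so restore the standard alphabet and padding first.
decode_jwt_payload() {
  payload=$(printf '%s' "$1" | cut -d '.' -f2 | tr '_-' '/+')
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}

# Example: decode_jwt_payload "$ACCESS_TOKEN" | jq '{groups, aud}'
```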
8. Configure Minikube API Server to use our Keycloak server as its OIDC Issuer
To authenticate our users based on Keycloak's responses, we need to make the Kube API server trust Keycloak.
First, we will create a custom directory on our Minikube node.
minikube ssh -- sudo mkdir -p /var/lib/minikube/certs/custom
After that, we will copy the tls.crt file that we created earlier to our Minikube node.
minikube cp /path/to/tls.crt /var/lib/minikube/certs/custom/tls.crt
Finally we will restart our minikube cluster with our new configuration.
minikube start \
  --extra-config=apiserver.oidc-issuer-url=https://keycloak.local/realms/demo \
  --extra-config=apiserver.oidc-username-claim=preferred_username \
  --extra-config=apiserver.oidc-ca-file=/var/lib/minikube/certs/custom/tls.crt \
  --extra-config=apiserver.oidc-groups-claim=groups \
  --extra-config=apiserver.oidc-username-prefix=- \
  --extra-config=apiserver.oidc-client-id=kubernetes
For more details on configuring Minikube with OIDC, you can find information here.
9. Connect to the cluster via kube oidc login
Now it is time to validate that we can log in to our cluster through Keycloak using kubectl oidc-login.
kubectl oidc-login setup --oidc-issuer-url=https://keycloak.local/realms/demo --oidc-client-id=kubernetes --oidc-client-secret=$OIDC_CLIENT_SECRET --certificate-authority=./tls.crt
If you were prompted to visit localhost:8000 and authenticated with username test and password test, then congrats: you have successfully connected your kubectl to the Minikube cluster via Keycloak. That is great, but we are not done yet. Now it is time to set up the tenancy side of things.
Configuring Cluster Tenancy
Back when we configured Capsule, we specified three capsuleUserGroups in our YAML configuration (capsule-values.yaml).
These three groups are the key to partitioning the cluster, so we will leverage them to complete our endeavour.
1. Create Keycloak User Groups
These three groups should exist not only in Capsule but also in Keycloak, so we will navigate to Groups > Create Group and create a group called capsule.clastix.io. After creating it, we will click capsule.clastix.io and create two child groups, one called group-a and one called group-b.
2. Create Capsule Tenants
A tenant is Capsule's way of partitioning the cluster and designating partition (tenant) admins. More information about the Kubernetes resource can be found here. We will create two tenants, one called group-a and one called group-b. Copy the code blocks below into a YAML file and then run:
kubectl apply -f /path/to/file
---
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: group-a
spec:
  owners:
    - name: group-a
      kind: Group
---
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: group-b
spec:
  owners:
    - name: group-b
      kind: Group
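Tenants can carry more policy than just owners. As an illustrative sketch (the field names reflect the Capsule v1beta2 API as we understand it, and the quota values are made-up examples, not recommendations), a tenant can cap the number of namespaces and attach resource quotas:

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: group-a
spec:
  owners:
    - name: group-a
      kind: Group
  namespaceOptions:
    quota: 5                  # at most 5 namespaces for this tenant (example value)
  resourceQuotas:
    items:
      - hard:
          limits.cpu: "4"     # example value
          limits.memory: 8Gi  # example value
```

Check the Capsule documentation for the full Tenant spec supported by your chart version.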
3. Create our tenant admins
Our test user has proven invaluable so far, but we will need to create two more users in our Keycloak demo realm. Follow the exact same process for the new users, with one addition: you can make them join groups on the user creation form. Choose the group that corresponds to each user's name. From now on, the article will refer to the two new users as group-a-admin and group-b-admin.
4. Login as group-a admin
To log in as the group-a tenant admin, initiate the OIDC login process from your terminal with the same command as before. On the login page, use the group-a credentials. kubelogin will then prompt you to run the following command.
kubectl config set-credentials oidc \
--exec-api-version=client.authentication.k8s.io/v1 \
--exec-interactive-mode=Never \
--exec-command=kubectl \
--exec-arg=oidc-login \
--exec-arg=get-token \
--exec-arg="--oidc-issuer-url=https://keycloak.local/realms/demo" \
--exec-arg="--oidc-client-id=kubernetes" \
--exec-arg="--oidc-client-secret=mVBu9OyoBX6YPmuD0TgwZtNRHKjNAoc9" \
--exec-arg="--certificate-authority=./tls.crt"
This command sets up credentials for our oidc user, which we can use to log in as anyone whose credentials we hold. Before testing it, we need to configure a kubectl context with the following command.
kubectl config set-context oidc@minikube --cluster='minikube' --namespace='default' --user='oidc'
Verify the login by first changing your kubectl context:
kubectl config use-context oidc@minikube
And then running the following command:
kubectl create ns test
If the result was the following:
Error from server (Forbidden): admission webhook "namespaces.projectcapsule.dev" denied the request: The namespace doesn't match the tenant prefix, expected group-a-test
Then congrats: you have managed to configure tenancy in the cluster. This error is expected, because the forceTenantPrefix option we set earlier requires namespaces to be prefixed with the tenant name.
Experimenting with our solution
First of all, let us start by creating a group-a tenant namespace.
kubectl create ns group-a-test
Let us try creating an nginx deployment in our new group-a namespace.
kubectl create deployment test-deployment --image=nginx -n group-a-test
Awesome, now let us see whether another tenant can interact with our nginx deployment in the group-a tenant. Use:
kubectl oidc-login clean
to remove your token and session from kubectl. Then go to Sessions in Keycloak and remove the existing group-a session.
Now execute the following command which will prompt you to re-login.
kubectl get pods -A
Login as the group-b tenant admin and try the following command:
kubectl delete deployment test-deployment -n group-a-test
If you get the following error:
Error from server (Forbidden): deployments.apps "test-deployment" is forbidden: User "group-b" cannot delete resource "deployments" in API group "apps" in the namespace "group-a-test"
The tenancy has been successfully set up. Now the possibilities are endless; you can:
- Create many tenants and many admins.
- Make users part of many groups.
- Create tenant admins that are service accounts for automation pipelines.
- Create cluster-wide admin groups.
- Create different roles that tenant owners adopt to restrict permissions.
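On that last point, Capsule lets you bind specific cluster roles to a tenant owner instead of the defaults. The following is a hedged sketch (the view role here is an example choice, and the clusterRoles field reflects the v1beta2 API as we understand it; verify against the Capsule docs for your version):

```yaml
apiVersion: capsule.clastix.io/v1beta2
kind: Tenant
metadata:
  name: group-a
spec:
  owners:
    - name: group-a
      kind: Group
      clusterRoles:   # roles bound to the owner within tenant namespaces
        - view        # example: make this owner read-only
```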
This solution may be a little configuration-heavy, but once set up it is as pliable as Play-Doh. So have fun experimenting!
In case you are looking for an environment where learning and experimenting with new solutions is key, we invite you to explore our current job opportunities and be part of Agile Actors.