Chris White

Jenkins Agents On Kubernetes

Jenkins is a Java-based CI/CD system that can be self-hosted. To run builds, Jenkins uses a component called an agent to execute build commands. Agents can be long-running services or provisioned on demand through something like a cloud provider or Docker containers. Kubernetes is one such solution for deploying build agents on demand. This article looks at how to set up Kubernetes as a provider for Jenkins build agents.

Jenkins Preparation

Long-lived credentials can be problematic to deal with when setting up authentication. In some cases there isn't even an expiration on the credentials at all. To work around this we're going to set up Jenkins as an OpenID Connect (OIDC) provider.

Plugin Setup

Jenkins doesn't provide this functionality out of the box. A plugin will be required to enable it. We'll go ahead and install this plugin, along with the Kubernetes plugin that will be used later:

  1. Go to Manage Jenkins at the instance root
  2. Click "Plugins" under "System Configuration"
  3. Click "Available plugins" on the left
  4. Search for "kubernetes"
  5. Select the plugin with the description "This plugin integrates Jenkins with Kubernetes"
  6. Clear the search box and enter "oidc"
  7. Select the "OpenID Connect Provider" plugin
  8. Click "Install" in the upper right

OIDC Claim Templates

Before doing the actual setup I'd like to detail how Jenkins provides OIDC authentication:

  1. Go to Manage Jenkins in the instance root
  2. Click on "Security" under the "Security" category
  3. While we're here, scroll down to "Agents"
  4. Click on "Random" for TCP port for inbound agents as this will be required for the JNLP agent to work in our Kubernetes setup
  5. Now scroll down to "OpenID Connect"
  6. Expand "Claim templates"

Here there are a number of variables with per-build and global scope. For this use case I simply want basic access to manage pods as build agents, so I won't be doing anything build-scoped. Instead I'll be working off the subject (sub) claim, which is set to the URL of the Jenkins instance. A claim can be any variable that Jenkins is aware of, including ones introduced by plugins; basic global ones can be found in the Jenkins documentation. Go ahead and click "Save" at the bottom when you're finished inspecting the claim templates to ensure the agent port setting is applied.

Creating an OIDC token

An OIDC token acts as the credential that Kubernetes will use to call back to Jenkins and confirm the token. It works much like a standard credential but with a few extra attributes:

  1. Go to Manage Jenkins at the instance root
  2. Select "Credentials" under the "Security" category
  3. Click "(global)" under the "Stores scoped to Jenkins" section at the bottom
  4. Click "Add Credentials" at the upper right
  5. Fill out the following:
  • Kind: OpenID Connect id token
  • Scope: System
  • Issuer: (leave blank)
  • Audience: [unique value of some kind]
  • ID: jenkins-kubernetes-integration
  • Description: OpenID Connect Credentials For Kubernetes

The important part here is the Audience value, which is how Kubernetes will recognize tokens intended for it. Make sure it's unique to you and note it down somewhere for later use. That's as much as can be done on the Jenkins side for now, so it's time to configure Kubernetes.
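A quick way to sanity check the provider side is to fetch the OIDC discovery document, which lives at the issuer URL plus /.well-known/openid-configuration. Here I'm assuming the https://jenkins:8080/oidc issuer we'll point the kube-apiserver at later; -k skips certificate verification, which is fine for a one-off check against a self-signed setup:

$ curl -k https://jenkins:8080/oidc/.well-known/openid-configuration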

Kubernetes Setup

The OIDC setup will actually offload a decent amount of authentication setup that we'd normally have to do. You don't even need a service account! However, before we can do anything with OIDC we'll need to set it up so the kube-apiserver understands it.

OIDC Setup

Note: There's actually a Structured Authentication Config established via KEP-3331. It's available in v1.28 behind a feature gate and removes the limitation of only having one OIDC provider. I may look into doing an article on it, but for now I'll deal with the issue in a manner that should work even with somewhat older versions of Kubernetes.

In order for OIDC to work the apiserver needs to know it exists. This can be achieved via command line options. So go ahead and open:

$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml

This will bring up the YAML configuration for the API server. The command section already has a decent number of arguments, which we'll be adding to:

    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --oidc-issuer-url=https://jenkins:8080/oidc
    - --oidc-client-id=[unique value of some kind]
    - --oidc-username-claim=sub
    - --oidc-username-prefix="oidc:"

So first off is the OIDC issuer URL, which is the Jenkins instance root plus /oidc. --oidc-client-id is the unique value that was set up as the Audience during the OpenID Connect token creation (note: it's not the credential ID). Remember the OIDC claim template we were looking at? That's what --oidc-username-claim is about: it indicates which claim will be used as the User when dealing with any authorization components. In this case that will be https://jenkins:8080/, since that's my Jenkins instance URL. The prefix exists to avoid conflicting with an actual in-system user's name (a service account, for example). Unfortunately, because it's YAML we have to enclose it in double quotes to avoid parsing issues (this will lead to an interesting quirk later). Once this is done simply save and exit. If the YAML file doesn't have any issues, the process list should show the apiserver running with the new options.
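If you want to double check that the kubelet restarted the static pod with the new flags, the running process is the quickest place to look. Something along these lines should list the four --oidc-* options we just added:

$ ps aux | grep kube-apiserver | tr ' ' '\n' | grep oidc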

Namespace Creation

Now it's time to work on setting up some permissions. To keep them fine-grained we'll use the namespace feature for scoping. Namespaces provide a way to isolate Kubernetes resources. Here is an example of the namespaces on a cluster I operate:

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   35h
kube-node-lease   Active   35h
kube-public       Active   35h
kube-system       Active   35h

default is where anything that requires a namespace ends up if one is not explicitly given (tools such as kubens can alter this behavior). In the context of Jenkins, namespaces are a useful way to isolate individual Jenkins instances that want to utilize the same Kubernetes cluster. Creating a namespace is a simple kubectl command:

$ kubectl create namespace jenkins
namespace/jenkins created
$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   36h
jenkins           Active   3s
kube-node-lease   Active   36h
kube-public       Active   36h
kube-system       Active   36h
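As a side note, if you'd rather not pass --namespace jenkins to every command while poking around, kubectl itself can switch the default namespace for the current context (this is essentially what kubens does under the hood):

$ kubectl config set-context --current --namespace=jenkins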

Now it's time to establish some actual permissions which will be bound to this namespace.

Role Creation

The permissions here will be enough to maintain and monitor agent pods. We'll also use a RoleBinding to attach it to our OIDC entity:

jenkins-role.yaml

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-oidc
  namespace: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-oidc-binding
  namespace: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins-oidc
subjects:
- kind: User
  name: "\"oidc:\"https://jenkins:8080/"
  namespace: jenkins


So here the Role, RoleBinding, and the subject of the binding are all bound to the jenkins namespace. Because oidc: was enclosed in double quotes in the apiserver arguments, the quotes themselves became part of the literal prefix value, which leads to the odd case of escaping quotes in the subject name so they match properly. After that comes the Jenkins URL, which maps to the sub value we're getting our username from via --oidc-username-claim=sub. All that's left is to apply the YAML and create the resources:

$ kubectl apply -f jenkins-role.yaml
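A nice way to confirm the binding works without involving Jenkins at all is kubectl's impersonation support. Keep in mind the username has to include the literal quotes from the prefix, so the shell quoting gets a little awkward; this should answer "yes" if the Role and RoleBinding are in place:

$ kubectl auth can-i create pods --namespace jenkins --as '"oidc:"https://jenkins:8080/'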

Jenkins Kubernetes Setup

Now it's time to set up our Kubernetes cluster as an agent cloud. Before beginning, make sure to grab the cluster's CA certificate via:

$ kubectl config view --raw -o go-template='{{index ((index (index .clusters 0) "cluster")) "certificate-authority-data"|base64decode}}'
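On reasonably recent clusters (v1.21+) the same CA certificate is also published in every namespace as the kube-root-ca.crt ConfigMap, so this should work as well:

$ kubectl get configmap kube-root-ca.crt -o jsonpath='{.data.ca\.crt}'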

This is used so the Jenkins system knows the API server's certificate is trusted and won't complain about SSL verification errors. Now for the cluster setup:

  1. Go to Manage Jenkins at the instance root
  2. Click "Clouds" under "System Configuration"
  3. Click "New Cloud"
  4. Enter "Kubernetes Cluster" as the Cloud name
  5. Select "Kubernetes" as the Type
  6. In the new page, expand "Kubernetes Cloud details"
  7. Enter in the following:
  • Enter the API server's URL, https://k8-control-plane:6443/ as an example
  • Enter the Kubernetes CA cert that was obtained earlier
  • Enter "jenkins" under "Kubernetes Namespace"
  • For "Credentials", select the OpenID Connect credentials we created

Click Test Connection to confirm everything works. If it does, go ahead and click Save. If something isn't working make sure the Kubernetes API server is up and has the OIDC arguments. Also validate your Role and RoleBinding in case permissions are listed as the issue.
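If the connection test fails with authentication errors, the kube-apiserver logs are usually the most informative place to look for OIDC complaints. The component label below is what kubeadm puts on its static pods; adjust it if your cluster is set up differently:

$ kubectl -n kube-system logs -l component=kube-apiserver | grep -i oidc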

Agent Setup

This is actually an optional step and mostly tied to my particular setup. My Jenkins instance is resolved via /etc/hosts and has an SSL certificate signed by a self-managed Root CA. This leads to two issues:

  1. The Jenkins agent works off the Java certificate trust store which doesn't know about my root CA cert out of the box
  2. The Kubernetes cluster works off its own DNS, and even when DNS is customized it expects a domain name (which jenkins doesn't have).

So time to address these two issues with their own resolutions.

Agent Customization

To handle the certificate issue, the root certificate for my CA needs to be trusted by the agent's Java CA store. While probably not necessary, I'll also add it to the system's trust store. In this case I'll be using a certificate bundle, which is a combination of the root CA and intermediate CA certificates. While the root CA alone is sometimes good enough, I tend to find that some software complains if the intermediate isn't also present. So I'll go ahead and make a Dockerfile with the certificate chain in the same folder:

# Pin a specific version of the stock inbound (JNLP) agent image
FROM jenkins/inbound-agent:3148.v532a_7e715ee3-1-jdk11
# Certificate chain (root CA + intermediate CA) for the Jenkins instance
COPY ca-chain.crt /usr/local/share/ca-certificates/
USER root
RUN apt-get update
RUN apt-get upgrade -y
# Trust the CA chain at the OS level
RUN update-ca-certificates
# Trust the CA chain in the Java cacerts store used by the agent
RUN ${JAVA_HOME}/bin/keytool -import -trustcacerts -cacerts -noprompt -storepass changeit -alias jenkins-ca -file /usr/local/share/ca-certificates/ca-chain.crt

Here I'm using a specific version tag of jenkins/inbound-agent, which is the container image used for Jenkins agents that are part of the Kubernetes plugin workflow. This means it will be spun up regardless of what pipeline declarations you have. So some important things to note here:

  • /usr/local/share/ca-certificates is the location where certs need to be for update-ca-certificates to make them trusted root certs
  • The really long keytool invocation is what pulls the certificate into the Java cacerts trust store so the Jenkins Java agent recognizes my Jenkins SSL certificate while in the Kubernetes cluster (a quick verification is shown right after this list)
  • System packages are updated to ensure software isn't too outdated (though to be fair, Jenkins agent images are released fairly frequently)
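Once the image is built and tagged (covered in the next sections), a quick sanity check that the certificate actually landed in the Java trust store might look like this, assuming the k8s.io/jenkins-agent:v1.0 tag used later on:

$ sudo nerdctl --namespace k8s.io run --rm --entrypoint keytool k8s.io/jenkins-agent:v1.0 -list -cacerts -storepass changeit -alias jenkins-ca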

Agent Build

Since Kubernetes here works off containerd, I'll take a different approach to container builds by using nerdctl and the BuildKit that comes bundled with it. I'll do this on the amd64 control plane node since it's beefier than my Raspberry Pi workers for handling builds and build-related services. Go ahead and download and unpack the latest nerdctl release as of writing (check the releases page in case there's a newer one):

$ wget https://github.com/containerd/nerdctl/releases/download/v1.5.0/nerdctl-full-1.5.0-linux-amd64.tar.gz
$ sudo tar Cxzvf /usr/local nerdctl-full-1.5.0-linux-amd64.tar.gz

Now we can start the buildkitd daemon to prepare for building the container:

$ sudo systemctl enable buildkit --now

buildkit is recognized as a service due to /usr/local/lib/systemd/system being in the unit path:

$ sudo systemctl --no-pager --property=UnitPath show | tr ' ' '\n'
UnitPath=/etc/systemd/system.control
/run/systemd/system.control
/run/systemd/transient
/run/systemd/generator.early
/etc/systemd/system
/etc/systemd/system.attached
/run/systemd/system
/run/systemd/system.attached
/run/systemd/generator
/usr/local/lib/systemd/system
/lib/systemd/system
/usr/lib/systemd/system
/run/systemd/generator.late
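With the daemon up, you can confirm buildkitd is actually reachable before kicking off a build using buildctl, which ships as part of the nerdctl-full bundle:

$ sudo buildctl debug workers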

Next up is that containerd has the concept of namespaces much like Kubernetes. You can see an example here:

# sudo ctr namespaces list
NAME     LABELS
buildkit
default
k8s.io
moby

moby is usually the namespace Docker uses, while k8s.io is what Kubernetes uses. This means that if we want Kubernetes to recognize our built agent image, we need to make sure it's in the k8s.io namespace. I'll also build for both amd64 and arm64 platforms, since my workers are Raspberry Pis and I may decide to support more agents by allowing pods on my control plane. The end result looks something like this:

$ sudo nerdctl build --namespace k8s.io --platform=amd64,arm64 -t k8s.io/test:v1.0 .
$ sudo ctr --namespace k8s.io images list | grep k8s.io/test
k8s.io/test:v1.0

Agent Deployment

Now there's a slight problem: only my control plane has a copy of this image. The way Kubernetes works, each node has its own containerd instance. This means a worker node can't see the containerd images of the control plane or other workers, so I'll need to make sure all nodes know about the image. One solution would be to run a local image registry. However, given that it's a two-worker cluster, I'll take a simpler route:

worker_image_deploy.sh

#!/bin/bash
TARGET="k8s.io/${1}"
echo "Building image ${TARGET}"
sudo nerdctl --namespace k8s.io build --platform=amd64,arm64 -t "${TARGET}" .
echo "Exporting image ${TARGET}"
sudo ctr --namespace=k8s.io image export --platform linux/arm64 image.tar "${TARGET}"
for host in rpi rpi2
do
        echo "Deploying image to ${host}"
        scp image.tar ${host}:~/
        echo "Importing image ${TARGET}"
        ssh ${host} sudo ctr --namespace=k8s.io images import --base-name "${TARGET}" image.tar
        ssh ${host} sudo rm image.tar
done

sudo rm image.tar

So what this does is build the image in the k8s.io namespace given a certain target name. Then it exports the image, copies it to each worker, and imports it into the worker's containerd. It also cleans up the exported tarballs so they don't take up extra space. rpi and rpi2 are hosts I have set up, so you'll want to replace them with whatever worker hosts/IPs/SSH connection strings you have. Now to build out the Jenkins agent:

$ ./worker_image_deploy.sh jenkins-agent:v1.0
<build output spam snip>
Exporting image k8s.io/jenkins-agent:v1.0
Deploying image to rpi
image.tar                                 100%  163MB  44.7MB/s   00:03
Importing image k8s.io/jenkins-agent:v1.0
unpacking k8s.io/jenkins-agent:v1.0 (sha256:eae87bca0014a6f6f6bc24bd5c9e4a93a454909b82e7f73cfedfa60db2c5260c)...done
Deploying image to rpi2
image.tar                                 100%  163MB  74.9MB/s   00:02
Importing image k8s.io/jenkins-agent:v1.0
unpacking k8s.io/jenkins-agent:v1.0 (sha256:eae87bca0014a6f6f6bc24bd5c9e4a93a454909b82e7f73cfedfa60db2c5260c)...done

If I check both my workers:

# rpi
$ sudo ctr --namespace k8s.io images list | grep k8s.io/jenkins-agent:v1.0
k8s.io/jenkins-agent:v1.0

# rpi2
$ sudo ctr --namespace k8s.io images list | grep k8s.io/jenkins-agent:v1.0
k8s.io/jenkins-agent:v1.0

Both of them now recognize the agent image and can interact with it for pod purposes.

Jenkins Agent Configuration

Now that our custom agent is built we've taken care of the certificate issue. It's time to make our new image the designated JNLP agent, and then solve the host issue:

  1. Go to Manage Jenkins at the instance root
  2. Select "Clouds" under "System Configuration"
  3. Select "Kubernetes Cluster" (or the name you chose for it)
  4. Select "Configure" on the left
  5. Expand "Kubernetes Cloud details"
  6. Expand "Advanced"
  7. Enter "jnlp" as "Defaults Provider Template Name"
  8. Expand "Pod Templates"
  9. Click on "Add Pod Template"
  10. Name it "jnlp"
  11. Expand "Pod Template details"
  12. Enter k8s.io/jenkins-agent:v1.0 for "Docker image"
  13. Ensure "Always pull image" is not selected, so it doesn't try to pull our local image from Docker Hub
  14. Go down to "Raw YAML for the Pod"
  15. Enter the following and make sure "Merge" is set for "Yaml merge strategy":
spec:
  hostAliases:
  - ip: [ip address of jenkins host]
    hostnames:
    - "jenkins"

Be sure to replace [ip address of jenkins host] with what your Jenkins instance would resolve to from a worker DNS point of view. Now go ahead and click "Save" at the bottom.
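If you want to confirm the host alias behaves as expected before involving Jenkins, a throwaway pod with the same hostAliases entry will show it in its /etc/hosts (the IP placeholder is the same one as above):

$ kubectl run hostalias-test -n jenkins --image=busybox:latest --restart=Never --rm -i \
    --overrides='{"apiVersion":"v1","spec":{"hostAliases":[{"ip":"[ip address of jenkins host]","hostnames":["jenkins"]}]}}' \
    -- cat /etc/hosts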

Jenkins Pipeline Test

It's finally time to test that Jenkins pipelines are able to work off the Kubernetes cluster. Go to the Jenkins instance root:

  1. Select "New Item"
  2. Enter "Kubernetes Pipeline Test" for the name
  3. Select "Pipeline" for the type
  4. Select "OK" at the bottom
  5. Scroll down to "Pipeline" on the next screen and enter the following for the pipeline script:
podTemplate(containers: [
    containerTemplate(name: 'busybox', image: 'busybox:latest', command: 'sleep', args: '99d')
  ]) {
    node(POD_LABEL) {
        stage('Test') {
            container('busybox') {
                sh 'echo "test"'
            }
        }
    }
}

Finally click "Save" at the bottom. Now we'll start a build with "Build Now" on the left. If everything went well you'll see something similar to the following output:

(Image: the stage view of the pipeline showing a successful build)
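While the build runs, you can also watch the agent pod get scheduled and torn down on the Kubernetes side:

$ kubectl get pods -n jenkins --watch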

So what happened is our new JNLP agent orchestrated the build process with proper certificate validation and host resolution. Then the pipeline launched a busybox container that simply echoed "test". node(POD_LABEL) is left as-is and will be dynamically replaced with the generated pod label. Pod definitions are declared through podTemplate, which has two main forms:

  • An object-like form with a list of containerTemplates
  • A straight YAML declaration

We've already seen the object-like form, so here's an example of the YAML form:

podTemplate(yaml: '''
    apiVersion: v1
    kind: Pod
    metadata:
      labels: 
        some-label: some-label-value
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - sleep
        args:
        - 99d
    ''') {
    node(POD_LABEL) {
      container('busybox') {
        sh 'echo "test"'
      }
    }
}

The Jenkins documentation has a list of supported properties. There's also a yamlMergeStrategy option if you want to combine both methods. I'll take a quick look at which option might suit you, though in the end, work with whatever is best for you and/or your team.

Object Like Pattern

This is generally what I recommend going with if possible. The main pain point is the lack of pod properties you'd expect for a more involved Kubernetes pod definition. On the other hand, being object-like means the templates can be generated programmatically with ease, which makes them ideal for situations with global pipeline libraries. It's also much more terse, code-wise, than a full YAML pod definition.

YAML Pod Definition

This is a straight Kubernetes Pod definition. The big takeaway is that it can be very verbose, which means more lines of code to scroll through. Putting the definition in a file to be loaded is an option, though that becomes yet another place to look for code. If you're coming from more of a Kubernetes background it might suit you better, and it will be necessary if your Pod definition has strict requirements that the containerTemplate method can't meet.

Conclusion

Much like the previous Kubernetes installment, this one was quite involved. The fun included things like:

  • Trying to get the oidc: prefix to work because colons in YAML are annoying
  • Realizing CoreDNS resolution wasn't going to work out because my Jenkins instance isn't in a Kubernetes cluster (which is more painful to set up than what I have now)
  • Figuring out the JNLP container was a thing and that I'd have to custom build it
  • Trying to make containerd image builds work without the help of Docker
  • Packages for buildkit? It's complicated...
  • Really, keytool? Why don't you just use my system trust store? What on earth is this ten-mile-long command just to import a root CA cert?
  • Figuring out how to make host resolution container properties work
  • Not realizing Kubernetes Roles needed a namespace (I thought it was just the binding)
  • Trying to make cert auth work via kubeconfig before I knew OIDC was a thing (painful)
  • Scrolling through pages of lifecycle-bot-of-doom comments and the additional comments required to unmark stale issues when checking anything Kubernetes related

Well, I know more now, I suppose! I hope you enjoyed this article, and I'll be off thinking about what I want to write next (I probably should get back to my Python series since the Kubernetes learning urge has calmed down a bit).
