Jan Schulte for Outshift By Cisco

KubeClarity in Action - Image Scanning as part of your CI workflow

Imagine this: While you drink your morning coffee, you see this in the news:

News headline: Heartbleed OpenSSL vulnerability detected

You realize that instead of working on new features,
today will be about damage control and patching production applications.
While a vulnerability like Heartbleed received a lot of attention, many others are similar in severity but never get the same publicity.
That raises the question: "How can you enable your organization to ship more secure applications by default?"
In a previous post, I talked about KubeClarity and how it can help you ship more secure Docker images.
Today, we explore how you can integrate KubeClarity into your CI workflow to automate vulnerability scanning.

The Setup

The setup for this blog post consists of:

  • A GitHub repository containing a Dockerized application
  • A Kubernetes cluster running KubeClarity
  • A self-hosted GitHub Actions runner deployed into that cluster

We're using a self-hosted runner because, as of the time of publishing this post, KubeClarity does not have an authentication mechanism yet. We therefore host the GitHub Actions runner in the same cluster as KubeClarity so the CI job can reach it without exposing KubeClarity publicly.
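Because the runner lives in the same cluster, the CI job can reach KubeClarity through its in-cluster service DNS name rather than a public endpoint. As a minimal sketch, this is how that name is composed, assuming the default release name and namespace used in the Helm install later in this post:

```shell
# Compose the in-cluster DNS name (<service>.<namespace>.svc.cluster.local:<port>)
# that the CI job uses to reach the KubeClarity backend.
SERVICE="kubeclarity-kubeclarity"   # service created by the KubeClarity Helm chart
NAMESPACE="kubeclarity"             # namespace used when installing the chart
PORT=8080                           # KubeClarity backend port
BACKEND_HOST="${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "${BACKEND_HOST}"
```

This is exactly the value the workflow later sets as the `BACKEND_HOST` environment variable.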

Preparation

GitHub Self-hosted Runner

To kick things off, we're using actions-runner-controller to install the runner, following its quickstart guide: https://github.com/actions/actions-runner-controller/blob/master/docs/quickstart.md

If not present, let's start by installing cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.2/cert-manager.yaml

Next, generate a Personal Access Token for the runner to authenticate with GitHub.

  • Log in to your GitHub account, go to Settings → Developer settings → Personal access tokens, and click Generate new token.
  • Provide a name and select the repo scope.
  • Click Generate token.

Next, install the controller. The controller ensures that there are always enough runners available.

In your terminal, run the following:

helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller
helm upgrade --install --namespace actions-runner-system --create-namespace \
  --set=authSecret.create=true \
  --set=authSecret.github_token="<PASTE YOUR TOKEN HERE>" \
  --wait actions-runner-controller actions-runner-controller/actions-runner-controller

(Make sure to paste in your token before you run the snippet.)
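If you'd rather not put the token on the command line (where it can end up in your shell history), the actions-runner-controller docs also describe creating the authentication secret yourself and setting `authSecret.create=false` instead. A sketch, assuming the same `actions-runner-system` namespace; verify the expected secret name against the chart's values for your version:

```shell
# Create the controller's GitHub auth secret manually instead of via --set.
# "controller-manager" is the secret name actions-runner-controller looks up by default.
kubectl create secret generic controller-manager \
  -n actions-runner-system \
  --from-literal=github_token="<PASTE YOUR TOKEN HERE>"
```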

With the controller in place, we deploy the runner. The runner is tied to a specific GitHub repository and deployed via a separate YAML manifest.
Create a file called runnerdeployment.yaml and paste the following contents into it:

apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  replicas: 1
  template:
    spec:
      repository: schultyy/kubeclarity-ci

Then apply the file:

kubectl apply -f runnerdeployment.yaml
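A single static replica is enough for this post. If you later need the runner pool to grow with CI load, actions-runner-controller also ships a HorizontalRunnerAutoscaler resource. The sketch below is illustrative only; the metric and thresholds are assumptions you should check against the controller's docs for your version:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runnerdeploy-autoscaler
spec:
  scaleTargetRef:
    name: example-runnerdeploy   # the RunnerDeployment defined above
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: "0.75"   # scale up when >75% of runners are busy
      scaleDownThreshold: "0.25" # scale down when <25% are busy
```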

That concludes our GitHub Actions Runner deployment! 🎉

KubeClarity

KubeClarity is a tool that helps with the generation of the Software Bill of Materials (SBOM), as well as detecting vulnerabilities in container images.

We install KubeClarity using Helm.

  1. Add the Helm repo:
helm repo add kubeclarity https://openclarity.github.io/kubeclarity
  2. Save the KubeClarity default chart values:
helm show values kubeclarity/kubeclarity > values.yaml
  3. Inspect values.yaml and make any changes you need.
  4. Deploy KubeClarity with Helm:
helm install --values values.yaml --create-namespace kubeclarity kubeclarity/kubeclarity -n kubeclarity
  5. To access the dashboard, use a port-forward:
kubectl port-forward -n kubeclarity svc/kubeclarity-kubeclarity 9999:8080
  6. Open the dashboard in your browser: http://localhost:9999

Create a new application

KubeClarity organizes vulnerability and scan information by application. Before we start scanning, we need to create a new application.
Visit http://localhost:9999/applications and click New application.

Fill out the dialog:

  • Name: Your application name
  • Type: Pod
  • Labels: app=<your application name>

and click Create application.

New Application Dialogue

Next, copy the ID from the list view:

Application List View

We will need this ID to set up our CI job.

CI Job

You might already have a CI job configured for your application. The following workflow runs alongside any existing workflows.

Please note:

  • Make sure to fill in the correct value for KUBECLARITY_APP_ID
  • Create a new GitHub Personal Access Token with the write:packages scope. Configure it as GH_TOKEN in Actions secrets and variables in your repository settings.

In .github/workflows, create buildimage.yml:

name: Build Docker Image

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

env:
    BACKEND_HOST: kubeclarity-kubeclarity.kubeclarity.svc.cluster.local:8080
    BACKEND_DISABLE_TLS: true
    KUBECLARITY_APP_ID: <PASTE YOUR APP ID HERE>

jobs:
  buildimage:
    runs-on: self-hosted
    steps:
      -
        name: Checkout
        uses: actions/checkout@v3
      -
        name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to the Container registry
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GH_TOKEN }}
      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: ghcr.io/schultyy/kubeclarity-ci
          tags: type=sha
      - name: Build and push Docker image
        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
      - name: Install KubeClarity
        run: |
          curl -LO https://github.com/openclarity/kubeclarity/releases/download/v2.19.0/kubeclarity-cli-2.19.0-linux-amd64.tar.gz && tar -xzvf kubeclarity-cli-2.19.0-linux-amd64.tar.gz
          sudo mv kubeclarity-cli /usr/local/bin/kubeclarity-cli
      - name: Scan the image
        run: kubeclarity-cli scan ${{ steps.meta.outputs.tags }} --cis-docker-benchmark-scan --application-id $KUBECLARITY_APP_ID -e
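One detail worth calling out: with `tags: type=sha`, the metadata action tags the pushed image with a short commit SHA rather than a branch name. A sketch of the tag it derives (the `sha-` prefix and 7-character short SHA are the action's defaults; the commit SHA below is an example value):

```shell
# Reproduce the image tag docker/metadata-action derives for `type=sha`:
# <image>:sha-<short commit sha>
GITHUB_SHA="0123456789abcdef0123456789abcdef01234567"  # example commit SHA
IMAGE="ghcr.io/schultyy/kubeclarity-ci"
TAG="${IMAGE}:sha-${GITHUB_SHA:0:7}"
echo "${TAG}"   # ghcr.io/schultyy/kubeclarity-ci:sha-0123456
```

This is why the scan step reuses `${{ steps.meta.outputs.tags }}` rather than hard-coding a branch-named tag.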

The first part of this workflow consists of the usual steps to build and push a new Docker image to GitHub's container registry.
What's different, however, is the last few steps, where we run KubeClarity on the newly built image:

    # ....
      - name: Install KubeClarity
        run: |
          curl -LO https://github.com/openclarity/kubeclarity/releases/download/v2.19.0/kubeclarity-cli-2.19.0-linux-amd64.tar.gz && tar -xzvf kubeclarity-cli-2.19.0-linux-amd64.tar.gz
          sudo mv kubeclarity-cli /usr/local/bin/kubeclarity-cli
      - name: Scan the image
        run: kubeclarity-cli scan ${{ steps.meta.outputs.tags }} --cis-docker-benchmark-scan --application-id $KUBECLARITY_APP_ID -e

We install the KubeClarity CLI from its GitHub releases into /usr/local/bin.
Next, we run it to scan the newly built image. The CLI has a few different modes it can run in; here, we analyze an existing Docker image. Note that we scan the image straight from the registry, in this case ghcr.io.
We also specify --cis-docker-benchmark-scan to get feedback on image-building best practices.
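The scan above does everything in one shot. Based on the KubeClarity README, the CLI can also split the work into two phases, generating an SBOM first and scanning that, which is handy if you want to archive the SBOM as a build artifact. A hedged sketch (the image tag is a placeholder; verify flags against your CLI version):

```shell
IMAGE_TAG="ghcr.io/schultyy/kubeclarity-ci:sha-0123456"  # placeholder tag

# Phase 1: generate an SBOM for the image and write it to a file.
kubeclarity-cli analyze "$IMAGE_TAG" --input-type image -o image.sbom

# Phase 2: scan the SBOM and export the results to the KubeClarity backend.
kubeclarity-cli scan image.sbom --input-type sbom \
  --application-id "$KUBECLARITY_APP_ID" -e
```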

KubeClarity also needs a few environment variables set:

env:
    BACKEND_HOST: kubeclarity-kubeclarity.kubeclarity.svc.cluster.local:8080
    BACKEND_DISABLE_TLS: true
    KUBECLARITY_APP_ID: <PASTE YOUR APP ID HERE>

With these steps in place, every merge into the main branch automatically triggers a scan of the newly built Docker image.

See the results

To test everything, push a new commit. Once the CI job has finished, we can explore the scan results.
Open http://localhost:9999/applicationResources and click on the resource with the latest built image hash.

Application Resources Overview

On the detail view, we see scan details, such as how many vulnerabilities the current Docker image contains. We can also dive deeper into the CIS Docker Benchmark results.

Application Resources Detail View

CIS Benchmark Review

What's next?

KubeClarity allows you to gather information about new Docker images as they get built.
The dashboard shows you at a glance how many severe vulnerabilities are currently present.

With this workflow in place, you are one step closer to shipping less vulnerable code with less effort.

How do you integrate KubeClarity into your CI/CD workflow? Comment below!
