Using Kaniko to Build and Publish Container Images with GitHub Actions on GitHub Self-hosted Runners

Creating container images using Docker in workflows that run on self-hosted runners deployed on a Kubernetes cluster can be tricky, due to the security concerns around Docker-in-Docker (dind) and its need for privileged mode.
To address this, you can use Kaniko, a tool that builds container images from a Dockerfile inside a container or Kubernetes cluster. Unlike Docker-in-Docker, Kaniko doesn't rely on a Docker daemon and executes each Dockerfile command in userspace, which makes it suitable for environments, like a standard Kubernetes cluster, where running a Docker daemon securely is challenging.
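At its core, Kaniko is a single executor binary that takes a build context, a Dockerfile, and a destination registry, then builds and pushes in one step. A minimal sketch of the invocation (the paths and registry below are placeholders):


# run from inside the gcr.io/kaniko-project/executor image
/kaniko/executor \
  --dockerfile=./Dockerfile \
  --context=dir:///workspace \
  --destination=registry.example.com/myapp:latest
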
To get started with Kaniko for your container image builds, follow the steps below to set up a GitHub self-hosted runner on a Kubernetes cluster. If you're already familiar with GitHub self-hosted runners, you can skip ahead to the installation section.

GitHub Self-hosted Runner

GitHub allows you to host your own runners to run your GitHub Actions workflows. These are called self-hosted runners. With self-hosted runners, you can customise your runner environments with the software tools your applications need, install software available on your local network, and more. The runners can be deployed on on-prem servers, virtual machines, or even in a container. Self-hosted runners can be added at different GitHub management hierarchies:

  • Repository-level runners are dedicated to a single repository (this is the level used in this post).
  • Organization-level runners can process jobs for multiple repositories in an organization.
  • Enterprise-level runners can be assigned to multiple organizations in an enterprise account.

Installation of Self-hosted Runners

To set up the self-hosted runner, the Actions Runner Controller (ARC) and a runner scale set will be installed via Helm. This post uses Azure Kubernetes Service and the ARC that is officially maintained by GitHub. There is another ARC that is maintained by the community. You can follow the discussion where GitHub adopted the ARC project as a fully supported GitHub product here

  • Actions Runner Controller (ARC): a Kubernetes operator that orchestrates and scales self-hosted runners for GitHub Actions.
  • Runner scale set: a group of homogeneous runners that can be assigned jobs from GitHub Actions.

ARC Installation



# install ARC
NAMESPACE="arc-systems"
helm install arc \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
    --version "0.8.1"




# check your installation by running
helm list -n "${NAMESPACE}"




NAME  NAMESPACE   REVISION  UPDATED    STATUS   CHART
arc   arc-systems 1         2024-02-08 deployed gha-runner-scale-set-controller-0.8.1




# check the installed manager pod by running
kubectl get pods -n arc-systems



If the installation was successful, you should see two pods: a controller pod and a listener pod. (The listener pod appears once the runner scale set below has been installed.)



NAME                          READY  STATUS
arc-gha-rs-controller-xxx-xx   1/1   Running
arc-runner-set-xxx-listener    1/1   Running


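If something looks off, two quick checks can help (the deployment name below assumes the arc release name used above):


# ARC registers its CRDs under the actions.github.com group
kubectl get crds | grep actions.github.com

# tail the controller logs for reconcile errors
kubectl logs -n arc-systems deployment/arc-gha-rs-controller
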

Runner scale sets Installation
NOTE:

  • The INSTALLATION_NAME will be the name referenced as the value of runs-on in the workflow file.
  • The Helm chart version used for the runner scale set installation should match the version used in the ARC installation above.
  • To enable the runner to authenticate to GitHub, a GitHub App or a personal access token (classic) is needed. For this post, a PAT is used. The PAT is scoped to the repository, since the runners are installed at the repository level. You can change the scope to organisation or enterprise as you deem fit.
  • Kaniko is meant to run as an image; to do this we will use the running a job within a container workflow feature. For this container workflow to work correctly, some modifications have to be made to the default gha-runner-scale-set values file: the containerMode type needs to be set to kubernetes. With this configuration, ARC uses runner container hooks to create a new pod in the same namespace to run the service, container job, or action.
  • Additionally, kubernetes mode relies on persistent volumes (PVs) to share job details between the runner pod and the container job pod. This means we need a solution that dynamically provisions a PV on demand. We can do this with a Kubernetes StorageClass and a PersistentVolumeClaim (see the quick check after this list). All of these modifications are added to the gha-runner-scale-set values file.
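
Before applying the values file below, it's worth confirming which storage classes your cluster offers for dynamic provisioning; the managed-csi class referenced below ships with AKS:


# list the storage classes available for dynamic PV provisioning
kubectl get storageclass
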


# gha-runner-scale-set-value.yml

githubConfigUrl: "https://github.com/myorg/myrepo"
githubConfigSecret:
  github_token: "my-PAT"

## maxRunners is the max number of runners the autoscaling runner set will scale up to.
maxRunners: 5

## minRunners is the min number of idle runners. The target number of runners created will be
## calculated as a sum of minRunners and the number of jobs assigned to the scale set.
minRunners: 1

containerMode:
  type: "kubernetes"  ## type can be set to dind or kubernetes
  ## the following is required when containerMode.type=kubernetes
  kubernetesModeWorkVolumeClaim:
    accessModes: ["ReadWriteOnce"]
    # For local testing, use https://github.com/openebs/dynamic-localpv-provisioner/blob/develop/docs/quickstart.md to provide dynamic provision volume with storageClassName: openebs-hostpath
    storageClassName: "managed-csi" # for AKS
    resources:
      requests:
        storage: 2Gi

template:  
  spec:
    securityContext:
      fsGroup: 123 ## needed to resolve permission issues with mounted volume. https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors#error-access-to-the-path-homerunner_work_tool-is-denied
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
        - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
          value: "false"  ## To allow jobs without a job container to run, set ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER to false on your runner container. This instructs the runner to disable this check.
    volumes:
      - name: work
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: [ "ReadWriteOnce" ]
              storageClassName: "managed-csi" # for AKS
              resources:
                requests:
                  storage: 2Gi



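Note that the PAT sits in the values file above in plain text. If you'd rather keep it out of the file, ARC also accepts the name of a pre-created Kubernetes secret as the value of githubConfigSecret; a sketch (the secret name github-auth is arbitrary):


# the namespace must exist before the secret can be created in it
kubectl create namespace arc-runners
kubectl create secret generic github-auth \
    --namespace=arc-runners \
    --from-literal=github_token='my-PAT'


# in gha-runner-scale-set-value.yml, reference the secret by name instead
githubConfigSecret: github-auth
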

This custom values file is then passed to the Helm installation command for the runner scale set:



INSTALLATION_NAME="arc-runner-set"
NAMESPACE="arc-runners"
helm install "${INSTALLATION_NAME}" \
    --namespace "${NAMESPACE}" \
    --create-namespace \
    -f ./gha-runner-scale-set-value.yml \
    oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
    --version "0.8.1"




# check the installed runner pod by running
kubectl get pods -n arc-runners




NAME                          READY  STATUS
arc-runner-set-xx-runner-xx   1/1   Running



Build Workflow
The workflow builds and pushes the image to Azure Container Registry, GitHub Container Registry, and Docker Hub. The Dockerfile is shown below:



FROM nginx:latest

COPY app/index.html /usr/share/nginx/html


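The Dockerfile above assumes an app/index.html exists in the repository; any placeholder page will do, for example:


# create a minimal page for the image to serve (contents are arbitrary)
mkdir -p app
cat <<'EOF' > app/index.html
<h1>Hello from Kaniko on a self-hosted runner!</h1>
EOF
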


# build-with-kaniko.yml

name: Build with kaniko

on:
  push:
    branches: [ "*" ]
    paths:
      - "app/**"
      - ".github/workflows/build-with-kaniko.yml"

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

env:
  KANIKO_CACHE_ARGS: "--cache=true --cache-copy-layers=true --cache-ttl=24h"

jobs:
  build-to-ghcr:
    runs-on: arc-runner-set # uses self-hosted runner scale set
    container:
      image: gcr.io/kaniko-project/executor:v1.20.0-debug # the kaniko image
    permissions:
      contents: read # read the repository
      packages: write # to push to GHCR, omit for other container registry. https://docs.github.com/en/packages/managing-github-packages-using-github-actions-workflows/publishing-and-installing-a-package-with-github-actions#publishing-a-package-using-an-action

    steps:
      - name: Build and Push Image to GHCR with kaniko
        run: |
          cat <<EOF > /kaniko/.docker/config.json
          {
            "auths": {
              "ghcr.io": {
                "auth": "$(echo -n "$GIT_USERNAME:$GIT_PASSWORD" | base64 -w0)"
              }
            }
          }
          EOF

          /kaniko/executor --dockerfile="./app/Dockerfile" \
            --context="${{ github.repositoryUrl }}#${{ github.ref }}#${{ github.sha }}"  \
            --destination="$GH_REGISTRY/$IMAGE_NAME:$(echo ${GITHUB_SHA} | head  -c 7)" \
            ${{ env.KANIKO_CACHE_ARGS }} \
            --push-retry 5 
        env: # needed to authenticate to github and download the repo
          GIT_USERNAME: ${{ github.actor }} 
          GIT_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
          GH_REGISTRY: "ghcr.io"
          IMAGE_NAME: "${{ github.repository }}/nginx"

  build-to-acr:
    runs-on: arc-runner-set # uses self-hosted runner scale set
    container:
      image: gcr.io/kaniko-project/executor:v1.20.0-debug # the kaniko image
    permissions:
      contents: read # read the repository

    steps:
      - name: Build and Push Image to ACR with kaniko
        run: |
          cat <<EOF > /kaniko/.docker/config.json
          { "credHelpers": { "${{ env.ACR_URL }}": "acr-env" } }
          EOF

          /kaniko/executor --dockerfile="./app/Dockerfile" \
            --context="${{ github.repositoryUrl }}#${{ github.ref }}#${{ github.sha }}"  \
            --destination="$ACR_URL/<namespace>/nginx:$(echo ${GITHUB_SHA} | head  -c 7)" \
            ${{ env.KANIKO_CACHE_ARGS }} \
            --push-retry 5 
        env:  # needed to auth to github and download the repo and to authenticate to ACR via Azure Service Principal
          AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
          AZURE_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}
          AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
          GIT_USERNAME: ${{ github.actor }}
          GIT_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
          ACR_URL: "myacr.azurecr.io"

  build-to-docker-hub:
    runs-on: arc-runner-set # uses self-hosted runner scale set
    container:
      image: gcr.io/kaniko-project/executor:v1.20.0-debug 
    permissions:
      contents: read # read the repository

    steps:
      - name: Build and Push Image to docker registry with kaniko
        run: |
          cat <<EOF > /kaniko/.docker/config.json
          {
            "auths": {
              "https://index.docker.io/v1/": {
                "auth": "$(echo -n "${{ secrets.DOCKER_USERNAME }}:${{ secrets.DOCKER_PASSWORD }}" | base64 )"
              }
            }
          }
          EOF

          /kaniko/executor --dockerfile="./app/Dockerfile" \
            --context="${{ github.repositoryUrl }}#${{ github.ref }}#${{ github.sha }}"  \
            --destination="$DOCKER_IMAGE_NAME:$(echo ${GITHUB_SHA} | head  -c 7)" \
            ${{ env.KANIKO_CACHE_ARGS }} \
            --push-retry 5 
        env: 
          GIT_USERNAME: ${{ github.actor }}
          GIT_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
          DOCKER_IMAGE_NAME: "<docker-username>/nginx"



Workflow Build

[Screenshot: the workflow run in the GitHub Actions tab]
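
Once the run goes green, you can verify the push by pulling one of the images by its short-SHA tag (the ghcr.io path below mirrors the myorg/myrepo placeholder used earlier):


# the workflow tags images with the first 7 characters of the commit SHA
TAG=$(git rev-parse --short=7 HEAD)
docker pull "ghcr.io/myorg/myrepo/nginx:${TAG}"
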

Kaniko Args

  • --dockerfile: path to the Dockerfile to be built (default "Dockerfile").
  • --context: the location of the build context, in this case the GitHub repo.
  • --destination: the registry and repository to push the built image to.
  • --cache: opt in to caching with Kaniko.
  • --cache-copy-layers: set this flag to cache copy layers.
  • --cache-ttl: cache timeout in hours; defaults to two weeks.
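
Put together, the flags used in the workflow expand to something like this (repo, branch, and tag are illustrative placeholders):


/kaniko/executor \
  --dockerfile=./app/Dockerfile \
  --context=git://github.com/myorg/myrepo.git#refs/heads/main#<commit-sha> \
  --destination=ghcr.io/myorg/myrepo/nginx:abc1234 \
  --cache=true \
  --cache-copy-layers=true \
  --cache-ttl=24h \
  --push-retry 5
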


Top comments (1)

Stepan Kuksenko • Edited

The fsGroup should not be 123 but should match the group of the user in the runner container, which is 1001:

runner@k8s-default-29gz4-runner-lttpm:~/_work$ cat /etc/passwd | grep runner
runner:x:1001:1001:,,,:/home/runner:/bin/bash
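
Applying this suggestion, the securityContext in gha-runner-scale-set-value.yml would become:


template:
  spec:
    securityContext:
      fsGroup: 1001  # gid of the runner user in the actions-runner image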