Kostis Kapelonis
Run CI/CD pipelines behind your firewall with the Codefresh runner

Continuous Integration/Delivery (CI/CD) is one of the most obvious candidates for moving to a Kubernetes cluster, as you automatically enjoy all the benefits of Kubernetes scalability. In traditional CI solutions, companies employ a fixed set of build nodes that teams must manually monitor and upgrade.

By moving your CI/CD pipelines to Kubernetes and containers you gain two important advantages:

  • You can now dynamically manage your build tools (e.g. Node, Python, Maven) as Docker images instead of installing them by hand on build nodes.
  • Your builds scale automatically as you run more pipelines, without any manual configuration. Kubernetes autoscaling works to your advantage, dynamically allocating resources according to build usage.
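
As an illustration of the first point, a build tool such as Maven becomes just another Docker image referenced in a pipeline step, instead of something installed on a build node. The following Codefresh freestyle step is a minimal sketch (the image tag and command are placeholders):

```yaml
version: '1.0'
steps:
  run_tests:
    title: Running unit tests
    # The build tool comes from the image, not from the build node
    image: maven:3.9-eclipse-temurin-17
    commands:
      - mvn -B test
```

Switching to another tool or version is then just a matter of changing the image tag.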

While most organizations employ Kubernetes clusters only to deploy their production workloads, Kubernetes clusters are actually ideal for your “supporting” services such as artifact management, continuous integration, and security scanning. In fact, if you haven’t switched to Kubernetes and microservices already, using a cluster to support the software lifecycle might prove valuable, since it does not tamper with your production applications that are still running in Virtual Machines.

There are many solutions for running pipelines in a Kubernetes cluster. In this article, we will see how you can use the open-source Codefresh Runner.

How the runner works

The Codefresh Runner is open source and hosted on GitHub. It is part of the Codefresh CLI (which also includes capabilities for managing existing pipelines and builds).

The runner is a native Kubernetes application. You install it on your cluster and it then takes care of all aspects of pipeline launching, running, cleaning up, etc. If you have auto-scaling enabled on your Kubernetes cluster, the runner will automatically scale as you run more pipelines.

You can install the runner on any compliant Kubernetes distribution. The cluster can be public or private (even behind a firewall). In fact, the Codefresh Runner can even be installed on your local Kubernetes cluster if you have one on your workstation. For example, you can easily install the Codefresh runner on your Docker Desktop Kubernetes distribution for a quick demo.

It is important to mention that the Codefresh runner does not need any incoming traffic (it only fetches build information). This means that you don’t need to open any firewall ports or tamper with your NAT settings if you choose to install the Runner on a private Kubernetes cluster.

For more information on the Codefresh Runner see its documentation page.


To install the Runner and start running pipelines on your cluster you will need:

  • A terminal with kubectl access to your cluster. You can use any of the popular cloud solutions such as Google, Azure, AWS, Digital Ocean, etc, or even a local cluster such as Microk8s, Minikube, K3s, etc.
  • A free account with Codefresh in order to access the management UI for the Runner.

For the installation, you can also use the “cloud console” of your cloud provider. If you run any kubectl command (such as kubectl get nodes) and get a valid response, then you are good to go.

Download/Install the Codefresh CLI and authenticate it. You can create an API token from the settings page of your Codefresh account.

Api Token

Once you have the token you can use it on the command line:

codefresh auth create-context --api-key {API_KEY}

That’s it for the authentication portion.

Quick start with the installation wizard

The Codefresh Runner has multiple installation methods, but the simplest one is by using the command-line wizard. To start the wizard execute:

codefresh runner init

You will be asked a series of questions (where in most cases you can simply accept the defaults).

Installation wizard

The installation wizard will then take care of everything. It will install the runner components in your cluster and set up authentication with the Codefresh UI.

Verifying the installation of the runner

By default, the quick start wizard will also create a sample pipeline and provide the URL to visit to see it running in the Codefresh UI.

Demo pipeline

If everything works ok, you will see the message “hello codefresh runner” in the build console.

You can also inspect the status of the runner components using standard Kubernetes tools. By default, all components of the runner are in the “codefresh” namespace.

kostis@ubuntu18-desktop:~$ kubectl get pods -n codefresh
NAME                                              READY   STATUS    RESTARTS   AGE
dind-5ef0a18bd2f8f459f2d32c78                     1/1     Running   0          37s
dind-lv-monitor-runner-7pnf2                      1/1     Running   0          4d22h
dind-lv-monitor-runner-lf746                      1/1     Running   0          4d22h
dind-lv-monitor-runner-xc8lp                      1/1     Running   0          4d22h
dind-volume-provisioner-runner-64994bbb84-fsr6x   1/1     Running   0          4d22h
engine-5ef0a18bd2f8f459f2d32c78                   1/1     Running   0          37s
monitor-697dd5db6f-72s6g                          1/1     Running   0          4d22h
runner-5d549f8bc5-pf9mw                           1/1     Running   0          4d22h

Some of the components are permanent and some will be launched dynamically only when pipelines are actually running.

Creating your own pipeline

With the demo pipeline up and running, it is now time to create your own pipeline. Pipelines are described in YAML and are composed of different steps. You can edit the YAML definition directly in the Codefresh UI, or just store it in a Git repository (the same one that holds your application or a different one).

Here is a very simple pipeline with 3 steps:

  1. First, we check out the code with a clone step
  2. Then we build a Docker image and push it to a registry
  3. Finally, we deploy it to a cluster using Helm

Kubernetes deployment pipeline

And here is the pipeline definition:

version: '1.0'
stages:
  - prepare
  - build
  - deploy
steps:
  clone:
    title: Cloning main repository...
    stage: prepare
    type: git-clone
    arguments:
      repo: codefresh-contrib/helm-sample-app
      revision: master
      git: github
  build:
    title: Building Docker Image
    stage: build
    type: build
    working_directory: ./helm-sample-app
    arguments:
      image_name: helm-sample-app-go
      tag: multi-stage
      dockerfile: Dockerfile
  deploy:
    title: Deploying Helm Chart
    type: helm
    stage: deploy
    working_directory: ./helm-sample-app
    arguments:
      action: install
      chart_name: charts/helm-example
      release_name: my-go-chart-prod
      helm_version: 3.0.2
      kube_context: my-demo-k8s-cluster
      custom_values:
        - 'buildID=${{CF_BUILD_ID}}'
        - 'image_pullPolicy=Always'
        - 'image_tag=multi-stage'
        - 'replicaCount=3'
        - ''

You can find a more detailed explanation in the Helm example.

You should also check your pipeline settings and make sure that it is assigned to the cluster that has your runner (because it is possible to have multiple runners in multiple clusters).

Assign pipeline to specific cluster

This means that you can assign a specific pipeline to use a specific runner.

Connecting to your private services

One of the advantages of the Codefresh runner is easy access to your internal services without compromising security. The runner can connect to Git repositories, Docker registries, Kubernetes clusters, and other resources that are also behind the firewall.

To enable these services to be used in pipelines, you need to visit the integrations screen in your Codefresh account settings.

External integrations

Here you can centrally set up the configuration of each resource. Each resource will then get a unique name/identifier that you can use in your YAML pipelines. For example, you can add multiple internal/external registries:

External registries

In the example above, I have 3 registries connected to the runner. Now I can simply mention them in the pipeline by name.

version: '1.0'
steps:
  build_image:
    title: Building My Docker image
    type: build
    image_name: my-app-image
    dockerfile: my-custom.Dockerfile
    tag: 1.0.1
    registry: dockerhub

This build step will build a custom Dockerfile, tag the image as my-app-image:1.0.1, and then push the image to Docker Hub.

Notice the complete lack of Docker login/tag/push commands. They are all abstracted away and your pipeline is as simple as possible.
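
For comparison, here is roughly what that single build step abstracts away if you were scripting it by hand (an illustrative sketch only; the registry account name and credential variables are assumptions):

```shell
# Illustrative equivalent of the build step above (account name is hypothetical)
docker login -u "$DOCKERHUB_USER" --password-stdin <<< "$DOCKERHUB_TOKEN"
docker build -f my-custom.Dockerfile -t my-account/my-app-image:1.0.1 .
docker push my-account/my-app-image:1.0.1
```

None of this appears in the pipeline, and the credentials never leave the integrations screen.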

You can follow the same pattern with the other integrations (e.g. Kubernetes clusters and Helm charts). Notice that the Kubernetes cluster that is hosting the runner is also available by name so you can deploy applications to it in a declarative way.
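
For example, a Helm deploy step can target a private cluster simply by referencing its integration name (the names below are assumptions, reusing those from the earlier example):

```yaml
version: '1.0'
steps:
  deploy_internal:
    title: Deploying to the runner's own cluster
    type: helm
    arguments:
      action: install
      chart_name: charts/helm-example
      release_name: my-app-internal
      helm_version: 3.0.2
      # The integration name of the private cluster, as set in the UI
      kube_context: my-demo-k8s-cluster
```

No kubeconfig files or cluster credentials appear anywhere in the pipeline definition.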

For more information on YAML pipelines check the example directory. You can also write your own pipeline steps using plain Docker images.
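
A custom step based on a plain Docker image might look like the following minimal sketch (the image and command are placeholders):

```yaml
version: '1.0'
steps:
  my_custom_step:
    title: Running my own tool
    image: alpine:3.18
    commands:
      - echo "any command available inside the image can run here"
```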


Cover photo from Unsplash.
