Raphael Habereder

How to build a basic Docker CI/CD Pipeline with Jenkins

In the last post we talked about the Dos and Don'ts of containers.

We established the following example hierarchy of images:

minimal-baseimage (for example ubuntu, alpine, centos)
|__ nginx-baseimage
|  |__ awesome-website-container
|__ quarkus-baseimage
|  |__ awesome-java-microservice
|__ python-baseimage
   |__ awesome-flask-webapp

Maybe you dabbled a bit and built a few containers.
A few weeks pass.
You are sick of updating everything yourself.
This needs to be automated in some way, you want to focus on the real thing, your awesome apps!

So let's build a simple pipeline with Jenkins that takes care of patching your Images, and later on, maybe even deploy them for you! Automagically.


Install Jenkins with your favorite package manager. On Debian/Ubuntu (the key and repository URLs below are the standard Jenkins ones; verify them against the current install docs):

wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > \
    /etc/apt/sources.list.d/jenkins.list'
sudo apt-get update
sudo apt-get install jenkins openjdk-11-jdk-headless

Or if you are on MacOS:

brew cask install homebrew/cask-versions/adoptopenjdk8
brew install jenkins-lts
brew services start jenkins-lts

If you want to run Jenkins in Docker, it's going to be a bit more complicated, but doable. Making this work on macOS too just hastened my aging process by about 200 years.

FROM alpine

USER root
RUN apk add --no-cache \
  bash \
  coreutils \
  curl \
  git \
  git-lfs \
  openssh-client \
  tini \
  ttf-dejavu \
  tzdata \
  unzip \
  openjdk11-jdk \
  shadow

# Install Gosu (version and download/keyserver URLs follow the official gosu install instructions)
ENV GOSU_VERSION 1.12
RUN set -eux; \
    apk add --no-cache --virtual .gosu-deps \
        ca-certificates \
        dpkg \
        gnupg \
    ; \
    dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
    wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
# verify the signature
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
    gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
# clean up fetch dependencies
    apk del --no-network .gosu-deps; \
    chmod +x /usr/local/bin/gosu; \
# verify that the binary works
    gosu --version; \
    gosu nobody true

ARG user=jenkins
ARG group=jenkins
ARG uid=1001
ARG gid=1001
ARG http_port=8080
ARG agent_port=50000
ARG JENKINS_HOME=/var/jenkins_home
ARG REF=/usr/share/jenkins/ref


# Jenkins is run with user `jenkins`, uid = 1001
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN mkdir -p $JENKINS_HOME \
  && chown ${uid}:${gid} $JENKINS_HOME \
  && addgroup -g ${gid} ${group} \
  && adduser -h "$JENKINS_HOME" -u ${uid} -G ${group} -s /bin/bash -D ${user} 

# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades

# $REF (defaults to `/usr/share/jenkins/ref/`) contains all the reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config files with your custom Jenkins Docker image.
RUN mkdir -p ${REF}/init.groovy.d

# jenkins version being bundled in this docker image
# (pass it via --build-arg JENKINS_VERSION=...; it must match the checksum below)
ARG JENKINS_VERSION

# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=6c95721b90272949ed8802cab8a84d7429306f72b180c5babc33f5b073e1c47c

# Can be used to customize where jenkins.war gets downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war

# Could use ADD, but ADD neither checks the Last-Modified header nor allows validating a checksum
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
  && echo "${JENKINS_SHA}  /usr/share/jenkins/jenkins.war" | sha256sum -c -

RUN chown -R ${user} "$JENKINS_HOME" "$REF"

# For main web interface:
EXPOSE ${http_port}

# Will be used by attached agents:
EXPOSE ${agent_port}


# Download the runtime helper scripts from the official jenkinsci/docker repo
# (script names assumed from that repo; pin a release tag instead of master for reproducible builds)
RUN curl -fsSL https://raw.githubusercontent.com/jenkinsci/docker/master/jenkins-support -o /usr/local/bin/jenkins-support && \
    curl -fsSL https://raw.githubusercontent.com/jenkinsci/docker/master/jenkins.sh -o /usr/local/bin/jenkins.sh && \
    curl -fsSL https://raw.githubusercontent.com/jenkinsci/docker/master/install-plugins.sh -o /usr/local/bin/install-plugins.sh

# Our custom entrypoint script from the next listing (file name assumed)
COPY --chown=${user} entrypoint.sh /entrypoint.sh

RUN chmod +x /usr/local/bin/jenkins-support /usr/local/bin/jenkins.sh /usr/local/bin/install-plugins.sh /entrypoint.sh

# Stay root; the entrypoint drops down to user jenkins via gosu
ENTRYPOINT ["/entrypoint.sh"]


#!/bin/sh
# entrypoint.sh
# Stolen from: Brandon Mitchell <>
# License: MIT
# Source Repo:

set -x

# configure script to call original entrypoint (jenkins.sh from the image above)
set -- tini -- /usr/local/bin/jenkins.sh "$@"

# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host.
if [ "$(id -u)" = "0" ]; then
  # get gid of docker socket file
  SOCK_DOCKER_GID=$(ls -ng /var/run/docker.sock | cut -f3 -d' ')

  # get group of docker inside container
  CUR_DOCKER_GID=$(getent group docker | cut -f3 -d: || true)

  # if they don't match, adjust
  if [ -n "$SOCK_DOCKER_GID" ] && [ "$SOCK_DOCKER_GID" != "$CUR_DOCKER_GID" ]; then
    groupmod -g "${SOCK_DOCKER_GID}" -o docker
  fi
  if ! groups jenkins | grep -q docker; then
    usermod -aG docker jenkins
  fi

  # If you run on MacOS
  if ! groups jenkins | grep -q staff; then
    usermod -aG staff jenkins
  fi

  # Add call to gosu to drop from root user to jenkins user
  # when running original entrypoint
  set -- gosu jenkins "$@"
fi

# replace the current pid 1 with original entrypoint
exec "$@"

Build and run our Jenkins image:

# We need to create a directory for Jenkins to save its data to.
# Since the container runs with UID:GID 1001:1001,
# the folder also needs the correct permissions set.
mkdir $HOME/jenkins && chown 1001:1001 $HOME/jenkins

docker build -t myjenkins .
docker run -d \
           -v $HOME/jenkins:/var/jenkins_home \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -p 8080:8080 \
           --name jenkins \
           --restart unless-stopped \
           myjenkins

If you run a different system (sorry, I can't cover them all, it would take me days :( ), there is probably a guide out there for you that is just as simple as these few lines.

Open the Jenkins UI in your awesome browser of choice and enter the password you can find in the location that Jenkins tells you.

If it's not there, these places are usually a safe bet:

cat $HOME/jenkins/secrets/initialAdminPassword
docker logs jenkins

Hammer it in and go on to install the suggested plugins. Depending on your machine, it's now your final chance to grab a cup of coffee, before we dive in.

Plugins, plugins, plugins

Next, we need some awesome plugins.
Just go via the Jenkins GUI -> Manage Jenkins -> Manage Plugins
Select the tab "Available", put Docker into the filter in the upper right corner and select the following Plugins:

  • Docker Commons
  • Docker Pipeline
  • Docker API
  • Docker
  • docker-build-step

Install without restart and wait a bit.
These will set you up fine for your first simple docker Pipelines.

Now we have to configure Jenkins to find the Docker runtime to build our images with.

This can be done via Manage Jenkins -> Global Tool Configuration. After installing the plugins, you get the new section "Docker" there, where you can add Docker installations. So go ahead and push that button.
Give it a name, I chose "Docker CE 19.03", and leave the installation root empty; Jenkins should find docker on the $PATH itself.


On to the next step, let's create a pipeline.

Via Jenkins -> New Item you'll get to a page that will let you specify which kind of item you want to create. Select Pipeline, give the puppy a nice name and hit OK.

Scroll down until you see this:
Jenkins Pipeline Input

Now let's get this show on the road!

node {
    stage('Clone repository') {
        // Missing credentials can be added via the UI.
        // Look at the bottom of the box for a link called "Pipeline Syntax".
        // If you don't have much Jenkins experience,
        // you can generate pipeline snippets there with a few dropdowns and textboxes.
        git credentialsId: 'git', url: '<your git url>'
    }

    stage('Build image') {
        // If you have multiple Dockerfiles in your project, use this:
        // app = docker.build("my-ubuntu-base", "-f Dockerfile.base .")

        app = docker.build("my-ubuntu-base")
    }

    stage('Test image') {
        app.inside {
            sh 'echo "Tests passed"'
        }
    }

    stage('Push image') {
        docker.withRegistry('http://registry.local:5000') {
            app.push("latest")
        }
    }
}

This file can be copied easily, since you don't have to change a lot. If you want to go the extra mile, make it a parameterized job and put the variables in there, to be filled via REST for example.
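If you do go the parameterized route, the job can then be fed the image name over Jenkins' standard remote-build REST endpoint. A sketch of what the call would look like; the host, job name, parameter name, and credentials are all placeholders you'd replace with your own:

```shell
# All names here are hypothetical: adjust host, job and parameter to your setup.
JENKINS=http://localhost:8080
JOB=base-image-build
PARAMS="IMAGE_NAME=my-ubuntu-base"
TRIGGER_URL="$JENKINS/job/$JOB/buildWithParameters?$PARAMS"
echo "$TRIGGER_URL"
# Against a real Jenkins, fire it with an API token:
# curl -X POST --user "user:apitoken" "$TRIGGER_URL"
```

`buildWithParameters` is Jenkins' built-in endpoint for parameterized jobs; authentication is whatever your Jenkins security setup requires.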

You might have noticed it, but we push to a registry called "registry.local:5000".

If you don't want to push your images into dockerhub right away, or have no other registry of your own, we can fire one up real quick.

docker run -d --name registry.local --restart always -p 5000:5000 registry:2

To use this registry with a nice dns-name, just run this:

echo '127.0.0.1 registry.local' | sudo tee -a /etc/hosts

To make sure the registry works, you have to tell Docker to allow it as an "insecure registry", since it only serves plain HTTP.

Linux: echo '{ "insecure-registries": ["registry.local:5000"] }' | sudo tee /etc/docker/daemon.json
(If daemon.json already exists, merge the key in by hand instead of overwriting it, and restart the Docker daemon afterwards.)

On Desktops you can add this via the Docker Preferences UI
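A quick way to confirm the registry actually accepts pushes is to tag any small image into it and then query the registry's catalog endpoint. The commands below are a sketch; run the commented lines on your Docker host:

```shell
# registry.local:5000 matches the registry container started above.
REGISTRY="registry.local:5000"
SMOKE_IMAGE="$REGISTRY/alpine"
echo "$SMOKE_IMAGE"
# On your host, push any small image through the registry and list its contents:
# docker tag alpine "$SMOKE_IMAGE" && docker push "$SMOKE_IMAGE"
# curl "http://$REGISTRY/v2/_catalog"
```

`/v2/_catalog` is part of the standard Docker Registry HTTP API; if the push worked, it lists "alpine" in the repositories array.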

To get back to our pipelines: if you remember our imaginary image hierarchy, we would need 3 jobs:

minimal-baseimage
|__ nginx-baseimage
   |__ awesome-website-container

Go ahead and copy/hack away, I'll wait for you.

That was easy, right?

Automate it

Now let's be honest, nobody likes pushing build buttons regularly, so let's automate this.

Minimal Base Build-Schedule

Let's go to your minimal-baseimage job, the first in the hierarchy, which provides the minimal, but regularly patched, base-system for our infrastructure/middleware containers.

Look for the following setting and schedule the job to run regularly, for example to run daily at 8:00 in the morning:

Set up Schedules for your Minimal-Base-Images

A base-image probably isn't going to be patched every other minute, so a daily, or even weekly schedule would be just fine.
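Jenkins schedules use cron syntax, where H stands for a hashed value that spreads jobs across the hour instead of firing them all at once. The daily 8:00 example above could look like this:

```
# Build Triggers -> Build periodically -> Schedule
H 8 * * *
```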

Update derived containers automatically

Now how do we get our derived images to build as well, once the minimal base-image is updated?

Like this, for example:

Set up a Dependency to your minimal base-images

That setting takes care of rebuilding our NGinX base-image once the minimal-baseimage has been patched, though obviously only if that build actually succeeds.

Create Deployables as often as possible

Images that contain actual source code, our deployables, should get built pretty frequently.
We don't only want builds when the baseimage gets patched, but also when our codebase changes. So let's implement that.

Setup dependency to infrastructure images

Here we have two triggers. The first makes sure our app gets updated once the patches have run down the chain and arrived at our nginx container.

Additionally, we poll our git, to trigger a build for incoming git commits.
This, as is usually the case with polling, works well in the beginning, for small teams that don't push dozens of builds in 5 minutes. Depending on your circumstances, this could already suffice.
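The polling trigger uses the same cron syntax as the build schedule, just under "Poll SCM"; checking git for new commits roughly every five minutes would be:

```
# Build Triggers -> Poll SCM -> Schedule
H/5 * * * *
```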

Teams with a high push frequency will probably end up with a build containing multiple commits, which is probably undesired for eventual testing stages (or the blame-game if the build breaks :P).

If you want a build for every push, reliably, you will have to look at your git repo-tool and check whether it provides post-push webhooks.

To set up Jenkins for that, the config could look like this:

Set up a Webhook for reliable push-based builds

This allows your job to be called via REST, if the specified token is provided.
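With "Trigger builds remotely" enabled, the repo-tool's webhook only has to hit one URL. A sketch with placeholder names; the host, job name, and token are whatever you configured:

```shell
# Hypothetical names; TOKEN is the value set under "Trigger builds remotely".
JENKINS=http://localhost:8080
JOB=awesome-website-container
TOKEN=my-build-token
HOOK_URL="$JENKINS/job/$JOB/build?token=$TOKEN"
echo "$HOOK_URL"
# Point your repo-tool's post-push webhook at this URL, or test it by hand:
# curl -X POST "$HOOK_URL"
```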

Maybe there even is a cool plugin for your existing toolset; it would actually shock me if there wasn't. Jenkins' plugin base is enormous, and there are plugins for pretty much anything.


After a short while, your Jenkins Host could look like this:

Regularly clean your Dockercache on your Build hosts

You should probably think about regularly cleaning up your Jenkins-Host via docker system prune -af to save space.

So we will do just that with another job:

node {
    sh "docker system prune -af"
}

Add your schedule to run daily, just like you did before, and you are set!

Finishing Line

Congratulations, you now have a completely independent build pipeline for your images. Whether you trigger the minimal-baseimage by hand or it runs on its daily schedule, Jenkins should walk its way all the way down to your awesome website-container, and everything should be patched, pushed and ready to use!

Next time we can take a look at kubernetes and how we can implement a CI with tekton instead of Jenkins, if there is interest for the topic.
Or we could go and scan our images with anchore for CVEs.

Feel free to wish for something that interests you :)
