
You Rang, M'Lord? Docker in Docker with Jenkins declarative pipelines

Tápai Balázs · Originally published at tapaibalazs.netlify.com

Resources: when they are unlimited, they don't matter much, but when they are limited, boy, do you have challenges! We recently faced such a challenge when we realised we needed to upgrade the Node version on one of our Jenkins agents so we could build and properly test our Angular 7 app. Doing so, however, would cost us the ability to build our legacy AngularJS apps, which require Node 8. What to do?

The butler did it!

Apart from eliminating the famous "it works on my machine" problem, Docker comes in handy when a problem like this arises. There are certain challenges to address, though, such as running Docker in Docker. After a long period of trial and error, we built and published a Docker image for our team's needs, one that can run our builds. A build looks like the following:

1. Install dependencies
2. Lint the code
3. Run unit tests
4. Run SonarQube analysis
5. Build the application
6. Build a docker image which would be deployed
7. Run the docker container
8. Run cypress tests
9. Push docker image to the repository
10. Run another Jenkins job to deploy it to the environment
11. Generate unit and functional test reports and publish them
12. Stop any running containers
13. Notify chat/email about the build

The docker image of our needs

Our project is an Angular 7 project generated with the angular-cli, and some of our dependencies need Node 10.x. We lint our code with TSLint and run our unit tests with Karma and Jasmine. The unit tests need a Chrome browser installed so they can run with headless Chrome. This is why we decided to use the cypress/browsers:node10.16.0-chrome77 image. After we install the dependencies, lint the code and run the unit tests, we run the SonarQube analysis, which requires OpenJDK 8 as well.

FROM cypress/browsers:node10.16.0-chrome77

# Install OpenJDK-8
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean;

# Fix certificate issues
RUN apt-get update && \
    apt-get install -y ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f;

# Setup JAVA_HOME -- useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/

With the Sonar scan in place, we build the application. One of the strongest principles in testing is that you should test the thing your users will actually get. That is why we want to test the built code in exactly the same Docker container as it would run in production. We could, of course, serve the front-end from a very simple Node.js static server, but then everything an Apache HTTP server or an NGINX server does would be missing: all the proxies, gzip or brotli compression, and so on.

While it is a strong principle, the biggest problem is that we are already running inside a Docker container. That is why we need DinD (Docker in Docker). After my colleague and I spent a whole day researching, we found a solution that now works like a charm. The first and most important thing is that our build container needs the Docker executable.

# Install Docker executable
RUN apt-get update && apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian \
        $(lsb_release -cs) \
        stable" \
    && apt-get update \
    && apt-get install -y docker-ce-cli # the CLI alone is enough; the daemon is mounted from the host

RUN usermod -u 1002 node && groupmod -g 1002 node && gpasswd -a node docker

As you can see, we install the Docker executable and the necessary certificates, but we also adjust the rights and groups of our user. The latter is necessary because the host machine, our Jenkins agent, starts the container with -u 1002:1002. That is the user ID of our Jenkins agent, which runs the container unprivileged.
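If you want to mirror this setup, you can learn which uid and gid your own agent uses with the id utility; those are the numbers to put into the usermod/groupmod calls above (1002:1002 is specific to our agent):

```shell
# Run this on the Jenkins agent itself to find the uid:gid it will
# start containers with.
agent_uid=$(id -u)
agent_gid=$(id -g)
echo "start containers with -u ${agent_uid}:${agent_gid}"
```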

This is of course not everything. When the container starts, the docker daemon of the host machine must be mounted. This requires us to start the build container
with some extra parameters. It looks like the following in a Jenkinsfile:

pipeline {
  agent {
    docker {
      image 'btapai/pipelines:node-10.16.0-chrome77-openjdk8-CETtime-dind'
      label 'frontend'
      args '-v /var/run/docker.sock:/var/run/docker.sock -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket -e HOME=${workspace} --group-add docker'
    }
  }

// ...

As you can see, we mount two Unix sockets. /var/run/docker.sock mounts the docker daemon to the build container.

/var/run/dbus/system_bus_socket is a socket that is needed for cypress to be able to run inside our container.

-e HOME=${workspace} is needed for avoiding access right issues during the build.

--group-add docker passes the host machine's docker group down so that, inside the container, our user can use the docker daemon.
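Put together, the agent ends up invoking something close to the following (the workspace path is hypothetical; Jenkins fills in the real one, and the -u flag comes from the agent configuration rather than from args):

```shell
# Illustrative only: roughly the docker invocation Jenkins assembles.
# The command is built as a string here, not executed.
workspace="/home/jenkins/workspace/my-app"   # hypothetical workspace path
cmd="docker run -u 1002:1002 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket \
  -e HOME=${workspace} \
  --group-add docker \
  btapai/pipelines:node-10.16.0-chrome77-openjdk8-CETtime-dind"
echo "$cmd"
```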

With these arguments in place, we can build our image, start it up and run our cypress tests against it. But let's take a deep breath here: in Jenkins we want to use multibranch pipelines, which create a separate Jenkins job for each branch that contains a Jenkinsfile. This means that when we develop on multiple branches, each of them gets its own view.

There are some problems with this. The first is that if we build our image with the same name in every branch, the builds will conflict, since our Docker daemon is technically not inside our build container.
The second arises when the docker run command uses the same port in every build, because you can't start a second container on a port that is already taken.
The third issue is getting the proper URL for the running application because, Dorothy, you are not on localhost anymore.

Let's start with the naming. Getting a unique name is pretty easy with git, because commit hashes are unique. However, getting a unique port we might need to use a little trick when we declare our environment variables:

pipeline {

// ...

  environment {
    BUILD_PORT = sh(
        script: 'shuf -i 2000-65000 -n 1',
        returnStdout: true
    ).trim()
  }

// ...

    stage('Functional Tests') {
      steps {
        sh "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
        // be patient, we are going to get the url as well. :)

// ...


With the shuf -i 2000-65000 -n 1 command you can generate a random number on certain Linux distributions. Our base image uses Debian, so we were lucky here.
The GIT_COMMIT environment variable is provided by Jenkins via the SCM plugin.
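You can try both values outside Jenkins, too; shuf is part of GNU coreutils, and the hash below is a made-up stand-in for what the SCM plugin provides (in a real checkout you could use `git rev-parse HEAD`):

```shell
# Pick a random port the same way the pipeline does.
BUILD_PORT=$(shuf -i 2000-65000 -n 1)
# Hypothetical stand-in for Jenkins' GIT_COMMIT.
GIT_COMMIT="0f5c9248a39bb8b88a2e5a2f4711a5c90bbd2864"
echo "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
```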

Now comes the hard part: we are inside a Docker container, there is no localhost, and the network inside Docker containers can change.
It is also a funny thing that when we start our application container, it runs on the host machine's Docker daemon, so technically it is not running inside our build container, yet we have to reach it from the inside.
After several hours of investigation my colleague found a possible solution:
docker inspect --format "{{ .NetworkSettings.IPAddress }}" "${GIT_COMMIT}"

This did not work, because that IP address is not an address inside our container's network but outside of it. Then we tried the NetworkSettings.Gateway property, which worked like a charm.
So our functional testing stage looks like the following:

stage('Functional Tests') {
  steps {
    sh "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
    sh 'npm run cypress:run -- --config baseUrl=http://`docker inspect --format "{{ .NetworkSettings.Gateway }}" "${GIT_COMMIT}"`:${BUILD_PORT}'
  }
}
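The baseUrl that the second step assembles can be sketched like this (the gateway address and port are illustrative; on the agent they come from docker inspect and shuf):

```shell
GATEWAY="172.17.0.1"   # hypothetical; a typical docker0 bridge gateway
BUILD_PORT="4213"      # hypothetical; really generated by shuf
BASE_URL="http://${GATEWAY}:${BUILD_PORT}"
echo "$BASE_URL"       # → http://172.17.0.1:4213
```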

It was a wonderful feeling to see our cypress tests running inside a Docker container. Then some of them failed miserably.
The reason was that the failing cypress tests expected to see some dates.

  .and("contain", "2019.12.24 12:33:17")

But because our build container was set to a different timezone, the displayed date on our front-end was different. It was an easy fix; my colleague had seen it before. We install the necessary time zones and locales. In our case we set the build container's timezone to Europe/Budapest, because our tests were written in that timezone.

RUN apt-get update \
    && apt-get install --assume-yes --no-install-recommends locales \
    && apt-get clean \
    && sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen \
    && sed -i -e 's/# hu_HU.UTF-8 UTF-8/hu_HU.UTF-8 UTF-8/' /etc/locale.gen \
    && locale-gen

ENV LANG="en_US.UTF-8" \
    LC_CTYPE="en_US.UTF-8" \
    LC_NUMERIC="hu_HU.UTF-8" \
    LC_TIME="hu_HU.UTF-8" \
    LC_COLLATE="en_US.UTF-8" \
    LC_MONETARY="hu_HU.UTF-8" \
    LC_MESSAGES="en_US.UTF-8" \
    LC_PAPER="hu_HU.UTF-8" \
    LC_NAME="hu_HU.UTF-8" \
    LC_ADDRESS="hu_HU.UTF-8" \
    LC_TELEPHONE="hu_HU.UTF-8"

RUN apt-get update \
    && apt-get install --assume-yes --no-install-recommends tzdata \
    && apt-get clean \
    && echo 'Europe/Budapest' > /etc/timezone && rm /etc/localtime \
    && ln -snf /usr/share/zoneinfo/'Europe/Budapest' /etc/localtime \
    && dpkg-reconfigure -f noninteractive tzdata
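The failure is easy to reproduce with GNU date: the same epoch second renders differently depending on TZ, which is exactly what the date assertions tripped on (Budapest is CET, UTC+1, in December):

```shell
# One instant, two renderings, depending on the container's timezone.
TZ=UTC date -d @1577190797 '+%Y.%m.%d %H:%M:%S'              # 2019.12.24 12:33:17
TZ=Europe/Budapest date -d @1577190797 '+%Y.%m.%d %H:%M:%S'  # 2019.12.24 13:33:17
```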

Since every crucial part of the build is now resolved, pushing the built image to the registry is just a docker push command. You can check the whole Dockerfile here.
One thing remains: stopping any running containers when the cypress tests fail. This can be done easily with the always post step.

post {
  always {
    script {
      try {
        sh "docker stop ${GIT_COMMIT} && docker rm ${GIT_COMMIT}"
      } catch (Exception e) {
        echo 'No docker containers were running'
      }
    }
  }
}

Thank you very much for reading this blog post. I hope it helps you.


Update: The reason the IPAddress property does not work while Gateway does is that Docker creates a bridge network when the build container starts. In such a setup, every request goes through the gateway, which uses NAT rules to forward the request to the proper container. So when we start another container from inside the build container, we have to send requests to the gateway address, because that handles the forwarding. Pretty neat.