
Raphael Habereder


How to set up Gitlab to trigger Jenkins on push

Last time we built a simple Jenkins CI Pipeline with polling, so our images are always up to date when we push changes to our codebase.

I don't like using polling for this, and you shouldn't either. Polling is either unreliable if the interval is too long and lumps too many commits into a single image, or it produces unnecessary load if you let it poll every second.

Imagine your boss going "Hey, is the feature done? How about now? It's been 5 seconds, anything new?".
A) it's annoying, and B) you won't get anything done that way, right?

You would much rather tell your boss "Hey, the feature is done, take a look!" and then move on to something new, right?

So let's implement this and notify our CI server whenever a commit is pushed.
Don't feel left out if you don't use Gitlab or Jenkins in your projects; most Git/CI toolsets offer a feature of this kind.

Setup

This is our to-do-list:

  • Make DNS names for container communication work, because looking up container IPs is annoying
  • Boot a Jenkins instance and create a job we can trigger via webhooks
  • Boot a Gitlab instance and create a repo with a demo Dockerfile and a webhook

So let's get crackin!

DNS-Resolution

I could go on a tangent and tell you the grand story, but in short: the default bridge network does not provide DNS resolution between containers. If you want containers to resolve each other by name, you need to create a user-defined bridge network. Which, thankfully, is very easy to do!



$ docker network create dns-bridge
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
9508cfea746d        bridge              bridge              local
67aa52615190        dns-bridge          bridge              local
ae9791d28429        host                host                local
d2c62d84bc06        none                null                local



That's it, now you have a user-defined bridge that automatically does DNS resolution. Let's test it:



$ docker run -d --name busybox1 --network dns-bridge busybox:1.28 sleep 3600
$ docker run -d --name busybox2 --network dns-bridge busybox:1.28 sleep 3600

$ docker exec -ti busybox1 ping busybox2
PING busybox2 (172.20.1.2): 56 data bytes
64 bytes from 172.20.1.2: seq=0 ttl=64 time=0.071 ms
64 bytes from 172.20.1.2: seq=1 ttl=64 time=0.057 ms

And the reverse direction works just as well:
$ docker exec -ti busybox2 ping busybox1



That's looking pretty good!

If you want to connect or disconnect an already running container to/from a network:



$ docker run -d --name busybox-ext busybox:1.28 sleep 3600
2efe297352a6ff1e9876a46c853b786a415480235d05df9451672689d580ad6e

$ docker network connect dns-bridge busybox-ext
$ docker network inspect dns-bridge -f "{{range .Containers}}{{println .Name}}{{end}}"
busybox-ext
busybox2
busybox1
$ docker network disconnect dns-bridge busybox-ext
$ docker network inspect dns-bridge -f "{{range .Containers}}{{println .Name}}{{end}}"
busybox2
busybox1



Alright, let's move on to setting up our Jenkins instance.

Jenkins

For this post I built a custom Docker-in-Docker Jenkins based on Alpine, which, I will tell you, was a horrible experience.
I'm not happy with this Dockerfile, so please don't crucify me.
While I could put it up on GitHub or somewhere, I don't want you to clone a random repo and run it without having at least taken a look at it. So I am going to be the bad teacher nobody likes and let you copy the two files you need from here :)



FROM alpine

USER root
RUN apk add --no-cache \
  bash \
  coreutils \
  curl \
  git \
  git-lfs \
  openssh-client \
  tini \
  ttf-dejavu \
  tzdata \
  unzip \
  openjdk11-jdk \
  shadow \
  docker

# Install Gosu
ENV GOSU_VERSION 1.12
RUN set -eux; \
    \
    apk add --no-cache --virtual .gosu-deps \
        ca-certificates \
        dpkg \
        gnupg \
    ; \
    \
    dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
    wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
    wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
    \
# verify the signature
    export GNUPGHOME="$(mktemp -d)"; \
    gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
    gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
    command -v gpgconf && gpgconf --kill all || :; \
    rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
    \
# clean up fetch dependencies
    apk del --no-network .gosu-deps; \
    \
    chmod +x /usr/local/bin/gosu; \
# verify that the binary works
    gosu --version; \
    gosu nobody true

ARG user=jenkins
ARG group=jenkins
ARG uid=1001
ARG gid=1001
ARG http_port=8080
ARG agent_port=50000
ARG JENKINS_HOME=/var/jenkins_home
ARG REF=/usr/share/jenkins/ref

ENV JENKINS_HOME $JENKINS_HOME
ENV JENKINS_SLAVE_AGENT_PORT ${agent_port}
ENV REF $REF

# Jenkins is run with user `jenkins`, uid = 1001
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN mkdir -p $JENKINS_HOME \
  && chown ${uid}:${gid} $JENKINS_HOME \
  && addgroup -g ${gid} ${group} \
  && adduser -h "$JENKINS_HOME" -u ${uid} -G ${group} -s /bin/bash -D ${user} 

# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME $JENKINS_HOME

# $REF (defaults to `/usr/share/jenkins/ref/`) contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p ${REF}/init.groovy.d

# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.222.4}

# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=6c95721b90272949ed8802cab8a84d7429306f72b180c5babc33f5b073e1c47c

# Can be used to customize where jenkins.war gets downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war

# Could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# See https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
  && echo "${JENKINS_SHA}  /usr/share/jenkins/jenkins.war" | sha256sum -c -

ENV JENKINS_UC https://updates.jenkins.io
ENV JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental
ENV JENKINS_INCREMENTALS_REPO_MIRROR=https://repo.jenkins-ci.org/incrementals
RUN chown -R ${user} "$JENKINS_HOME" "$REF"

# For main web interface:
EXPOSE ${http_port}

# Will be used by attached agents:
EXPOSE ${agent_port}

ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log

# Download and place scripts needed to run
RUN curl https://raw.githubusercontent.com/jenkinsci/docker/master/jenkins-support -o /usr/local/bin/jenkins-support && \
    curl https://raw.githubusercontent.com/jenkinsci/docker/master/jenkins.sh -o /usr/local/bin/jenkins.sh && \
    curl https://raw.githubusercontent.com/jenkinsci/docker/master/tini-shim.sh -o /bin/tini && \
    curl https://raw.githubusercontent.com/jenkinsci/docker/master/plugins.sh -o /usr/local/bin/plugins.sh && \
    curl https://raw.githubusercontent.com/jenkinsci/docker/master/install-plugins.sh -o /usr/local/bin/install-plugins.sh

COPY --chown=${user} entrypoint.sh /entrypoint.sh

RUN chmod +x /usr/local/bin/install-plugins.sh /usr/local/bin/plugins.sh /usr/local/bin/jenkins.sh /bin/tini /usr/local/bin/jenkins-support
RUN chmod +x /entrypoint.sh

# Stay root, the entrypoint drops down to User jenkins via gosu
ENTRYPOINT ["/entrypoint.sh"]



entrypoint.sh



#!/bin/sh

# By: Brandon Mitchell <public@bmitch.net>
# License: MIT
# Source Repo: https://github.com/sudo-bmitch/jenkins-docker

set -x

# configure script to call original entrypoint
set -- tini -- /usr/local/bin/jenkins.sh "$@"

# In Prod, this may be configured with a GID already matching the container
# allowing the container to be run directly as Jenkins. In Dev, or on unknown
# environments, run the container as root to automatically correct docker
# group in container to match the docker.sock GID mounted from the host.
if [ "$(id -u)" = "0" ]; then
  # get gid of docker socket file
  SOCK_DOCKER_GID=`ls -ng /var/run/docker.sock | cut -f3 -d' '`

  # get group of docker inside container
  CUR_DOCKER_GID=`getent group docker | cut -f3 -d: || true`

  # if they don't match, adjust
  if [ ! -z "$SOCK_DOCKER_GID" -a "$SOCK_DOCKER_GID" != "$CUR_DOCKER_GID" ]; then
    groupmod -g ${SOCK_DOCKER_GID} -o docker
  fi
  if ! groups jenkins | grep -q docker; then
    usermod -aG docker jenkins
  fi

  #If you run on MacOS
  if ! groups jenkins | grep -q staff; then
    usermod -aG staff jenkins
  fi
  # Add call to gosu to drop from root user to jenkins user
  # when running original entrypoint
  set -- gosu jenkins "$@"
fi

# replace the current pid 1 with original entrypoint
exec "$@"



Build and run our Jenkins image:



# We need to create a directory for Jenkins to save its data to
# Since the container runs with UID:GID 1001:1001, the folder also needs the correct ownership
mkdir $HOME/jenkins && sudo chown 1001:1001 $HOME/jenkins

docker build -t myjenkins .
docker run -d \
           -v $HOME/jenkins:/var/jenkins_home \
           -v /var/run/docker.sock:/var/run/docker.sock \
           -p 8080:8080 \
           --name jenkins \
           --network dns-bridge \
           --restart unless-stopped \
           myjenkins 



What does this do?

  • The mount to $HOME/jenkins makes sure you don't have to reconfigure Jenkins every time you kill or stop the Jenkins container. It's just convenience.
  • We mount the docker.sock into Jenkins, so Jenkins can build our Docker images without needing its own functioning Docker runtime. This concept is called docker-in-docker (there's a quick sanity check for it right below this list).
  • The rest should be self-explanatory: we publish Jenkins' port so we can access it from the host, attach the container to the bridge network we created, and make sure it restarts on failure unless we stop it ourselves.
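
If you want to check that the socket mount actually works, a quick sanity check could look like this (a sketch; it assumes the container name jenkins from the run command above and that the Jenkins setup wizard is enabled):

# The Docker CLI inside the container should talk to the host's daemon via the mounted socket
docker exec jenkins docker version

# Grab the initial admin password for the first login at http://localhost:8080
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword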

Gitlab



# Where you want to store data
export GITLAB_HOME=$HOME/gitlab

docker run -d \
  -p 443:443 -p 80:80 -p 22:22 \
  --name gitlab \
  --network dns-bridge \
  --restart unless-stopped \
  -v $GITLAB_HOME/config:/etc/gitlab \
  -v $GITLAB_HOME/logs:/var/log/gitlab \
  -v $GITLAB_HOME/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest



And yes, I know, this breaks the rule of "build it yourself", but I can't for the life of me get Gitlab running on Alpine yet, so for this demonstration I beg you to bear with me this one time.
Of course, I will update this post accordingly once I get Gitlab to work smoothly on Alpine :)

If you don't want the containers to play around in your filesystem, just remove the mounts.
Note: again, you'll have to set everything up from scratch if you restart the containers without persisting their data somewhere.

The Gitlab Omnibus image takes a while to boot; you can monitor it via docker logs -f gitlab to see when it's ready to use.
After it has booted up, you can open the Gitlab UI in your favorite browser and set a password for the user root.
Then you are ready to create some repos :)
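
If you'd rather not stare at the logs, a small wait loop against the login page does the trick too (just a sketch, assuming you published port 80 as shown above):

# Poll the Gitlab login page until it answers, then move on
until curl -sf http://localhost/users/sign_in > /dev/null; do
  echo "Gitlab is still booting..."
  sleep 10
done
echo "Gitlab is up!"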

More info on the Gitlab Docker Images can be found here

Demo Repository

Now that our shiny new Gitlab is running, let's create a demo repository here and commit the following demo Dockerfile to it:



FROM alpine:latest

RUN echo '#!/bin/sh' > /tmp/hello.sh && \
    echo 'echo "Hello $1"' >> /tmp/hello.sh && \
    chmod +x /tmp/hello.sh && \
    chown nobody. /tmp/hello.sh

USER nobody
ENTRYPOINT ["/bin/sh"]
CMD ["/tmp/hello.sh", "Me!"]



This is a very simple container that just echoes a greeting, which will suffice to demonstrate our Git hook.
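
If you want to see it in action before handing it over to Jenkins, you can build and run it locally (the image name hello-demo is an arbitrary choice for this quick test):

docker build -t hello-demo .
docker run --rm hello-demo
# prints: Hello Me!
docker run --rm hello-demo /tmp/hello.sh World
# prints: Hello World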

Add, commit and push the file for now.
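
In case you need a refresher, the whole dance could look roughly like this (a sketch; the remote URL assumes the root/demo repository on your local Gitlab, and the default branch may be master or main depending on your Gitlab version):

git init
git remote add origin http://localhost/root/demo.git
git add Dockerfile
git commit -m "Add demo Dockerfile"
git push -u origin master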

Demo-Job

Now it's time to go back to the Jenkins demo job we wanted to set up earlier.

Now that we have source to build, we can use it in our job.
Create a pipeline with a name of your choice and the following content:



node {
    stage("Git Checkout") {
        git credentialsId: 'gitlab', url: 'http://gitlab/root/demo'
    }
    stage("Docker Build") {
        app = docker.build('awesome-image')
    }
    stage("Docker Push") {
        docker.withRegistry("http://registry.local:5000") {
            app.push("awesome-tag-${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}



As you probably saw, we are missing two things:

  • credentials for gitlab
  • the registry "registry.local:5000"

Jenkins Gitlab Credentials

Go here and add the credentials you set for gitlab earlier:

Username and password are obvious. ID is the "shortname" you will reference in your pipelines; in the example it is "gitlab".
Description is just that: a human-readable description for easy distinction of the various credentials you can have.

As for the rest of the configuration, like the plugins and docker runtime, you can take a look at the previous post as a guideline.

Registry

To fire up and connect our registry to our dns-bridge:



docker run -d -p 5000:5000 --name registry.local --network dns-bridge registry:2



Since our docker-in-docker setup uses the host's Docker socket, the push actually happens from our host. So the host has to be able to resolve the DNS name registry.local too.
Check whether it is already in your hosts file with



cat /etc/hosts



If it isn't there, just add it via



echo '127.0.0.1 registry.local' | sudo tee -a /etc/hosts



That should fix it. Now you can trigger the build and enjoy the show!

It should be a success, and you can docker pull it anytime from our registry :)
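
If you want to convince yourself that the push really landed, you can list the registry's contents via the standard registry v2 API and pull the image back (image and tag names match the pipeline above):

# List the repositories the local registry knows about
curl http://registry.local:5000/v2/_catalog

# Pull the freshly built image back from the registry
docker pull registry.local:5000/awesome-image:latest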

Alright, on to the home-stretch! We are done with all the prep-work, so let's get to what we actually wanted to achieve.

Set up the Endpoint for Jenkins

To enable Jenkins jobs to be triggered via HTTP request, you need to do a few things.

First, check the "Trigger builds remotely" box in your job's Build Triggers section and define a secret token.

Jenkins Job-Token

Now we need an API token to authenticate with Jenkins in HTTP requests, without having to use the actual password of our admin user.
You can create a token here.
Save it somewhere you can copy it from later, because that little bugger is gone once you navigate away from the current page.

As a nice benefit, you can monitor later on how many times the token has been used. Neat!
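
Before wiring up Gitlab, you can already test the trigger by hand with curl (a sketch; substitute your own Jenkins user, API token, job name and the secret token you just defined):

# Jenkins should answer with HTTP 201 and queue a build
curl -i -X POST "http://<jenkins-user>:<api-token>@localhost:8080/job/<job-name>/build?token=<secret-token>"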

Set up the Webhook in Gitlab

This is it, the end of the road, we are near the finish line!

Gitlab is smart: by default it does not permit webhooks to call into the local network of the Gitlab instance, to prevent damage. But we want to allow exactly that for Jenkins, because otherwise this whole mechanism would be moot for us.
So go to the admin panel and, in the "Outbound Requests" section, add "jenkins" to the "Whitelist to allow requests to the local network from hooks and services".
Don't check the box that allows all addresses in the local network! Just add jenkins to the whitelist.

Now, to create a webhook, open your project and on the left-hand side go to Settings -> Webhooks, then configure your webhook like this:

Gitlab Webhook Conf

For easy copy and paste, take this template and fill in your values:
http://<jenkins-user>:<api-token>@<jenkins-host>:<port>/job/<job-name>/build?token=<secret-token>

Final Test

Now the connection between Gitlab and Jenkins is automated for this job and repository. So let's test it!

Modify your Dockerfile, commit and push it, and a few seconds later you should have a Docker image in your local registry that reflects your change!
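
For example (any change to the Dockerfile will do):

echo '# trigger a new build' >> Dockerfile
git commit -am "Trigger webhook build"
git push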

Outro

I have to admit, this was a little bit of work to get running locally, especially the Jenkins DinD image on macOS.
But I do think this made you and me a little bit smarter, so I hope it helps you on your journey to containers.

If not, shoot me a message, leave a comment or ask questions.
If there is one thing I have, it's time, patience and a talent for "making things work" or breaking them :D

Next time I'll probably write about how we can scan our images for CVEs, that draft is nearly done :)
