
DinD with Gitlab CI

Haseeb Majid · Originally published at haseebmajid.dev · 5 min read

Like most developers, we want to automate as much of our workflow as possible, and pushing Docker images to a registry is a task that is easy to automate. In this article, we will cover how you can use GitLab CI with Docker in Docker (DinD) to build and publish your Docker images to the GitLab registry. You can also very easily adapt this to push your images to DockerHub instead.

A quick aside on terminology related to Docker:

  • container: An instance of an image is called a container (docker run)
  • image: A set of immutable layers (docker build)
  • registry: A service that stores and distributes Docker images; Docker Hub is the official public registry (docker pull)

Example

Here is an example .gitlab-ci.yml file which can be used to build and push your Docker images to the Gitlab registry.

variables:
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

stages:
  - publish

publish-docker:
  stage: publish
  image: docker
  script:
    - export VERSION_TAG=v1.2.3
    - docker login ${CI_REGISTRY} -u gitlab-ci-token -p ${CI_BUILD_TOKEN}
    - docker build -t ${CI_REGISTRY_IMAGE}:latest -t ${CI_REGISTRY_IMAGE}:${VERSION_TAG}  .
    - docker push ${CI_REGISTRY_IMAGE}:latest
    - docker push ${CI_REGISTRY_IMAGE}:${VERSION_TAG}

Explained

The code above may be a lot to take in, so let's break it down section by section.

variables:
  DOCKER_DRIVER: overlay2

In the first couple of lines, we define some variables that will be used by all of our jobs (the variables are global). The DOCKER_DRIVER: overlay2 variable speeds up our Docker builds a bit, because by default the dind service uses the vfs storage driver, which is slower.

random-job:
  stage: publish
  variables:
    DOCKER_DRIVER: overlay2
  script:
    - echo "HELLO"

Note: we could just as easily define variables within a single job instead, as in the example above.

services:
  - docker:dind

The next couple of lines define a service. A service is an extra container that runs alongside our job(s). Again, in this example it is defined globally and will be linked to all of our jobs; we could just as easily define it within a single job, as in the variables example. The docker:dind image's entrypoint automatically starts a Docker daemon, and we need that daemon to build/push our Docker images within CI.

The docker:dind (dind = Docker in Docker) image is almost identical to the docker image; the difference is that the dind image starts a Docker daemon on startup. In this example, the job uses the docker image as the client and connects to the daemon running in the service container.
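As a sketch, defining the service at the job level instead of globally would look like this (the job and stage names match the earlier example; docker info is just an illustrative command to confirm the client can reach the daemon):

```yaml
publish-docker:
  stage: publish
  image: docker
  # Job-level service: the dind daemon only runs alongside this job
  services:
    - docker:dind
  script:
    - docker info
```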

We could also just use the dind image in our job and simply start dockerd in the background (&) as the first line of our script. The dockerd command starts the Docker daemon, which the docker client commands that follow then communicate with. It would achieve the same outcome; I think the service approach is a bit cleaner, but as already stated, either approach works.

publish-docker:
  stage: publish
  image: docker:dind
  script:
    - dockerd &
    ...
    - docker push ${CI_REGISTRY_IMAGE}:${VERSION_TAG}

Info: One common use case of GitLab CI services is to spin up a database like MySQL. We can then connect to it within our job and run our tests against it, which can simplify our jobs quite a bit.
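For example, a hypothetical test job using a MySQL service might look like the sketch below (the job name, image tags, credentials, and test command are all illustrative, not from this project):

```yaml
test-backend:
  stage: test
  image: python:3
  services:
    - mysql:5.7
  variables:
    # These are read by the mysql service container to initialise the database
    MYSQL_DATABASE: test_db
    MYSQL_ROOT_PASSWORD: secret
  script:
    # The service container is reachable at the hostname "mysql"
    - pytest --db-host=mysql
```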

Note: There are several other ways we could also build/push our images. This is the recommended approach.

stages:
  - publish

Next, we define our stages. Every job must be assigned a valid stage. Stages determine when a job runs in our CI pipeline: jobs in the same stage run in parallel, and stages run in the order they are defined, so order does matter. In this example we only have one stage and one job, so this isn't super important; it's more just something to keep in mind.
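As an illustration of that ordering, a hypothetical pipeline with two stages would run both test jobs in parallel and only start the publish job once they both succeed (the job names and echo commands here are made up):

```yaml
stages:
  - test
  - publish

lint:                 # runs in parallel with unit-tests
  stage: test
  script:
    - echo "linting"

unit-tests:           # runs in parallel with lint
  stage: test
  script:
    - echo "testing"

publish-docker:       # only starts after the whole test stage succeeds
  stage: publish
  script:
    - echo "publishing"
```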

publish-docker:
  stage: publish
  ...

Now we define our job; publish-docker is the name of the job in the GitLab CI pipeline. We also define which stage the job should run in: in this case, it runs during the publish stage.

publish-docker:
  ...
  image: docker
  ...

Then we define which Docker image to use for this job: the docker image, which has all the commands we need to build and push our Docker images. It acts as the client, making requests to the dind daemon.

script:
  - export VERSION_TAG=v1.2.3
  - docker login ${CI_REGISTRY} -u gitlab-ci-token -p ${CI_BUILD_TOKEN}
  - docker build -t ${CI_REGISTRY_IMAGE}:latest -t ${CI_REGISTRY_IMAGE}:${VERSION_TAG}  .
  - docker push ${CI_REGISTRY_IMAGE}:latest
  - docker push ${CI_REGISTRY_IMAGE}:${VERSION_TAG}

Finally, we get to the real meat and potatoes of the CI file: the bit of code that builds and pushes our Docker images to the registry.

- export VERSION_TAG=v1.2.3

It is often a good idea to tag our images; in this case, I'm using a release name. You could get this from your setup.py or package.json file as well. In my Python projects I usually use the command export VERSION_TAG=$(cat setup.py | grep version | head -1 | awk -F= '{ print $2 }' | sed 's/[",]//g' | tr -d "'") to parse setup.py for the version number. But this can be whatever you want it to be. Here we have kept it static to make things simpler, but in reality you'll probably want to retrieve the version number programmatically.
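As a sketch, here is the same idea for a package.json, using only standard shell tools. The file and its contents below are made up purely for the demo; a real job would of course read the project's own package.json:

```shell
# Hypothetical package.json, created only so this demo is self-contained
cat > /tmp/demo_package.json <<'EOF'
{
  "name": "example",
  "version": "1.2.3"
}
EOF

# Pull out the "version" field (assumes it sits on its own line)
export VERSION_TAG="v$(grep '"version"' /tmp/demo_package.json \
  | sed 's/.*"version": *"\([^"]*\)".*/\1/')"
echo "${VERSION_TAG}"
```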

- docker login ${CI_REGISTRY} -u gitlab-ci-token -p ${CI_BUILD_TOKEN}

Then we log in to our GitLab registry. The environment variables CI_REGISTRY and CI_BUILD_TOKEN are predefined GitLab variables that are injected into our environment; you can read more about them in the GitLab documentation. Since we are pushing to our GitLab registry, we can use the credentials GitLab already provides: the username gitlab-ci-token and a throwaway token as the password. (In newer GitLab versions, CI_BUILD_TOKEN is deprecated in favour of CI_JOB_TOKEN.)

Note: You can only do this on protected branches/tags.
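As an aside, docker login warns that passing the password with -p is insecure; a slightly safer variant of the same login step pipes the token in via --password-stdin. This is a sketch, assuming a GitLab version where CI_JOB_TOKEN is available:

```yaml
script:
  # Same throwaway credentials, but the token never appears in the process list
  - echo "${CI_JOB_TOKEN}" | docker login "${CI_REGISTRY}" -u gitlab-ci-token --password-stdin
```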

- docker build -t ${CI_REGISTRY_IMAGE}:latest -t ${CI_REGISTRY_IMAGE}:${VERSION_TAG}  .
- docker push ${CI_REGISTRY_IMAGE}:latest
- docker push ${CI_REGISTRY_IMAGE}:${VERSION_TAG}

Finally, we run our normal commands to build and push our images. Where you can find your images depends on your username and the project name, but it should follow this format:

registry.gitlab.com/<username>/<project_name>:<tag>

(Optional) Push to DockerHub

- docker login -u hmajid2301 -p ${DOCKER_PASSWORD}
- export IMAGE_NAME="hmajid2301/example_project"
- docker build -t ${IMAGE_NAME}:latest -t ${IMAGE_NAME}:${VERSION_TAG}  .
- docker push ${IMAGE_NAME}:latest
- docker push ${IMAGE_NAME}:${VERSION_TAG}

We can also push our images to DockerHub with the code above. We first need to log in to DockerHub, then change the name of our image to the <username>/<project_name> format.

Appendix

Discussion

Phui-Hock

Hi,
Do you deploy your image into a testing environment? If you do, can you share how you manage the dynamic environment URL, setup, testing and tear-down of the environment?

I tried to follow the official guide to set up CI/CD on an on-prem server. Did not get very far. Kubernetes is too hard for me.

Haseeb Majid (Author)

Hi,

So where I work, the way we deploy stuff is still a bit old school: we use Debian packages.
But in terms of a test environment, it's usually all done manually, i.e. the installation. What exactly do you need to do? You want to spin up a test server and then test your container within that environment?

Phui-Hock

Yes, exactly. I had a gitlab shell runner that builds and deploys to a staging environment as part of the CICD pipeline. When I am done, I can stop the environment and it would destroy all the test artifacts and restore the db. But it was fragile and maintenance heavy.

Haseeb Majid (Author)

And what was on this test environment ?

Phui-Hock

The whole project source code. One of the jobs of the shell runner is git pull and git checkout the latest tag and docker-compose up the environment.

Haseeb Majid (Author)

Is this where you run your tests ?

Phui-Hock

Yeah.

Haseeb Majid (Author)

Hmm, well I have run docker-compose within a Docker container in GitLab CI before. Would it be helpful if I showed you how I did it?

Phui-Hock

Sure, if it is not too much to ask. Examples on how to use docker/compose/swarm with gitlab CICD are scarce. I could use more advice on how to do it correctly.

Thanks in advance!

Haseeb Majid (Author)

I use it for integration tests: within the Docker image in GitLab CI I create a container which has all of my code, mount my tests and install my deps. Then within my tests I use Python libraries to start and stop containers using my docker-compose.yml file. Here is a gist gist.github.com/hmajid2301/592b3cd...

Simon Aronsson

Great article! I could have benefited from a note in the beginning explaining what DinD means though, and I've been doing docker work for many years, so I can only imagine the confusion for a new developer 😅

Haseeb Majid (Author)

Hahhaah good point I'm surprised I missed it.

Raphaël Pinson

Another possibility is using buildah and skopeo, which allow you to build and push images without a Docker daemon, so you don't need a DinD approach at all.

Haseeb Majid (Author)

I've never heard of buildah and skopeo. I will look them up. Thanks for the heads up.

starpebble

This is huge. Each CI/CD system asks us to create Docker images differently. Docker in Docker looks beautiful. Keep going and make it the standard. Thanks.