DEV Community

Kostja Appliku.com

Setting up tests in GitLab CI for Django project with Docker Engine

Hello!

In this article I will describe how we set up GitLab CI to run tests for a Django project.

But first, a couple of words about the tools we were using.

What we had before GitLab – Atlassian

For a few years we were using the Atlassian stack for our development process. We had JIRA, Confluence, Stash (now called Bitbucket Server) and Bamboo.

At first we were happy with that setup, because all the apps integrated well with each other.

We could see all commits related to currently open issues, and we could see which issues and commits were included in a build in Bamboo.

It looked wonderful at first, but after some time we noticed that:

  • JIRA began consuming all our time, even with the Agile Board
  • keeping this stack up to date was a huge pain: I'd spend whole weekends updating and repairing all four apps, because they'd changed the requirements for MySQL settings, and the logs in Atlassian products are buried deep in various directories
  • every new version of the Atlassian apps introduced numerous bugs and interface changes (which, with an obvious lack of QA, caused even more bugs)

Plus, we only had a 16 GB RAM server for all these tools.

So instead of doing our job, we were spending all our time dealing with JIRA.

New developers who were joining our team were frustrated with the JIRA interface.

At the end of last year we thought: "That's enough, we need to replace this enterprise-focused monster with something easier and more productive."

At first, we started looking for a replacement for task tracking functionality. We tried many apps and services for that, but all of them, while having some strong features, also had issues or lack of functionality which prevented us from being productive.

Then I tried GitLab, and suddenly we had found not only a convenient task tracking tool but a replacement for the whole Atlassian stack.

What is also amazing is that we get everything for free! (Except the worker, but I have an unused Linux box, which now serves as the worker.)

GitLab has a clean and simple interface instead of Jira's enterprise-crazy workflows: issues have labels, and there are CI pipelines, a wiki, Markdown for issue descriptions, comments and wiki articles, and many other great things.

We switched to GitLab.com.

GitLab solved all our problems: no more maintenance, no pain with a productivity-killing interface, all in one solution.

So a GitLab project provides git repository hosting, an issue tracker, a wiki, and a CI/CD management system. What we still need is a CI runner. The runner does the actual work, executing build, test and deployment jobs.

As I already said, I had an unused Linux box, which is now used for the runner.

Runner installation for Ubuntu is described here:

https://docs.gitlab.com/runner/install/linux-repository.html

We use Docker for running our builds.

Migration from Bamboo to GitLab CI/CD

In Bamboo, build and deploy plans are set up from the GUI.

For GitLab, a .gitlab-ci.yml file must be created in the root of the git repository.

Before I provide an example of this file, I must point out that we use Postgres as our database. Thanks to Docker, we can ask the runner to start it as a service with the credentials from this file.

.gitlab-ci.yml

image: kpavlovsky/backoffice:latest

services:
  - postgres:latest

stages:
  - test
  - deploy

variables:
  POSTGRES_DB: dev
  POSTGRES_USER: dev
  POSTGRES_PASSWORD: dev

all_tests:
  stage: test
  script:
    - bash ./tests_gitlab.sh

Line by line:

image: kpavlovsky/backoffice:latest is the docker image we use to run the container.

This image is based on python:3-onbuild. We moved all the long-running pip install and apt-get tasks there. By doing this we achieved three things: 1) each build runs faster, because it doesn't involve package installation; 2) we do not abuse the apt and pip repositories by downloading packages tens or hundreds of times per day; 3) we decrease the time of each build, so we get the test results much faster (who wants to wait 15 minutes after each git push?).

In services, the docker images for services are listed. Here we have only postgres.

The variables section sets up the postgres database and its credentials. The hostname for postgres will be 'postgres'.
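For reference, the Django database settings that point at this service might look like the following sketch (the settings module path is hypothetical; the credentials match the `variables` section above):

```python
# project/settings/docker.py (hypothetical path) -- database settings
# matching the credentials declared in .gitlab-ci.yml's variables section.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "dev",
        "USER": "dev",
        "PASSWORD": "dev",
        # Inside the CI container the linked service is reachable
        # under the hostname 'postgres', not 'localhost'.
        "HOST": "postgres",
        "PORT": "5432",
    }
}
```

The only CI-specific part is the `HOST` value; everything else is an ordinary Django database configuration.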

The ordering of elements in stages defines the order in which builds are executed:

  1. Builds of the same stage are run in parallel.
  2. Builds of the next stage are run after the jobs from the previous stage complete successfully.

The job in the 'test' stage goes first. Jobs in the 'deploy' stage go after the 'test' stage.

all_tests is a job of our pipeline, belonging to the 'test' stage. script holds all the commands that will be issued. We have only one command here: running the tests. tests_gitlab.sh looks like this:

#!/bin/bash
coverage run --source="app1,app2,app3" manage.py test --noinput \
    --testrunner="xmlrunner.extra.djangotestrunner.XMLTestRunner" \
    app1.tests app2.tests app3.tests
coverage report --skip-covered

The Dockerfile for that separate kpavlovsky/backoffice:latest image looks like this:

FROM python:3-onbuild  
ENV PYTHONUNBUFFERED 1  
ENV PYTHONDONTWRITEBYTECODE 0  
ENV DJANGO_SETTINGS_MODULE project.settings.docker  
RUN apt-get update && apt-get install -y --force-yes mc nano htop python python-pip netcat gettext && rm -rf /var/lib/apt/lists/*  
RUN mkdir /code  
WORKDIR /code  
COPY requirements.txt /code/  
RUN pip install --upgrade pip  
RUN pip install -r requirements.txt  
CMD ["bash"]

So now, when we push code to the repo, we get a Slack and email notification that the pipeline succeeded, or that it failed if the tests fail.

Deployment

Now that we have migrated the build part, we need to deploy our project to staging on every successful build of the 'dev' branch.

The 'stage' environment is another Linux box, without Docker: just supervisor and gunicorn.

The deployment process involves ssh-ing from the runner to the remote box, activating the virtualenv, pulling from git and running Django management commands.

The first step is to add a job in the 'deploy' stage to our .gitlab-ci.yml:

deploy_stage:
  stage: deploy
  script:
    - bash ./deploy_stage_gitlab.sh
  when: on_success
  only:
    - dev

This job will run only on the 'dev' branch and only if the 'test' stage succeeds.

To ssh into the 'stage' machine, we need to get ssh keys onto the runner.

Storing keys in repository is bad practice.

Thanks to GitLab 'Variables', we can pass the keys via environment variables, write them to files and then issue a fabric command to execute the required tasks on the 'stage' box.

First we need to generate an ssh key without a passphrase. For this purpose, use ssh-keygen.

The public key must be put in ~/.ssh/authorized_keys on the stage server.

Then we put the contents of the public and private keys into Variables.

After adding the variables with the keys, the Variables screen in the GitLab project looks similar to this:

deploy_stage_gitlab.sh looks like this:

#!/usr/bin/env bash
mkdir -p ~/.ssh/
# Quote the variables so the multi-line key contents are preserved
echo "$STAGE_PRIVATE_KEY" > ~/.ssh/id_rsa
echo "$STAGE_PUBLIC_KEY" > ~/.ssh/id_rsa.pub
# ssh refuses to use a private key with permissive permissions
chmod 600 ~/.ssh/id_rsa
fab upd_dev

Quick note: use fabric3 in a Python 3.5 environment!

The fabric function logs in to the remote server, git-pulls everything, runs migrations and uses supervisorctl to restart the process group for this project.
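The fabric task is project-specific, but the steps above can be sketched roughly like this (a minimal sketch assuming fabric3; the host, paths and the supervisor group name are hypothetical placeholders, and the real task name `upd_dev` comes from the deploy script above):

```python
# fabfile.py -- minimal sketch of the upd_dev task (fabric3 / fabric 1.x API).
# The host, project paths and supervisor group name are hypothetical.
from fabric.api import cd, env, prefix, run

env.hosts = ["deploy@stage.example.com"]  # hypothetical stage box


def upd_dev():
    """Pull the latest 'dev' code, migrate, and restart the app processes."""
    with cd("/srv/project"):  # hypothetical project root
        # Run everything inside the project's virtualenv
        with prefix("source /srv/project/venv/bin/activate"):
            run("git pull origin dev")
            run("pip install -r requirements.txt")
            run("python manage.py migrate --noinput")
            run("python manage.py collectstatic --noinput")
    # Restart the whole supervisor process group for this project
    run("supervisorctl restart project:")
```

With this in place, `fab upd_dev` on the runner performs the whole staging deployment in one command.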

See also: allowing a non-root user to use supervisorctl

Conclusion

With .gitlab-ci.yml we can change the pipeline so that it reflects changes in our code. That was harder to achieve with Bamboo and its build/deploy settings in a GUI. Of course, we could have let some bash script do all the tasks, and it would change from commit to commit, but in that case it is impossible to add/remove/change stages, etc.

Also, Bamboo's GUI build/deploy setup requires much more time to configure, and I didn't find a way to clone it from project to project, which is very easy to do with .gitlab-ci.yml.

From now on, we don't need a separate large server for Atlassian tools, and we don't spend weekends updating them. And the best part: we can focus on doing the real job, developing and delivering OUR apps, instead of wasting time on Atlassian's.

Happy developing!

If you have any ideas about GitLab CI, or ideas for improving the described workflow, post comments! I will be very happy to hear and discuss them.

Top comments (2)

Sava Chankov

We were running GitLab CI with docker for over a year. Every 10-20 builds we had to wrestle with the runner: either it was not able to build the docker image or something got stuck. In the end we got fed up and switched to GitHub + hosted CI. Also, GitLab's huge UI changes didn't make the whole experience consistent.

Kostja Appliku.com

We are not using Docker in production, so we do not build docker images during CI builds.
We build the docker image that is used to run the pipeline manually.

The first project we moved to GitLab + gitlab-runner has 131 pipelines now. Not a single delay or any other kind of problem with it.

For now, I am completely satisfied with this setup.
Only one thing: we'll be moving to self-hosted GitLab soon, because the cloud service (free) is not stable enough. Every time they deploy a new version, our work is interrupted. At least we were not in need of ASAP critical production deployments, or our clients/customers/visitors/users would have had to wait until GitLab got out of its 500/502/504 states.

Anyway, I am writing down notes about the GitLab usage experience, and I have some ideas about what to improve. In several months there might be enough notes to publish another article.