Brian Caffey

Using Cypress with Django and Vue for integration testing in GitLab CI

This article provides a high-level description of my attempts at using Cypress for integration testing in a Django + VueJS app using GitLab CI. Here is the GitLab repo that I will be referencing and copying code samples from below.

I have recently been working on CI/CD pipelines using GitLab CI for a project that uses Django REST Framework, Celery, Celery Beat and Django Channels for the backend with a separate static frontend site made with Quasar, a fantastic framework and component library for Vue. Here's an overview of the stages in my pipeline:

  • Documentation: Deploy a static VuePress documentation site to GitLab pages under my GitLab group.
  • Test: Linting and unit testing for python and javascript using flake8, pytest, eslint and jest.
  • Build: Build the static assets for the Vue frontend as well as the image for the backend application once they have passed all tests in the test stage. This uses a multi-stage Dockerfile discussed later on.
  • Integration: Run integration tests using the static files, the backend docker image, postgres and redis. This uses Cypress to run headlessly, but the videos of each test are recorded and stored in GitLab as job artifacts.
  • Release: Tag the image with the commit SHA and push it from GitLab's registry to the production registry (Elastic Container Registry or ECR)
  • Deploy to staging: Sync static files from the build stage to an S3 bucket that is served on a CloudFront distribution, and update the CloudFormation stack with the commit SHA to trigger ECS to make a rolling update that will use the new docker image in the different services that run in the ECS cluster (django, celery, beat and channels). Optionally run database migration tasks in ECS if there are changes in the migrations folders, and run collectstatic if there are changes in the static directories.
  • Deploy to production (not implemented yet): Same as the previous stage, but applied to the production CloudFormation stack.

First steps with Cypress

Cypress allows you to easily mock server calls with cy.server(). My first attempt at using Cypress mocked all backend calls and only tested the Vue app. This approach might be sufficient if you have a simple backend that is well tested.
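Here's a minimal sketch of what such a fully mocked spec might look like. The route, selectors and redirect are hypothetical, not from my repo; cy.server()/cy.route() is the Cypress 3.x API:

// cypress/integration/login.spec.js (illustrative sketch; selectors and routes are assumptions)
describe('login', () => {
  it('logs in against a mocked backend', () => {
    cy.server(); // enable response stubbing (Cypress 3.x API)
    cy.route('POST', '/api/login/', { token: 'fake-token' }); // stub the login call
    cy.visit('/login');
    cy.get('input[type=email]').type('user@example.com');
    cy.get('input[type=password]').type('password{enter}');
    cy.url().should('include', '/dashboard'); // assert against the frontend only
  });
});

Here's what the GitLab CI job for this approach looked like: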

.test e2e:
  image: cypress/base:10
  stage: test
  script:
    - cd frontend
    - npm install
    - apt-get update && apt-get install -y httping
    - npm run serve &
    - while ! httping -qc1 http://localhost:8080/login ; do sleep 1 ; done
    - $(npm bin)/cypress run

This makes use of the official Cypress base image, which includes all required dependencies. npm run serve & starts the development server in the background, and the httping loop waits for it to respond before the tests start.

Using docker-compose with docker-in-docker

One popular approach to integration testing uses docker-compose. Here's an example from testdriven.io that is used in a Flask/React app:

# run e2e tests
e2e() {
  docker-compose -f docker-compose-stage.yml up -d --build
  docker-compose -f docker-compose-stage.yml exec users python manage.py recreate_db
  ./node_modules/.bin/cypress run --config baseUrl=http://localhost --env REACT_APP_API_GATEWAY_URL=$REACT_APP_API_GATEWAY_URL,LOAD_BALANCER_DNS_NAME=$LOAD_BALANCER_DNS_NAME
  inspect $? e2e
  docker-compose -f docker-compose-$1.yml down
}

The approach here is to:

  1. Start services
  2. Seed databases
  3. Run cypress tests against the docker-compose stack

This approach allows us to separate each part of our application into its own container. It also allows us to easily run our integration tests locally using docker-compose.

I adapted this approach for my project. Here's how I put it together. First, here's the GitLab CI job definition:

e2e cypress tests with docker-compose:
  stage: integration
  image: docker:stable
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_DRIVER: overlay2
  services:
    - docker:dind
  before_script:
    - apk add --update py-pip
    - pip install docker-compose~=1.23.0
  script:
    - sh integration-tests.sh
  artifacts:
    paths:
      - cypress/videos/
      - tests/screenshots/
    expire_in: 7 days

There is a lot of setup in this job definition, but the script stage is where everything happens. Here is integration-tests.sh:

#!/bin/bash

set -e

echo "Starting services"
docker-compose -f docker-compose.ci.yml up -d --build

echo "Running tests"
docker-compose -f docker-compose.ci.yml -f cypress.yml up --exit-code-from cypress

echo "Tests passed. Stopping docker compose..."
docker-compose -f docker-compose.ci.yml -f cypress.yml down

--exit-code-from is a useful flag: it lets us run Cypress in a separate container defined in a separate docker-compose file, and it makes the docker-compose command exit with the exit code of the cypress container, which should be 0 if the tests pass. If Cypress fails, this script will exit with a non-zero exit code because of set -e.

Here's the cypress.yml file:

version: '3.7'
services:
  cypress:
    image: "cypress/included:3.4.0"
    container_name: cypress
    networks:
      - main
    depends_on:
      - nginx
    environment:
      - CYPRESS_baseUrl=http://nginx
    working_dir: /e2e
    volumes:
      - ./:/e2e

The cypress/included:3.4.0 image already has Cypress installed, and its default command runs Cypress, so we don't need to define command.

We use http://nginx as the baseUrl for Cypress because the nginx container serves our Vue application. nginx then reaches out to the backend container by proxying requests to http://backend.
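The CI nginx configuration isn't shown here, but the idea looks roughly like the sketch below. The paths, port and location blocks are assumptions, not the repo's actual file:

# sketch of nginx/ci/default.conf (illustrative; paths and ports are assumptions)
server {
    listen 80;

    # serve the built Vue app from the shared django-static volume
    location / {
        root /usr/src/app/static;
        try_files $uri $uri/ /index.html;
    }

    # proxy API calls to the Django container over the compose network
    location /api/ {
        proxy_pass http://backend:8000;
    }
}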

Here's the docker-compose.ci.yml file:

version: '3.7'
services:
  postgres:
    container_name: postgres
    image: postgres
    networks:
      - main
    volumes:
      - pg-data:/var/lib/postgresql/data

  backend: &backend
    container_name: backend
    build:
      context: ./backend
      dockerfile: scripts/prod/Dockerfile
    command: /start_ci.sh
    networks:
      - main
    volumes:
      - ./backend:/code
      - django-static:/code/static
    depends_on:
      - postgres
    environment:
      - SECRET_KEY='secret'
      - DEBUG=True
      - DJANGO_SETTINGS_MODULE=backend.settings.gitlab-ci

  asgiserver:
    <<: *backend
    container_name: asgiserver
    entrypoint: /start_asgi.sh
    volumes:
      - ./backend:/code

  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: nginx/ci/Dockerfile
    ports:
      - 80:80
    networks:
      - main
    volumes:
      - django-static:/usr/src/app/static
    depends_on:
      - backend

  redis:
    image: redis:alpine
    container_name: redis
    volumes:
      - redis-data:/data
    networks:
      - main

volumes:
  django-static:
  portainer-data:
  pg-data:
  redis-data:

networks:
  main:
    driver: bridge

We don't actually need to run asgiserver and backend as separate containers, but I wanted to test this way because it closely resembles the setup I plan to use in production. daphne, the server started in the asgiserver container, is capable of serving regular HTTP requests.
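For reference, start_asgi.sh boils down to something like the following. This is a sketch; the module path and port are assumptions, and the actual script lives in the repo:

#!/bin/bash
# sketch of start_asgi.sh (module path and port are assumptions)
# daphne serves both HTTP and WebSocket traffic from the ASGI application
daphne -b 0.0.0.0 -p 9000 backend.asgi:application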

GitLab Services

The docker-compose approach allowed me to run the tests locally by simply running ./integration-tests.sh. Everything passed locally, but the websocket test didn't pass in GitLab CI despite lots of debugging, manual waits in Cypress and other efforts. Although this approach might work for most cases, I was interested in finding another solution that would not use docker-in-docker (dind) or docker-compose.

GitLab has a services feature that allows you to define containers that run alongside the main container in a CI job. For example, a redis service can be reached at redis://redis:6379/0 from the main container, similar to how networking works in docker-compose. Here's the GitLab job I defined to mirror the docker-compose setup, but using services instead of docker-compose:

.e2e: &e2e
  image: cypress/base:8
  stage: integration
  variables:
    # variables passed as env vars to *all services*
    SECRET_KEY: 'secret'
    DEBUG: ''
    DJANGO_SETTINGS_MODULE: 'backend.settings.gitlab-ci'
    CELERY_TASK_ALWAYS_EAGER: 'True'
  services:
    - name: postgres
    - name: $CI_REGISTRY_IMAGE/backend:latest
      alias: backend
      command: ["/start_ci.sh"]
    - name: redis
    - name: $CI_REGISTRY_IMAGE/frontend:latest
      alias: frontend
  before_script:
    - npm install --save-dev cypress
    - $(npm bin)/cypress verify
  script:
    - $(npm bin)/cypress run --config baseUrl=http://frontend
  after_script:
    - echo "Cypress tests complete"
  artifacts:
    paths:
      - cypress/videos/
      - cypress/screenshots/
    expire_in: 7 days

This doesn't work. After lots of debugging and raising issues with Cypress and GitLab, I came across this merge request and found other users up against the same problem: services are not available to other services defined in a GitLab CI job. If that changes in a future release, something like this might work, but for now I need another way.

At this point I started to search for other projects that do Cypress testing on GitLab. Gitter, a company that GitLab purchased and whose webapp is open source, is a cool example. Here is the e2e CI job that inspired my next attempt at e2e cypress testing:

.test_e2e_job: &test_e2e_job
  <<: *test_job
  variables:
    <<: *test_variables
    ENABLE_FIXTURE_ENDPOINTS: 1
    DISABLE_GITHUB_API: 1
    NODE_ENV: test-docker
  script:
    # Cypress dependencies https://docs.cypress.io/guides/guides/continuous-integration.html#Dependencies
    - apt-get update -q -y
    - apt-get --yes install xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2
    # Create `output/assets/js/vue-ssr-server-bundle.json`
    - npm run task-js
    # Start the server and wait for it to come up
    - mkdir -p logs
    - npm start > logs/server-output.txt 2>&1 & node test/e2e/support/wait-for-server.js http://localhost:5000
    # Run the tests
    - npm run cypress -- run --env baseUrl=http://localhost:5000,apiBaseUrl=http://localhost:5000/api
  artifacts:
    when: always
    paths:
      - logs
      - test/e2e/videos
      - test/e2e/screenshots
      - cypress/logs
    expire_in: 1 day
  retry: 2

Here's the *test_job part:

.test_job: &test_job
  <<: *node_job
  variables:
    <<: *test_variables
  stage: build_unit_test
  services:
    - name: registry.gitlab.com/gitlab-org/gitter/webapp/mongo:latest
      alias: mongo
    - name: redis:3.0.3
      alias: redis
    - name: registry.gitlab.com/gitlab-org/gitter/webapp/elasticsearch:latest
      alias: elasticsearch
    - name: neo4j:2.3
      alias: neo4j
  script:
    - make ci-test

Let's also take a look at *node_job:

.node_job: &node_job
  image: registry.gitlab.com/gitlab-org/gitter/webapp
  before_script:
    - node --version
    - npm --version
    - npm config set prefer-offline true
    - npm config set cache /npm_cache
    - mv /app/node_modules ./node_modules
    - npm install
  artifacts:
    expire_in: 31d
    when: always
    paths:
      - /npm_cache/
      - npm_cache/

There's a lot going on in this CI job. If you haven't used YAML anchors before, the idea is that a key marked with &job can be referenced later as *job, and <<: *job merges that mapping's keys into the current one. See this article for more information on YAML anchors.
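Here's a minimal example of the pattern:

# a minimal YAML anchor/merge example
defaults: &defaults
  image: node:10
  retry: 2

test:
  <<: *defaults   # pulls in image and retry from `defaults`
  script:
    - npm test

With that in mind, let's merge the anchors in the Gitter job into one key for readability: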

.test_e2e_job: &test_e2e_job
  image: registry.gitlab.com/gitlab-org/gitter/webapp
  before_script:
    - node --version
    - npm --version
    - npm config set prefer-offline true
    - npm config set cache /npm_cache
    - mv /app/node_modules ./node_modules
    - npm install
  variables:
    <<: *test_variables
    ENABLE_FIXTURE_ENDPOINTS: 1
    DISABLE_GITHUB_API: 1
    NODE_ENV: test-docker
  services:
    - name: registry.gitlab.com/gitlab-org/gitter/webapp/mongo:latest
      alias: mongo
    - name: redis:3.0.3
      alias: redis
    - name: registry.gitlab.com/gitlab-org/gitter/webapp/elasticsearch:latest
      alias: elasticsearch
    - name: neo4j:2.3
      alias: neo4j
  script:
    # Cypress dependencies https://docs.cypress.io/guides/guides/continuous-integration.html#Dependencies
    - apt-get update -q -y
    - apt-get --yes install xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2
    # Create `output/assets/js/vue-ssr-server-bundle.json`
    - npm run task-js
    # Start the server and wait for it to come up
    - mkdir -p logs
    - npm start > logs/server-output.txt 2>&1 & node test/e2e/support/wait-for-server.js http://localhost:5000
    # Run the tests
    - npm run cypress -- run --env baseUrl=http://localhost:5000,apiBaseUrl=http://localhost:5000/api
  artifacts:
    when: always
    paths:
      - logs
      - test/e2e/videos
      - test/e2e/screenshots
      - cypress/logs
    expire_in: 1 day
  retry: 2

Here are some important points to mention about this job:

  • This job starts with a base image of the main webapp container (an express application).

  • Supporting services are defined in the services section: mongo, redis, elasticsearch and neo4j. But there is no communication between these services; there is only communication between the webapp container and the individual services.

  • Instead of starting from a Cypress image, Cypress and its dependencies are installed in the container in the script section. Cypress is installed in devDependencies in package.json.

  • The job is set to retry two times. Sometimes e2e tests can be flaky. I have definitely noticed this in my experience with Cypress.

Now let's take a look at the approach I adapted from this example. There are two main parts: the GitLab CI job and the multi-stage Dockerfile. I need to serve the backend Django application and the Vue frontend out of the same container, even though these services are separate in production. This is a perfect use case for a multi-stage Dockerfile. Here's an overview of the stages in my Dockerfile:

  1. Build the static assets
  2. Build the production backend docker image
  3. Starting FROM the production image, COPY the Vue application into the static folder and install Cypress dependencies.

Here's the Dockerfile:

# build stage that generates quasar assets
FROM node:10-alpine as build-stage
ENV HTTP_PROTOCOL http
ENV WS_PROTOCOL ws
ENV DOMAIN_NAME localhost:9000
WORKDIR /app/
COPY quasar/package.json /app/
RUN npm cache verify
RUN npm install -g @quasar/cli
RUN npm install --progress=false
COPY quasar /app/
RUN quasar build -m pwa

# this image is tagged and pushed to the production registry (such as ECR)
FROM python:3.7 as production
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN mkdir /code
WORKDIR /code
COPY backend/requirements/base.txt /code/requirements/
RUN python3 -m pip install --upgrade pip
RUN pip install -r requirements/base.txt
COPY backend/scripts/prod/start_prod.sh \
    backend/scripts/dev/start_ci.sh \
    backend/scripts/dev/start_asgi.sh \
    /
ADD backend /code/

# this stage is used for integration testing
FROM production as gitlab-ci
COPY --from=build-stage /app/dist/pwa/index.html /code/templates/
COPY --from=build-stage /app/dist/pwa /static
COPY cypress.json /code
RUN mkdir /code/cypress
COPY cypress/ /code/cypress/
# update package lists and install nodejs and npm
RUN apt-get -qq update && apt-get -y install nodejs npm
RUN node -v
RUN npm -v
# cypress dependencies
RUN apt-get -qq install -y xvfb \
  libgtk-3-dev \
  libnotify-dev \
  libgconf-2-4 \
  libnss3 \
  libxss1 \
  libasound2

Now let's look at the GitLab CI job:

e2e: &cypress
  stage: integration
  image: $CI_REGISTRY_IMAGE/backend:latest
  services:
    - postgres:latest
    - redis:latest
  variables:
    SECRET_KEY: 'secret'
    DEBUG: 'True'
    CELERY_TASK_ALWAYS_EAGER: 'True'
  before_script:
    - python backend/manage.py migrate
    - python backend/manage.py create_default_user
    - cp /static/index.html backend/templates/
    - /start_asgi.sh &
  script:
    - npm install cypress
    - cp cypress.json backend/
    - cp -r cypress/ backend/cypress
    - cd backend
    - $(npm bin)/cypress run
  artifacts:
    paths:
      - backend/cypress/videos/
      - backend/cypress/screenshots/
    expire_in: 7 days

This job starts from the backend:latest image created by the Dockerfile above. It references postgres and redis services. The before_script runs database migrations, seeds the database with a user, and copies index.html to the templates folder. Finally, it runs start_asgi.sh in the background. In the script section, we install Cypress and run the tests.
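create_default_user is a small custom management command for seeding. It isn't shown in this article, but it amounts to something like the following sketch; the username, password and module path are assumptions:

# backend/core/management/commands/create_default_user.py (sketch; names assumed)
from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Seed the database with a default user for e2e tests"

    def handle(self, *args, **options):
        User = get_user_model()
        if not User.objects.filter(username="user").exists():
            User.objects.create_user(username="user", password="password")
            self.stdout.write("Created default user")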

Instead of using different containers for the Django backend, celery and daphne (the Django Channels ASGI server), we can use daphne alone to serve both HTTP and WebSocket traffic, and we can set CELERY_TASK_ALWAYS_EAGER to True so that celery tasks run synchronously in our e2e tests. We can add the following to our urls.py to serve index.html and other static files, so the Vue application is served from our Django container:

# urls.py (excerpt; module paths for these imports are assumed)
from django.conf import settings
from django.conf.urls.static import static
from django.urls import include, path, re_path

from core.views import index_view

if settings.DEBUG:
    import debug_toolbar # noqa
    urlpatterns = urlpatterns + [
        path('', index_view, name='index'),
        path('admin/__debug__/', include(debug_toolbar.urls)),
        # catch all rule so that we can navigate to
        # routes in vue app other than "/"
        re_path(r'^(?!js)(?!css)(?!statics)(?!fonts)(?!service\-worker\.js)(?!manifest\.json)(?!precache).*', index_view, name='index') # noqa
    ] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)

We also set STATIC_ROOT to /, and disable CsrfViewMiddleware for simplicity. These settings can be found in gitlab-ci.py, sketched below.
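Here's a sketch of what the relevant parts of gitlab-ci.py might look like. It assumes the file inherits from a base settings module; the layout and names are assumptions, not the repo's actual file:

# backend/settings/gitlab-ci.py (sketch; base module and names assumed)
import os

from .base import *  # noqa

DEBUG = os.environ.get('DEBUG', '') == 'True'
STATIC_ROOT = '/'

# run celery tasks synchronously so no worker is needed during e2e tests
CELERY_TASK_ALWAYS_EAGER = os.environ.get('CELERY_TASK_ALWAYS_EAGER', '') == 'True'

# drop CSRF protection for simplicity in CI
MIDDLEWARE = [m for m in MIDDLEWARE if 'CsrfViewMiddleware' not in m]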

Here's the simple index_view that serves the index.html file on requests to /, or to any other path that doesn't match one of our static prefixes. This can be found in core/views.py:

from django.views.decorators.cache import never_cache
from django.views.generic import TemplateView

# Serve Vue Application via template for GitLab CI
index_view = never_cache(TemplateView.as_view(template_name='index.html'))

One feature of GitLab CI I really like is GitLab Runner. It is another open-source project that lets us run GitLab CI jobs locally, the same way they run when you push your code to gitlab.com and trigger a job on a public runner. This is really useful when you are debugging a CI job and don't want to keep pushing code to gitlab.com to run the pipeline.

In the last part of this article I want to describe how we can test the GitLab CI job locally using GitLab Runner.

There is really only one change we need to make to run this job locally. Let's define a new job that reuses the anchor from the existing job but overrides the image key:

# use this test with gitlab-runner locally
e2e-local:
  <<: *cypress
  image: localhost:5000/backend:latest

In my repo, this job is disabled by placing a period in front of the job name (.e2e-local), which makes it a hidden job. I don't ever want this job to run on gitlab.com, so I remove the period when running locally and add it back before pushing code to GitLab.

There are just a few steps needed to test locally: set up a local registry, build the image, tag it, and push it to the local registry. First, start a registry container (command taken from the Docker documentation):

docker run -d -p 5000:5000 --restart=always --name registry registry:2

To build the production image that we will use in the test, run the following command:

docker-compose -f compose/test.yml build backend

Then tag the image with the following command:

docker tag compose_backend:latest localhost:5000/backend:latest

Then push the tagged image to the local registry:

docker push localhost:5000/backend:latest

Finally, commit any changes you have made; GitLab Runner requires committed changes before running jobs. Then run the GitLab CI job with the following command:

gitlab-runner exec docker e2e-local

Conclusion

That's a quick tour of the CI/CD pipeline I'm working on, with a close look at the integration stage. I would be very interested to hear how others are doing integration testing in projects with a similar tech stack. If you have any suggestions for how I could improve my integration tests, I would love to hear your thoughts! Thanks for reading.

Top comments (4)

Aaron Mead

@briancaffey Thanks for all of this (even though I am late to the game here).

I'm curious about what your python command recreate_db does. Is that a custom command or a library? What I'm struggling to do is figure out how to handle spinning up the database and shutting it down so that I can seed it from Cypress.

Brian Caffey

Hi, it has been a while since I looked at this, but I think that recreate_db is from another repo I was referencing: github.com/testdrivenio/testdriven.... There is another line from my code docker-compose -f docker-compose.ci.yml up -d --build that starts the database and runs migrations.

I think I removed the start_ci.sh file that is referenced by docker-compose.ci.yml, but this is where the database migration command would have been called, as well as any other custom management command used to seed data. In this way, I'm seeding the database from docker-compose commands, not from Cypress. Did you want to seed the database from Cypress itself?

Aaron Mead

Ugh, I accidentally reloaded the page after having a more thoughtful response here, ha. But know that your references helped.

I ultimately used a shell script to drop & recreate db, then a custom Django command to seed it using Django test factories already being used by an existing Selenium test suite.

Brian Caffey

Also, I should mention that I have recently done lots of refactoring on this project and I am actively working on restructuring some of the Cypress-related code. If you are looking at this repo: gitlab.com/verbose-equals-true/dja..., it might be good to look at the files from a commit dated close to the time this article was published, and keep an eye on it for updates! Hopefully I'll be coming back to the Cypress tests soon.