Sahanon Phisetpakasit

Set up Docker for integration testing in GitHub Actions

Nowadays, Continuous Integration helps developers automate their builds and tests on remote repositories. Moreover, when we work as a team, CI ensures that every branch on the remote repository is verified by an automated build. So in my very first post, I would like to share my experience of setting up integration testing with Docker and GitHub Actions.

Overview

Suppose that we have four services; let's call them services A, B, C, and D. Each service has its own database and dependencies on other services.

Service Diagram
To make sure we share the same picture of what we are going to do: we are going to test service A, which depends on services B, C, and D, on the GitHub repository. This is much easier in a local environment; in a remote environment there are a few extra things to work on, but trust me, you will get used to it after you have failed four or five times.

Setup Github Actions

Let's begin by setting up a GitHub Actions workflow. This GitHub feature lets you build and test your code automatically. We can either use a default configuration generated by GitHub Actions or set up our own workflow.

Configuration Settings

Once we have made our choice, we create the configuration. It is a YAML file placed in the .github/workflows directory. In this example we will use yarn as the package manager.



#Sample yaml configuration
name: "CI"
on:
  pull_request:
  push:
    branches:
      - master
      - "releases/*"

jobs:
  # Integration Test
  tests:
    name: "Integration Test"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: yarn
      - run: yarn test



Create Docker compose for building the service container

Assume that we have already created a Dockerfile and an image for building the container. One of the main problems that made my builds fail many times is the network. If our container is standalone and does not integrate with other services, it works completely fine, but in this case, which involves several services under test, the services cannot communicate with each other without a shared network.



# sample docker-compose.yml for service A
version: '3'
services:
  # database for service A
  service-a-db:
    image: mongo
    networks:
      - testing-network

  service-a:
    image: <your-docker-repo-name>/service-a
    restart: always
    depends_on:
      - service-a-db
    ports:
      - 8888:8888
    networks:
      - testing-network
    environment:
      # your environment variables

# the network can have any name, but make sure every service uses the same one
networks:
  testing-network:



From this sample configuration, service A has one database service called service-a-db that joins the same network as service A, so the two can reach each other. The sample below shows the configuration of every service in the docker-compose.yml file.



# docker-compose.yml
version: '3'
services:
  # database for service A
  service-a-db:
    image: mongo
    networks:
      - testing-network

  service-a:
    image: <your-docker-repo-name>/service-a
    restart: always
    depends_on:
      - service-a-db
    ports:
      - 8888:8888
    networks:
      - testing-network
    environment:
      - SERVICE_D_HOST=localhost
      - SERVICE_D_PORT=8080
      - DB_HOST=mongo
      - DB_PORT=27017

  # database for service B
  service-b-db:
    image: mongo
    networks:
      - testing-network

  service-b:
    image: <your-docker-repo-name>/service-b
    restart: always
    depends_on:
      - service-b-db
    ports:
      - 45000:45000
    networks:
      - testing-network
    environment:
      - DB_HOST=mongo
      - DB_PORT=27017

  # database for service C
  service-c-db:
    image: mongo
    networks:
      - testing-network

  service-c:
    image: <your-docker-repo-name>/service-c
    restart: always
    depends_on:
      - service-c-db
    ports:
      - 4000:4000
    networks:
      - testing-network
    environment:
      - SERVICE_B_HOST=localhost
      - SERVICE_B_PORT=45000
      - DB_HOST=mongo
      - DB_PORT=27017

  # database for service D
  service-d-db:
    image: mongo
    networks:
      - testing-network

  service-d:
    image: <your-docker-repo-name>/service-d
    restart: always
    depends_on:
      - service-d-db
    ports:
      - 8080:8080
    networks:
      - testing-network
    environment:
      - SERVICE_B_HOST=localhost
      - SERVICE_B_PORT=45000
      - SERVICE_C_HOST=localhost
      - SERVICE_C_PORT=4000
      - DB_HOST=mongo
      - DB_PORT=27017

networks:
  testing-network:



Unfortunately, the network setup alone is not enough for the services to communicate. Normally we would use localhost as the default hostname for a service, but localhost only works in a local environment. So we need to change the hostname to the service name; when everything runs inside the Docker network, each service can then resolve the others by name. For example, in the diagram, service A depends on service D, so we should change SERVICE_D_HOST from localhost to service-d, the name of the service that service A integrates with. Service A also has a database service called service-a-db, so we should change DB_HOST to that service name as well. After we change all the hostnames and database hosts, the final outcome looks like this.



# docker-compose.yml
version: '3'
services:
  # database for service A
  service-a-db:
    image: mongo
    networks:
      - testing-network

  service-a:
    image: <your-docker-repo-name>/service-a
    restart: always
    depends_on:
      - service-a-db
    ports:
      - 8888:8888
    networks:
      - testing-network
    environment:
      - SERVICE_D_HOST=service-d
      - SERVICE_D_PORT=8080
      - DB_HOST=service-a-db
      - DB_PORT=27017

  # database for service B
  service-b-db:
    image: mongo
    networks:
      - testing-network

  service-b:
    image: <your-docker-repo-name>/service-b
    restart: always
    depends_on:
      - service-b-db
    ports:
      - 45000:45000
    networks:
      - testing-network
    environment:
      - DB_HOST=service-b-db
      - DB_PORT=27017

  # database for service C
  service-c-db:
    image: mongo
    networks:
      - testing-network

  service-c:
    image: <your-docker-repo-name>/service-c
    restart: always
    depends_on:
      - service-c-db
    ports:
      - 4000:4000
    networks:
      - testing-network
    environment:
      - SERVICE_B_HOST=service-b
      - SERVICE_B_PORT=45000
      - DB_HOST=service-c-db
      - DB_PORT=27017

  # database for service D
  service-d-db:
    image: mongo
    networks:
      - testing-network

  service-d:
    image: <your-docker-repo-name>/service-d
    restart: always
    depends_on:
      - service-d-db
    ports:
      - 8080:8080
    networks:
      - testing-network
    environment:
      - SERVICE_B_HOST=service-b
      - SERVICE_B_PORT=45000
      - SERVICE_C_HOST=service-c
      - SERVICE_C_PORT=4000
      - DB_HOST=service-d-db
      - DB_PORT=27017

networks:
  testing-network:


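Inside the containers, each service can then assemble the URLs of its dependencies from these variables. A minimal sketch of what a service's startup script might do (the variable names follow the compose file above; the fallback defaults are assumptions for local runs):

```shell
# Build service D's base URL from the environment injected by docker-compose.
# If the variables are missing (e.g. outside Docker), fall back to defaults.
SERVICE_D_HOST="${SERVICE_D_HOST:-service-d}"
SERVICE_D_PORT="${SERVICE_D_PORT:-8080}"
SERVICE_D_URL="http://${SERVICE_D_HOST}:${SERVICE_D_PORT}"
echo "$SERVICE_D_URL"
```

Because only the environment changes between local and CI runs, the same image works in both places.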

Add Docker to GitHub Actions

Now for the final part: in the workflow we bring up the Docker Compose file that contains all the services we need, including the service under test.



name: "CI"
on:
  pull_request:
  push:
    branches:
      - master
      - "releases/*"

jobs:
  tests:
    name: "Integration testing"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: "Set up environment"
        run: docker compose -f docker-compose.yml up -d --wait
      - name: "Test server"
        run: docker exec service-a-1 yarn test



In this configuration we use docker compose up -d to start the containers in the background; -d keeps the terminal free of the stream of log messages from the build. The --wait flag then makes Compose block until every container is up (or healthy, if a healthcheck is defined), so the test step only starts once the environment is ready. A cruder alternative is to sleep for a fixed time, say 30 seconds, but that is inefficient because startup may be faster or slower than the guess. If you cannot use --wait, the better practice is a loop that polls the server and exits once it responds, which means the environment has finished starting.
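The polling approach can be sketched as a small shell function that retries a readiness command until it succeeds or a timeout expires. In CI you would pass something like `curl -fsS http://localhost:8888/health` (a hypothetical health endpoint) as the command:

```shell
# Retry a command until it succeeds or `timeout` seconds have passed.
# Usage: wait_for <timeout-seconds> <command> [args...]
wait_for() {
  local timeout="$1"; shift
  local start elapsed
  start=$(date +%s)
  until "$@"; do
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example with a command that succeeds immediately:
wait_for 10 true && echo "ready"
```

Dropping this into a workflow step after `docker compose up -d` gives the same effect as `--wait` for services without a healthcheck.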

After the test environment is ready, we can start the test. In the configuration above, yarn test is the command that tests a service, so we run docker exec service-a-1 yarn test to execute it inside the container under test, which in this case is service-a (Compose appends -1 to the container name by default, so make sure you include it after your container name; also note that the interactive -it flags would fail on a CI runner, which has no TTY).
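One caveat on the name: Compose v2 actually names containers `<project>-<service>-<index>`, where the project defaults to the working directory name (on a GitHub runner, the checked-out repository directory). A sketch, with "my-repo" as a hypothetical project name:

```shell
# Compose v2 container naming: <project>-<service>-<index>.
# "my-repo" stands in for the default project (the working directory name).
project="my-repo"
container="${project}-service-a-1"
echo "$container"
# A name-independent alternative is to let Compose resolve the service:
#   docker compose exec -T service-a yarn test
# (-T disables TTY allocation, which CI runners do not provide)
```

Using `docker compose exec` avoids having to predict the generated container name at all.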

Conclusion

We can see that the key configuration points for making the services communicate with each other are:

  • network: make sure every service uses the same network
  • host: localhost won't work in a remote environment, so make sure to use the service name instead

With the power of GitHub Actions and Docker, we can create the test environment on our remote repository, so we don't need to deploy the other services just to use them in testing.

Hope you like this content. I'm open to feedback about this post, so don't hesitate to leave a comment or suggestion. Thanks! 😄

Top comments (4)

Robin van der Knaap

Instead of waiting for 30 seconds, you can add the --wait flag to the docker compose command:

docker compose -f docker-compose.yml up -d --wait

Now the containers will still run in the background, but Compose will wait until all the containers are running/healthy.

Sahanon Phisetpakasit

Thanks! This is very helpful.

Sajan Kumar

How will the GitHub pipeline detect that test cases are failing and that it should stop?

Sahanon Phisetpakasit

Honestly, I'm still new to GitHub pipelines 😅. But my hypothesis, and some of my experience, suggests that it should automatically stop when the test cases fail, which means our tests did not succeed.