Dmitry Salahutdinov for Amplifr.com

Dockerize the multi-services application for local development

Although many complex web applications run containerized in production nowadays, we keep developing them the 'old-school' way: installing PostgreSQL, Redis, Ruby and other components directly on the local development machine.

It gets harder to maintain the development process, especially when the system becomes heterogeneous and grows into a large number of services, each depending on different versions of the underlying components.

In this article, I am going to review containerizing local development using Amplifr, the project I work on, as an example. With the help of docker-compose and Docker networks, it is easy and efficient.

As all the infrastructure is already containerized and managed with Kubernetes in production, we will cover setting up local development only, following one principle: the convenience of the development process.

Benefits of the local containerization

  • No need to install all the components, such as database engines and language interpreters, on the local machine. It keeps the local machine clean.
  • Natural support for different environments, e.g. running services with different versions of Ruby or PostgreSQL on the same machine.

Project overview

While Amplifr's backend runs on Rails, the project also has a complex frontend served by a standalone Node.js server, the Logux WebSocket server, and other helper services written in Node.js, Ruby, and Golang.

The following picture shows the simplified architecture of the project:

Overall Amplifr's services map

I am going to quickly review some components of the overall system.

Backend server

The backend is a classic Rails application that implements all the business logic and performs many background jobs with Sidekiq.

Frontend server

The frontend is the only public HTTP entry point for the overall application. It serves the frontend assets and proxies other requests to the Rails backend.
The backend also calls back to the frontend server to fetch some shared data, such as the browsers.json file needed for proper HTML rendering.

Frontend-backend integration

Logux server

Logux is the server that exposes the WebSocket port and holds bidirectional connections with the clients' browsers. To perform the business logic, it integrates with the backend over HTTP in both directions: this lets us keep all the business logic in the Rails backend and send notifications back from the backend by hitting Logux over HTTP.

Logux-backend integration

"Link shortener"

The link shortener is a dedicated web service written in Golang. It shortens links, expands them, and manages the overall statistics about link expansions.
Link shortener server integration with backend

"Preview" service

The preview is a public service used by client browsers to render the OpenGraph representation of any link. It has a public HTTP endpoint only.

Other components

Shortener is a standalone service for shortening URLs and keeping analytics data about link expansions. It is written in Golang. It has an external public endpoint for expanding shortened links and an internal endpoint for shortening links while publishing social content from the backend's background jobs.

There are also some other internal services, such as the Telegram and Facebook bots, which integrate with the backend only.

Component dependencies

Most of the components are complex web services themselves, depending on underlying components such as Postgres, Redis, and other low-level system services.
Internals of the backend component and how we are going to dockerize it

Containerization

💡We will containerize each service separately with Docker Compose. It is a tool for defining and running multi-container Docker applications, making it easy to start all the services together with only one command:

docker-compose up

💡To make the services integrate, we will use Docker networks, which allow docker containers to communicate with each other. For simplicity, we will use only one internal Docker network for all the components. A more meticulous reader could set up an individual network for every service's dependencies and for every connectivity group, as sketched below.
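As a minimal sketch (assuming the backend's compose project is named amplifr, so its internal network is visible to other projects as amplifr_internal), one compose file declares the network and the others join it as an external one:

# compose file of the project that owns the network
services:
  server:
    networks:
      default:
      internal:           # on the Docker side this becomes amplifr_internal (project name prefix)
networks:
  internal:

# compose file of any other service that joins the existing network
services:
  server:
    networks:
      default:
      amplifr_internal:
networks:
  amplifr_internal:
    external: true        # do not create it, just attach to the existing network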

Dockerize Ruby Backend

Here we have the standard stack: Postgres, Redis, Rails web-server and the Sidekiq background. For all of them, we'll define the services in docker-compose.yaml.

Here are the key points:

  • for Postgres and Redis, we will define persistent volumes to keep the data between runs
  • we are not going to copy the Ruby source code into the container; instead, we will mount the Rails application source code to the /app folder
  • we will also define persistent storage for the bundle and other artifacts to speed up subsequent starts
  • we will define the amplifr_internal network and add the interacting containers to that network
  • the application should be ready to be configured with environment variables, which we are going to set in the docker-compose file
  • we will define the base application service in the YAML file and then use YAML anchors and aliases to avoid repeating ourselves.

❗Keep in mind that this configuration differs from the way a docker image is built for production, where all the source code and all the dependency bundles are copied inside the image so that it is self-sufficient and has no external dependencies!

Here is the full gist with all the config, but let me highlight the main points:

Describe the base service to inherit from

services:
  app: &app
    build:
      context: .
      dockerfile: Dockerfile.dev
      args:
        PG_VERSION: '9.6'
    image: amplifr-dev:0.1.0
    volumes:
      - .:/app:cached
      - bundle:/bundle
    environment:
      # environment settings
      - BUNDLE_PATH=/bundle
      - BUNDLE_CONFIG=/app/.bundle/config
      - RAILS_ENV=${RAILS_ENV:-development}

      - DATABASE_URL=postgresql://postgres@postgres/amplifr_${RAILS_ENV}
      - REDIS_URL=redis://redis:6379/

      # service integrations
      - FRONTEND_URL=https://frontend-server:3001/
      - LOGUX_URL=http://logux-server:31338
    depends_on:
      - postgres
      - redis
    tmpfs:
      - /tmp

The base service's container will be built from Dockerfile.dev with a build argument: the Postgres version. All other Ruby-based services will inherit from the base. Here is the service inheritance diagram:
Service inheritance

We also map the current folder to the container's /app directory and mount a docker volume for the bundle. This avoids reinstalling the dependencies on every run.

We also define two groups of environment variables:
1) system variables, such as BUNDLE_PATH, REDIS_URL, and DATABASE_URL.
2) internal URLs of the dependent services for integration:
FRONTEND_URL - the internal endpoint of the frontend server used to fetch the supported browsers list.
LOGUX_URL - the internal Logux HTTP endpoint for sending actions from the Rails app to Logux.

Describe the 'runner'

The runner service is for running maintenance commands, such as rake tasks or generators, in the Rails environment. It is a console-oriented service, so we set the stdin_open and tty options, which correspond to the -i and -t options of docker run, and start a bash shell in the container:

services:
  runner:
    <<: *app
    stdin_open: true
    tty: true
    command: /bin/bash

We can use it in this way:

docker-compose run runner bundle exec rake db:create

# or just start the container and run any command inside it
docker-compose run runner

Compose the server

Define the web server. The critical point here is that we define an additional docker network, internal, and add the web server to it, giving the container the backend-server alias in this network. This way the web container will be reachable by other services under the backend-server hostname.

services:
  server:
    <<: *app
    command: bundle exec thin start
    networks:
      default:
      internal:
        aliases:
          - backend-server
    ports:
      - '3000:3000'

networks:
  internal:

Compose the Sidekiq

Easy: it just inherits the base service and runs sidekiq:

services:
  sidekiq:
    <<: *app
    command: sidekiq

Compose Redis and Postgres

  postgres:
    image: postgres:9.6
    volumes:
      - postgres:/var/lib/postgresql/data
    ports:
      - 5432

  redis:
    image: redis:3.2-alpine
    volumes:
      - redis:/data
    ports:
      - 6379

volumes:
  postgres:
  redis:

The main point here is that we mount volumes for the container paths where the data is stored, so the data persists between runs.

Dockerfile

We will not dive deep into writing the Dockerfile; you can find it here. Just notice that it inherits from the standard ruby image and installs required components such as the PostgreSQL client and some other binaries needed to build the bundle.

Usage

The usage is quite easy:

docker-compose run runner ./bin/setup # runs the bin/setup in docker
docker-compose run runner bundle exec rake db:drop # runs rake task
docker-compose up server # get the web-server running
docker-compose up -d # runs all the services (web, sidekiq)
docker-compose run runner rails db # runs the postgres client

Docker Compose also allows us to specify service dependencies and bring a dependent service up when it is needed by the service being run, e.g. Sidekiq requires the Redis and Postgres services to work correctly, which is why we define them in the depends_on section of the service.
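To make this concrete, here is a minimal sketch of how the pieces from the compose file above fit together: depends_on lives in the base service and is inherited by sidekiq through the YAML alias, so docker-compose up sidekiq also brings up postgres and redis:

services:
  app: &app
    # ... build, volumes, environment as above
    depends_on:
      - postgres
      - redis

  sidekiq:
    <<: *app           # inherits depends_on from the base service
    command: sidekiq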

And here is the service dependency diagram, showing how the services run:
Service startup sequence in case of dependencies

Summary

We've got the Rails application running locally for development. It works the same way as a local setup: the database persists, rake tasks run. Commands like rails db and rails c also work well within a container.

The main advantage is that we can change the Postgres version or the Ruby version easily by changing one line, rebuilding the image, and running with the new environment.
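For example, bumping the Postgres version is a matter of changing the build argument (and the postgres service image, to keep the server in sync), then rebuilding; a sketch, assuming Dockerfile.dev uses PG_VERSION to choose the client packages:

services:
  app: &app
    build:
      context: .
      dockerfile: Dockerfile.dev
      args:
        PG_VERSION: '11'        # was '9.6'
    image: amplifr-dev:0.2.0    # bump the tag to keep the old image around

  postgres:
    image: postgres:11          # keep the server version in sync

After that, docker-compose build app rebuilds the image and docker-compose up runs everything with the new environment.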

Dockerize Node.js (frontend server)

The key points here are:

  • use the base official node docker image without any tuning
  • add the server service to the amplifr_internal network
  • define the BACKEND_URL environment variable pointing to the internal docker address of the backend service
  • mount the node_modules volume for the Node.js modules install path

version: '3.4'

services:
  app: &app
    image: node:11
    working_dir: /app
    environment:
      - NODE_ENV=development
      - BACKEND_URL=http://backend-server:3000
    volumes:
      - .:/app:cached
      - node_modules:/app/node_modules

  runner:
    <<: *app
    command: /bin/bash
    stdin_open: true
    tty: true

  server:
    <<: *app
    command: bash -c "yarn cache clean && yarn install && yarn start"
    networks:
      default:
      amplifr_internal:
        aliases:
          - frontend-server
    ports:
      - "3001:3001"

networks:
  amplifr_internal:
    external: true

volumes:
  node_modules:

Usage

The frontend server is now easy to start by running:

docker-compose up server

But it needs the backend to be started first, because the frontend service refers to the internal network, which is brought up when the backend starts.

Dockerize the Logux server

In our case, the Logux server does not have any database dependencies and can be configured the same way as the frontend. The only difference is that the Logux service has its own environment variables to set up the integration with the other services.
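A minimal sketch of what the Logux compose file could look like, by analogy with the frontend one (the exact environment variable names are an assumption, not necessarily what Amplifr uses):

services:
  server:
    image: node:11
    working_dir: /app
    command: bash -c "yarn install && yarn start"
    environment:
      - NODE_ENV=development
      - BACKEND_URL=http://backend-server:3000   # where Logux forwards actions (assumed variable name)
    volumes:
      - .:/app:cached
      - node_modules:/app/node_modules
    networks:
      default:
      amplifr_internal:
        aliases:
          - logux-server        # matches LOGUX_URL=http://logux-server:31338 in the backend config
    ports:
      - "31338:31338"

networks:
  amplifr_internal:
    external: true

volumes:
  node_modules: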

docker-compose up server # runs the server

Dockerizing Golang (link shortener web service)

The main idea is also the same:

  • use a docker image set up with Golang, mount the application source code into it, and run the code with the go run interpreter
  • expose the service through docker networks to integrate it with the Ruby backend

Our web service has Postgres and Redis dependencies. Let's start with the Dockerfile; the overall config sample can be found here:

FROM golang:1.11

ARG MIGRATE_VERSION=4.0.2

# install postgres client for local development
RUN apt-get update && apt-get install -y postgresql-client

# install dep tool to ensuring dependencies
RUN go get -u github.com/golang/dep/cmd/dep

# install migrate cli for running database migrations
ADD https://github.com/golang-migrate/migrate/releases/download/v${MIGRATE_VERSION}/migrate.linux-amd64.tar.gz /tmp
RUN tar -xzf /tmp/migrate.linux-amd64.tar.gz -C /usr/local/bin && mv /usr/local/bin/migrate.linux-amd64 /usr/local/bin/migrate

ENV APP ${GOPATH}/src/github.com/evilmartians/ampgs
WORKDIR ${APP}

Here are a couple of interesting details:

  • we install the postgres client into the local development image. It simplifies access to the database whenever you need it: docker-compose run runner "psql $DATABASE_URL". We do the same in the Ruby backend dockerization
  • we install the dep tool to install and ensure all the dependencies: docker-compose run runner dep ensure
  • we install the migrate tool into the image to allow running database migrations right from the docker container: docker-compose run runner "migrate -source file://migrations/ -database ${DATABASE_URL} up"

‼️ Most of those tools are not needed in the production docker image, because it will contain only the compiled binary.

We will dockerize the Golang service the same way as the Ruby one:

  • extract the base app service and a dedicated runner service for running maintenance tasks
  • add the Postgres and Redis dependencies with persistent data volumes

Here are the significant parts of the docker-compose.yml file:

services:
  # base service definition
  app: &app
    image: ampgs:0.3.1-development
    build:
      context: .
      dockerfile: docker/development/Dockerfile
    environment:
      REDIS_URL: redis://redis:6379/6
      DATABASE_URL: postgres://postgres:postgres@postgres:5432/ampgs
    volumes:
      - .:/go/src/github.com/evilmartians/ampgs
    depends_on:
      - redis
      - postgres

  runner:
    <<: *app

  web:
    <<: *app
    command: "go run ampgs.go"
    ports:
      - '8000:8000'
    networks:
      default:
      amplifr_internal:
        aliases:
          - ampgs-server

Wrap up

Docker-compose is a powerful tool that simplifies managing complex sets of services.
Let me review the main principles of local development dockerization with docker-compose:

  • mount the source code into the container as a folder instead of rebuilding the docker image with a copy of the source code. It saves a lot of time on every local restart
  • use docker networks to wire up the communication between services. It lets you test all the services together while keeping their environments separate
  • services learn about each other through environment variables provided to their containers by docker-compose

That's it. Thanks for reading!

Top comments (14)

Alexander Rykhlitskiy

One more question if you don't mind. As far as I see from "- .:/app:cached", you use mac. Do your coworkers use Linux?
I'm asking because running "run" on Linux without "--user" option creates files with "root" owner. And running without this option raises error because of "- BUNDLE_PATH=/bundle". It's not allowed to write to root folder for non root users.
Do you solve it somehow? Or your entire team uses mac?

Thanks

Dmitry Salahutdinov

For now, I do not have such problems, because of mac.
I think you could change BUNDLE_PATH to something like '/app/bundle' so it is not a root directory?

Do you also consider to run the docker as non-root user for linux?

Alexander Rykhlitskiy • Edited

like '/app/bundle'

Yes, I do it this way now, but annoying bundle folder (though empty) gets created in app directory on host. And still no luck because bundler says / is not writable.

Do you also consider to run the docker as non-root user for linux?

You mean docker command? I run it as non-root, but there's still this issue on Linux github.com/docker/compose/issues/1...

My last set of questions :)

  1. Why do you need to set BUNDLE_PATH both in Dockerfile.dev and docker-compose.yml?
  2. When do you run bundle install? I cannot find it here gist.github.com/dsalahutdinov/2d89...

Thanks a lot for your answers!

Dmitry Salahutdinov

Thanks for the questions, they really make sense!

1) seems setting it in docker-compose is redundant
2) most suitable place, I think, is to leave the bundle install at bin/setup where it usually is, like here github.com/thepracticaldev/dev.to/...

Alexander Rykhlitskiy

Thanks for the great article! Just one small note. You mentioned an example of running a rake task

docker-compose up runner bundle exec rake db:create

Shouldn't it be "run" instead of "up"?

Dmitry Salahutdinov

Yep, Alexander, this is a good point! Fixed

Kamal

This is interesting but I find it hard to follow, if someday you write a simpler docker tutorial, I'll be so happy :D

Jadran Mestrovic

Docker network architecture covers all network scenarios required for successful communication at the local, remote or cluster level.

Chamnap Chhorn • Edited

Nice article!

I'm curious about puma/sidekiq. Let's say I want to spawn them few processes, how could I do it?

Dmitry Salahutdinov

Hi!
There is no problem, just run puma -w 2 or sidekiq -c 2 or whatever you want
But in most cases 1 worker is enough for local development

Vic Seedoubleyew

Thanks a lot for this article, it was very interesting!

I think it would benefit from having an improved English, it would make it a lot easier to read.

Thanks again though!

Dmitry Salahutdinov

Thanks, I'll try to make it better :)

Roman Mikhailov

Great article! How do you debug and create code breakpoints at runtime? Maybe you could devote a section to this topic?

Alexander Rykhlitskiy • Edited

There are two options using binding.pry:

1) Attach to running server container after hitting pry with

docker attach $(docker-compose ps | grep app_1 | awk '{print $1}')

2) Run server with this command instead of docker-compose up

docker-compose run --service-ports app /bin/sh -c "rm -f tmp/pids/server.pid && rails s -b 0.0.0.0"