Building Docker Containers for our Rails Apps

This post originally appeared on the Scout blog.

In a recent post, we talked about the 8 things you know about Docker containers, and what you should know about them. Hopefully that cleared up any confusion you might have had about the Docker ecosystem. Perhaps all that talk got you thinking about trying it out on one of your own applications? Well, in this post we’d like to show you how easy it is to take an existing Ruby on Rails application and run it inside a container. So, let’s assume you have an existing Rails project with a PostgreSQL database, and walk through the steps it takes to run it in containers instead. It’s a lot easier than you probably think!

Creating the Dockerfile

The first thing we need to do to get our application running in a container is to define the custom image that we will run as a container, and we do this in a Dockerfile. This Dockerfile is essentially a set of instructions Docker follows when it builds our container image. The idea is that this file is all that is required to produce an identical container on any system, so we can add it to source control and everybody on the team can use it. Let’s create the following file, called "Dockerfile", and place it in our project’s root directory:

# Start from the official ruby image, then update and install JS & DB
FROM ruby:2.6.2
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client

# Create a directory for the application and use it
RUN mkdir /myapp
WORKDIR /myapp

# Copy the Gemfile and lock file first so that bundle install can run
# (the full app, including these files, is copied in afterwards)
COPY Gemfile /myapp/Gemfile
COPY Gemfile.lock /myapp/Gemfile.lock

# Install gem dependencies
RUN bundle install
COPY . /myapp

# This script runs every time the container is created, necessary for rails
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 3000

# Start rails
CMD ["rails", "server", "-b", "0.0.0.0"]

Let’s take a step-by-step look at this file to understand exactly what we are asking Docker to do. With the ‘FROM’ statement, we start from an official Ruby image (hosted on Docker Hub) as the base of our new image. We then call ‘RUN’ to update the package lists and install a JavaScript runtime and a PostgreSQL client inside the new image. With the next ‘RUN’ command, we create a directory inside the image called myapp (note that you should change references of "myapp" to your application’s directory name), and ‘WORKDIR’ sets this as the image’s working directory. The next step is to use ‘COPY’ to get our Gemfile and Gemfile.lock files from our project into the image. These files need to be present so that ‘RUN bundle install’ can install all the gems into the image. We then use ‘COPY’ to copy our entire project into the image. The next four lines are specific to Rails projects and allow them to run correctly in containers; don’t worry too much if you don’t understand this part, but we will need to add the file shown below to our project’s directory and call it entrypoint.sh for it to work. The final line of the Dockerfile, ‘CMD’, will kick off the Rails server when the image is run as a container.

This is the entrypoint.sh file that we need to create and add to our project:

#!/bin/bash
set -e

# Remove a stale server.pid left over from a previous run; if it is present,
# Rails will refuse to boot inside the container
rm -f /myapp/tmp/pids/server.pid

# Then hand off to the container's main process (the CMD from the Dockerfile)
exec "$@"

So, using this Dockerfile and the entrypoint.sh script, we can build our image with a single command. But you might have noticed that we haven’t specified our PostgreSQL database details yet. Database engines usually run in their own containers, separate from your web application. The good news is that we don’t have to define a custom image with a Dockerfile for the database container; we can just use the standard PostgreSQL image from Docker Hub as it is. But it does mean that we will have two separate containers that need to communicate with each other. How do we do that? That’s where Docker Compose comes in.
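
Before we get to Compose, it’s worth seeing what that single build command looks like with plain Docker (the myapp tag below is just an example name, not something the rest of the setup depends on):

$ docker build -t myapp .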

Creating the docker-compose.yml file

Docker Compose is a tool that allows us to connect multiple containers together into a multi-container environment that we can think of as a single service. In our situation we have a database engine container and a Rails environment container, so we can use Docker Compose to combine these into a multi-container environment which we can conceptually view as our complete application. To use Docker Compose, we need to create a docker-compose.yml file, in addition to the Dockerfile we already have, and place it in our project’s directory. This file ties together the Rails container we previously defined in the Dockerfile (let’s call that "web") and another container for the database, which we will call "db":

version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db

As you can see, for the db service we just use the ‘postgres’ image from Docker Hub and mount a local directory into the container as the database’s data directory. For the web service, however, we build the image defined in our Dockerfile, run a command that clears any stale server.pid and starts the Rails server, and mount our application code into the container. Note that if you are using a system that enforces SELinux (such as Red Hat, Fedora or CentOS), you will need to append a :z flag to the end of your volume paths, as shown in the snippet below.
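
For example, on an SELinux system the two volume sections above would look like this:

  db:
    volumes:
      - ./tmp/db:/var/lib/postgresql/data:z
  web:
    volumes:
      - .:/myapp:z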

Putting it all together

So now that we have a Dockerfile, docker-compose.yml and entrypoint.sh script in our project’s directory, there are just a few more steps we need to do before we build our image and run the application as a container.

First of all, it is a good idea to clear out the contents of our Gemfile.lock file before we proceed, so that Bundler regenerates it when the gems are installed inside the container:

$ rm Gemfile.lock
$ touch Gemfile.lock

Next, we need to update the database settings, because the credentials for the PostgreSQL container differ from the credentials you would use with a local install. The main difference is that we specify our ‘db’ container as the host. Make these changes to the relevant part of your config/database.yml file:

default: &default
  adapter: postgresql
  encoding: unicode
  host: db
  username: postgres
  password:
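
One caveat: depending on which version of the official postgres image you pull, it may refuse to start unless a password has been configured. If you hit that, one option for local development is to tell the image to trust all connections by adding an environment variable to the db service in docker-compose.yml (alternatively, set POSTGRES_PASSWORD there and mirror it in database.yml):

  db:
    image: postgres
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - ./tmp/db:/var/lib/postgresql/data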

The next step is to build the custom image we defined in the Dockerfile. We can do that like this:

$ docker-compose build
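
Bear in mind that Compose won’t rebuild this image automatically; if you later change the Gemfile or the Dockerfile, you can rebuild just the web service with:

$ docker-compose build web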

Now that we have an image for our web application, we need to prepare our database (inside its container).

$ docker-compose run web rake db:create
$ docker-compose run web rake db:migrate
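
The same ‘docker-compose run web’ pattern works for any other one-off task, for example opening a Rails console inside the container:

$ docker-compose run web rails console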

Running in a container

And that’s it! We’re done! All you need to do now to run your entire application (this time, and in the future) is this one command:

$ docker-compose up
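
By default this runs in the foreground and streams the logs from both containers. If you would rather run everything in the background, start it in detached mode instead:

$ docker-compose up -d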

When you are finished, to shut down correctly and remove your containers, you run:

$ docker-compose down
