Ivan Ivashchenko

Ruby on Rails containerization example

This article describes the process of containerizing a Ruby on Rails application for use in local development. There are no innovative ideas here, just some specific requirements and issues we encountered, along with the methods we used to address them. So, let's get started.

The Project. Started over 10 years ago, it is a RoR monolith responsible for both the backend and the frontend (SSR). It contains a considerable number of background jobs for handling long-running processes, plus a couple of engines for separate parts of the system. It is fairly well covered by tests, including many Capybara feature tests that require various browser drivers to run. When we started the containerization work, a local project setup was already available, including one based on Vagrant virtualization.

Container requirements

  • Updating gems does not require rebuilding containers.
  • On all staging and production servers we ran a specific OS version, Ubuntu 22.04, and we wanted to reproduce this context in the container. This required additional configuration, as the official Ruby images on Docker Hub are based on various Debian releases.
  • All existing feature tests have to be executed within the container.
  • We had a number of Ruby scripts that performed specific tasks on the servers, for example, a script for conveniently reading logs. It connects to the main server via SSH and then greps the logs on each instance that received requests or processed background jobs. Such scripts require the working context of our application, and we wanted to be able to run them directly from the container.
  • Ability to debug the project/tests in the container.

The main steps

Since our application both handles requests through a web API and runs background jobs, it made sense to split these two parts into individual containers. The main dependencies of the two parts are identical, which let us use a shared build stage in the Dockerfile. The job container fully replicates this base configuration, while the web container adds dependencies for the frontend part and for running feature tests:

FROM ubuntu:22.04 AS build

<Install all common libraries and deps>

FROM build AS web

<Install web-specific dependencies>

FROM build AS job

<Just set the CMD>

We used docker-compose to configure the interaction between containers. In addition to the two images for our application, we also configured a container for the database and a small busybox container which, together with a shared volume, served as storage for our gems. This setup allowed us to avoid rebuilding images when adding or updating gems: each time the services start, they check the gems in the cache, install any missing libraries, and then start the main process. For example, the command for the web container looks like this:

command: bash -c "bundle check || bundle install && rails s -b 0.0.0.0 -p 3003"
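
A minimal sketch of this part of the docker-compose configuration (service and volume names are illustrative, not our exact ones; we assume BUNDLE_PATH is used to point bundler at the shared volume):

services:
  gems:
    image: busybox
    command: "true"              # exits immediately; exists only to own the shared volume
    volumes:
      - bundle_cache:/bundle
  web:
    build: .
    environment:
      BUNDLE_PATH: /bundle       # bundler installs into the shared volume
    volumes:
      - bundle_cache:/bundle
    command: bash -c "bundle check || bundle install && rails s -b 0.0.0.0 -p 3003"

volumes:
  bundle_cache: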

Since we wanted images based on Ubuntu 22.04, we had to manage the project's OS-level dependencies ourselves. We installed some standard libraries such as gnupg, cmake, g++, and file; some tools needed for installing other dependencies and for local work within the container (wget, postgresql-client, git); and several libraries tied to specific requirements of our system (for example, imagemagick for working with images).
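
The corresponding Dockerfile step looked roughly like this (the package list is abridged to the ones mentioned above):

RUN apt-get update; \
    apt-get -y --no-install-recommends install \
      gnupg cmake g++ file \
      wget postgresql-client git \
      imagemagick; \
    rm -rf /var/lib/apt/lists/*    # keep the image layer small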

We used Ruby version 3.2.2 at the time of creating our configuration. We downloaded and compiled it from source:

ENV RUBY_MAJOR=3.2
ENV RUBY_VERSION=3.2.2

RUN wget -O ruby.tar.gz "https://cache.ruby-lang.org/pub/ruby/${RUBY_MAJOR}/ruby-${RUBY_VERSION}.tar.gz"; \
    mkdir -p /tmp/src/ruby; \
    tar -xzf ruby.tar.gz -C /tmp/src/ruby --strip-components=1; \
    rm ruby.tar.gz; \
    cd /tmp/src/ruby; \
    ./configure --disable-install-doc; \
    make -j "$(nproc)"; \
    make install

Feature specs

Since our system uses SSR, the integration specs also verify the frontend by emulating user actions on the page, checking the logic of request behavior, and exercising various frontend elements on the pages. At the time of containerization we used two drivers to run different tests, Chromedriver and Firefox, both of which needed to be present in our web container. However, it turned out that the standard packages available in the repositories of Ubuntu 22.04 were not suitable for us. In other words, standard commands like

apt-get -y --no-install-recommends install firefox
apt-get -y install chromium-driver
Enter fullscreen mode Exit fullscreen mode

installed the corresponding packages, but our specs didn't work with them. So we had to customize the installation of these drivers too. The main idea was to use custom repositories as the driver sources and to pin the specific package versions with apt preferences. Taking Firefox as an example:

RUN apt-get -y install software-properties-common; \
    add-apt-repository -y ppa:mozillateam/ppa
RUN printf 'Package: *\nPin: release o=LP-PPA-mozillateam\nPin-Priority: 1001\n' \
    | tee /etc/apt/preferences.d/mozilla-firefox
RUN apt-get -y install firefox

Another problem with installing drivers for the feature tests was that developers' local machines had different architectures. As a result, the installed Chromium builds also differed from machine to machine: some were arm64 and some amd64, which directly affected the specs. Attempting to pin a specific architecture during the driver installation with

deb [arch=amd64 signed-by=/usr/share/keyrings/debian-archive-keyring.gpg] http://deb.debian.org/debian buster main

was unsuccessful. The solution was to pin the architecture in the docker-compose configuration

platform: linux/amd64

and also configure Rosetta for Apple M1 chips:

softwareupdate --install-rosetta

(and turn on Settings->General->"Use Rosetta for x86/amd64 emulation on Apple Silicon" in your Docker Desktop settings).
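
With that in place, the relevant part of the compose file looks roughly like this (service name is illustrative):

services:
  web:
    platform: linux/amd64    # always build/pull amd64 images, even on arm64 hosts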

SSH

To enable SSH connection from the container to our servers, it was necessary to forward the SSH agent from the local machine (where, as assumed, all keys were already configured) into the container itself. We did this using two lines in the docker-compose configuration:

environment:
  SSH_AUTH_SOCK: /ssh-agent
volumes:
  - ${HOST_SSH_SOCKET_PATH}:/ssh-agent
Enter fullscreen mode Exit fullscreen mode

We use an environment variable, HOST_SSH_SOCKET_PATH, set in the .env file, because team members work on different operating systems and the SSH socket path differs between them.
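
For example (the Linux path is illustrative; the macOS one is the fixed path at which Docker Desktop exposes the forwarded host agent):

# .env on Linux: usually the value of $SSH_AUTH_SOCK on the host, e.g.
HOST_SSH_SOCKET_PATH=/run/user/1000/keyring/ssh

# .env on macOS with Docker Desktop:
HOST_SSH_SOCKET_PATH=/run/host-services/ssh-auth.sock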

Debug

To enable local code debugging, we also used a fairly standard solution by adding the following configuration to the docker-compose:

tty: true
stdin_open: true

for each container. This way, after adding breakpoints in the code, a developer could execute

docker attach <CONTAINER_NAME>

from their local machine and attach to the running process.
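
One caveat: in an attached session, Ctrl-C is sent to the Rails process itself. To leave the session without stopping the container, use Docker's default escape sequence (the container name below is hypothetical):

docker attach myapp-web-1
# ...work with the breakpoint...
# detach without stopping the container: Ctrl-p, then Ctrl-q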

Improvements

During the configuration setup and adjustments, we accumulated a number of commands and settings that were duplicated across the respective Docker files. We extracted these common parts into a shared build image in the Dockerfile and into standard YAML anchors in docker-compose.yml.
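
For instance, the shared docker-compose settings can be pulled into a YAML anchor roughly like this (keys abridged; the job command is left as a placeholder, matching the Dockerfile stages above):

x-app-base: &app-base
  tty: true
  stdin_open: true
  platform: linux/amd64

services:
  web:
    <<: *app-base
    build:
      context: .
      target: web          # the web stage from the Dockerfile
    command: bash -c "bundle check || bundle install && rails s -b 0.0.0.0 -p 3003"
  job:
    <<: *app-base
    build:
      context: .
      target: job          # the job stage
    command: bash -c "bundle check || bundle install && <start the job runner>"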

This is what the general configuration of our containers looks like: https://gist.github.com/IvanIvashchenko/eb43e502593eb4793808a03771fa6c33

In the future, we plan to adapt the images for use on remote servers and to configure the deployment of these containers through integration with GitHub/ECR.

Top comments (2)

Thorsten Hirsch

One question, Ivan. Streamlining the development environment with staging and production is reasonable, of course, but it looks more like an approach for VMs instead of containers, especially considering this point:

Updating gems does not require rebuilding containers.

This means containers (or their images) aren't versioned with the full Ruby environment; instead, the environment is altered at runtime. I don't think it's that big of a deal, since you're using a custom Ruby installation instead of Ubuntu's Ruby packages, so you keep it under your control. But still, I find it a bit weird to handle containers like VMs.

Ivan Ivashchenko

We just have a number of libraries/scripts that could potentially behave differently on different platforms, which is why we decided to build our base image on a specific OS version.
In all other respects it is a pretty standard image, almost the same as the Docker Hub Ruby one, only based on a different OS.