<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ivan Ivashchenko</title>
    <description>The latest articles on DEV Community by Ivan Ivashchenko (@ivanivashchenko).</description>
    <link>https://dev.to/ivanivashchenko</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1101897%2F54a539ca-70ee-458b-90f2-8783ff1cf42a.jpeg</url>
      <title>DEV Community: Ivan Ivashchenko</title>
      <link>https://dev.to/ivanivashchenko</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ivanivashchenko"/>
    <language>en</language>
    <item>
      <title>Ruby on Rails containerization example</title>
      <dc:creator>Ivan Ivashchenko</dc:creator>
      <pubDate>Thu, 14 Mar 2024 17:02:41 +0000</pubDate>
      <link>https://dev.to/ivanivashchenko/ror-containerization-example-2fp</link>
      <guid>https://dev.to/ivanivashchenko/ror-containerization-example-2fp</guid>
      <description>&lt;p&gt;This article describes the process of containerizing a Ruby on Rails application to use in local development process. There are no innovative ideas here, just some specific requirements and issues we encountered, along with methods to address them. So, let's get started.&lt;/p&gt;

&lt;p&gt;The project: started over 10 years ago, it is a RoR monolith responsible for both the backend and the frontend (SSR). It contains a considerable number of background jobs handling long-running processes, and it includes a couple of engines for separate parts of the system. It is fairly well covered by tests, including many Capybara feature tests, which require various drivers to run. When we started working on containerization, a local project setup was already available, including via virtualization on Vagrant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Updating gems does not require rebuilding containers.&lt;/li&gt;
&lt;li&gt;All our staging and production servers ran a specific OS version, Ubuntu 22.04, so we wanted to reproduce this context in the container. This required additional configuration, as the official Ruby images on &lt;a href="https://hub.docker.com/_/ruby"&gt;dockerhub&lt;/a&gt; are based on various Debian versions.&lt;/li&gt;
&lt;li&gt;All existing feature tests have to be executed within the container.&lt;/li&gt;
&lt;li&gt;We had a number of Ruby scripts that performed specific tasks on the servers. For example, a script for conveniently reading logs from the server. It connects to the main server via SSH and then greps logs on each instance that received requests or processed background workers. Such scripts required a working context of our application, and we wanted them to be able to run directly from the container.&lt;/li&gt;
&lt;li&gt;Ability to debug the project/tests in the container.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The main steps
&lt;/h2&gt;

&lt;p&gt;Since our application both handles requests through a web API and runs background job handlers, it makes sense to separate these two parts into individual containers. However, the main dependencies for both parts are identical, which allows us to use a shared base configuration in the Dockerfile. The job container simply replicates this base configuration, while the web container adds dependencies for the frontend part as well as for running feature tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM ubuntu:22.04 AS build

&amp;lt;Install all common libraries and deps&amp;gt;

FROM build AS web

&amp;lt;Install web-specific dependencies&amp;gt;

FROM build AS job

&amp;lt;Just set the CMD&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We used docker-compose to configure the interaction between containers. In addition to the two images for our application, we also configured a container for the database and a small busybox container which, together with a shared volume, served as storage for our gems. This setup allowed us to avoid rebuilding containers when adding or changing gems: each time the services start, they check the gems in the cache, install any missing libraries, and then start the main process. For example, the command for the web container looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;command: bash -c "bundle check || bundle install &amp;amp;&amp;amp; rails s -b 0.0.0.0 -p 3003"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
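
&lt;p&gt;A sketch of this layout (service and volume names here are illustrative, not our exact configuration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  web:
    build:
      context: .
      target: web
    command: bash -c "bundle check || bundle install &amp;amp;&amp;amp; rails s -b 0.0.0.0 -p 3003"
    volumes:
      - gems:/usr/local/bundle
  job:
    build:
      context: .
      target: job
    volumes:
      - gems:/usr/local/bundle
  db:
    image: postgres
  gems_storage:
    image: busybox
    volumes:
      - gems:/usr/local/bundle

volumes:
  gems:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;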



&lt;p&gt;Since we wanted images based on Ubuntu 22.04, we had to manage the project's OS-level dependencies ourselves. We installed some standard libraries such as &lt;em&gt;gnupg cmake g++ file&lt;/em&gt;, some tools needed to install other dependencies and to work locally inside the container (&lt;em&gt;wget postgresql-client git&lt;/em&gt;), and several libraries tied to specific requirements of our system (for example, &lt;em&gt;imagemagick&lt;/em&gt; for working with images).&lt;/p&gt;
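
&lt;p&gt;This dependency layer boils down to something like the following (the package list is abridged to the ones named above; a real project needs more):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN apt-get update; \
    apt-get -y --no-install-recommends install \
      gnupg cmake g++ file \
      wget postgresql-client git \
      imagemagick; \
    rm -rf /var/lib/apt/lists/*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;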

&lt;p&gt;At the time we created this configuration, the project used Ruby 3.2.2, which we downloaded and compiled from source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ENV RUBY_MAJOR=3.2
ENV RUBY_VERSION=3.2.2

RUN wget -O ruby.tar.gz "https://cache.ruby-lang.org/pub/ruby/${RUBY_MAJOR}/ruby-${RUBY_VERSION}.tar.gz"; \
    mkdir -p tmp/src/ruby; \
    tar -xzf ruby.tar.gz -C tmp/src/ruby --strip-components=1; \
    rm ruby.tar.gz; \
    cd tmp/src/ruby; \
    ./configure --disable-install-doc; \
    make; \
    make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Feature specs
&lt;/h2&gt;

&lt;p&gt;Since our system is SSR, integration specs also verify the frontend by emulating user actions on the page, checking request behavior, and exercising various frontend elements on the pages. At the time of containerization, we used two drivers to run different tests, Chromedriver and Firefox, both of which needed to be present in our web container. However, it turned out that the standard packages available in the Ubuntu 22.04 repositories were not suitable for us. In other words, standard commands like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-get -y --no-install-recommends install firefox
apt-get -y install chromium-driver
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;installed the corresponding packages, but our specs didn't work with them, so we had to customize the installation of these drivers as well. The main idea was to use custom repositories as driver sources and to pin a specific package version with &lt;em&gt;apt preferences&lt;/em&gt;. Taking Firefox as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN apt-get -y install software-properties-common; \
    add-apt-repository -y ppa:mozillateam/ppa
RUN echo $' \n\
Package: *\n\
Pin: release o=LP-PPA-mozillateam\n\
Pin-Priority: 1001' | tee /etc/apt/preferences.d/mozilla-firefox
RUN apt-get -y install firefox
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another problem with installing drivers for feature tests was that developers' local machines had different architectures. As a result, the installed Chromium versions also differed: some machines got arm64 builds and others amd64, which directly affected the specs. Attempting to force a specific architecture during driver installation with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deb [arch=amd64 signed-by=/usr/share/keyrings/debian-archive-keyring.gpg] http://deb.debian.org/debian buster main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;was unsuccessful. The solution we found was to pin the architecture in the docker-compose configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;platform: linux/amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and to configure Rosetta on Apple Silicon (M1) machines:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;softwareupdate --install-rosetta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(and turn on Settings-&amp;gt;General-&amp;gt;"Use Rosetta for x86/amd64 emulation on Apple Silicon" in your Docker Desktop settings).&lt;/p&gt;

&lt;h2&gt;
  
  
  SSH
&lt;/h2&gt;

&lt;p&gt;To enable SSH connections from the container to our servers, we needed to forward the SSH agent from the local machine (where all keys were assumed to be already configured) into the container itself. We did this with two entries in the docker-compose configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;environment:
  SSH_AUTH_SOCK: /ssh-agent
volumes:
  - ${HOST_SSH_SOCKET_PATH}:/ssh-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We read the host path from the &lt;em&gt;HOST_SSH_SOCKET_PATH&lt;/em&gt; environment variable, set in the &lt;em&gt;.env&lt;/em&gt; file, because team members worked on different operating systems and the SSH socket path differs between them.&lt;/p&gt;
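
&lt;p&gt;For illustration, the &lt;em&gt;.env&lt;/em&gt; values might look like this (the Linux path is an example and depends on the host; the macOS path is the forwarding socket that Docker Desktop provides):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Linux: usually the value of $SSH_AUTH_SOCK on the host, e.g.
HOST_SSH_SOCKET_PATH=/run/user/1000/ssh-agent.socket

# macOS + Docker Desktop: the built-in agent forwarding socket
# HOST_SSH_SOCKET_PATH=/run/host-services/ssh-auth.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;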

&lt;h2&gt;
  
  
  Debug
&lt;/h2&gt;

&lt;p&gt;To enable local code debugging, we also used a fairly standard solution by adding the following configuration to the docker-compose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tty: true
stdin_open: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;for each container. This way, after adding breakpoints in the code, a developer could execute&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker attach &amp;lt;CONTAINER_NAME&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;from their local machine and attach to the running process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements
&lt;/h2&gt;

&lt;p&gt;While setting up and adjusting the configuration, we accumulated a number of commands and settings shared between the Docker files. We extracted these common parts into a shared build stage in the Dockerfile and into standard YAML anchors in docker-compose.yml.&lt;/p&gt;
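
&lt;p&gt;For illustration, shared service settings in docker-compose.yml can be deduplicated with a YAML anchor roughly like this (key names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x-app-base: &amp;amp;app-base
  env_file: .env
  tty: true
  stdin_open: true
  volumes:
    - gems:/usr/local/bundle

services:
  web:
    &amp;lt;&amp;lt;: *app-base
    build:
      context: .
      target: web
  job:
    &amp;lt;&amp;lt;: *app-base
    build:
      context: .
      target: job
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;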

&lt;p&gt;This is what the general configuration of our containers looks like: &lt;a href="https://gist.github.com/IvanIvashchenko/eb43e502593eb4793808a03771fa6c33"&gt;https://gist.github.com/IvanIvashchenko/eb43e502593eb4793808a03771fa6c33&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the future, we plan to modify the images for using containers on remote servers and configure the deployment of these containers through integration with GitHub/ECR.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>docker</category>
      <category>ruby</category>
    </item>
    <item>
      <title>Rswag validation checks</title>
      <dc:creator>Ivan Ivashchenko</dc:creator>
      <pubDate>Thu, 15 Jun 2023 13:40:12 +0000</pubDate>
      <link>https://dev.to/ivanivashchenko/rswag-validation-checks-4no3</link>
      <guid>https://dev.to/ivanivashchenko/rswag-validation-checks-4no3</guid>
      <description>&lt;p&gt;Many developers know how important it's to use documentation for the API and keep it up to date. In our projects when we have a REST API we're using Swagger as it's very helpful tool for the integration process of front-end part of the system with the back-end. There is a great gem for the Ruby on Rails projects which allows to have Swagger docs with pretty standard UI  - &lt;a href="https://github.com/rswag/rswag"&gt;Rswag&lt;/a&gt;. Besides the API docs it also provides test functionality to implement some kind of integration specs for your endpoints.&lt;/p&gt;

&lt;p&gt;But there is a problem that may occur from time to time: the documentation gets outdated. This can happen when a developer doesn't update the Swagger specs after changing an endpoint, or simply forgets to run &lt;code&gt;rake rswag:specs:swaggerize&lt;/code&gt;. This command re-generates the schema - a JSON or YAML file used to render the documentation UI.&lt;/p&gt;

&lt;p&gt;There are a couple of approaches to the latter issue. First, we could add the &lt;code&gt;swagger.yaml&lt;/code&gt; file to &lt;code&gt;.gitignore&lt;/code&gt; and set up a CI job to generate the schema file before deploy. This is useful if your project doesn't have a huge number of endpoints and you don't need an up-to-date Swagger UI during local development. As a bonus, you no longer need to re-run the swaggerize command after every merge that produces conflicts in the Swagger schema.&lt;/p&gt;

&lt;p&gt;But if you prefer to keep &lt;code&gt;swagger.yaml&lt;/code&gt; in the repository, it has to be rebuilt after every update of the corresponding Swagger DSL. In that case, a simple shell command that compares the old schema with one generated from the new sources can serve as a CI check during pull request validation.&lt;br&gt;
A couple of examples of such commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runs:
  steps:
    ...
    - shell: bash 
      run: cp &amp;lt;path_to_swagger.yaml_in_your_project&amp;gt; swagger.yaml
    - shell: bash
      run: |
        bundle exec rails rswag:specs:swaggerize
        diff &amp;lt;path_to_swagger.yaml_in_your_project&amp;gt; swagger.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uses the simple Linux &lt;code&gt;diff&lt;/code&gt; command, which compares files line by line and displays the differences. Here we compare the old schema with the newly generated file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;runs:
  steps:
    ...
    - shell: bash 
      run: md5sum &amp;lt;path_to_swagger.yaml_in_your_project&amp;gt; &amp;gt; swagger-docs.md5
    - shell: bash
      run: |
        bundle exec rails rswag:specs:swaggerize
        md5sum -c swagger-docs.md5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we use the &lt;code&gt;md5sum&lt;/code&gt; command, which first records a checksum of the existing YAML schema and then verifies that value against the newly generated swagger.yaml.&lt;/p&gt;
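
&lt;p&gt;Both checks are easy to try outside CI; a minimal stand-alone sketch with stand-in files (file names and contents are illustrative):&lt;br&gt;
&lt;/p&gt;

```shell
# Stand-ins for the committed schema and a freshly generated one
printf 'openapi: 3.0.1\n' > swagger-old.yaml
cp swagger-old.yaml swagger-new.yaml

# Checksum check: passes while the schemas match
md5sum swagger-old.yaml > swagger-docs.md5
md5sum -c swagger-docs.md5

# Simulate a schema change that was not swaggerized
printf 'paths: {}\n' >> swagger-new.yaml
if diff swagger-old.yaml swagger-new.yaml; then
  echo "schemas match"
else
  echo "schema drift detected"
fi
```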

</description>
      <category>rails</category>
      <category>ci</category>
      <category>rest</category>
      <category>documentation</category>
    </item>
  </channel>
</rss>
