<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Artur Trzop</title>
    <description>The latest articles on DEV Community by Artur Trzop (@arturt).</description>
    <link>https://dev.to/arturt</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F315586%2Fe7df5a26-9486-4626-b83c-34a41dcd64ed.jpeg</url>
      <title>DEV Community: Artur Trzop</title>
      <link>https://dev.to/arturt</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/arturt"/>
    <language>en</language>
    <item>
      <title>DB max connection limits for Rails app and Postgres, Redis, Puma settings</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Tue, 11 May 2021 14:04:15 +0000</pubDate>
      <link>https://dev.to/arturt/db-max-connection-limits-for-rails-app-and-postgres-redis-puma-settings-427h</link>
      <guid>https://dev.to/arturt/db-max-connection-limits-for-rails-app-and-postgres-redis-puma-settings-427h</guid>
      <description>&lt;p&gt;Configuring the database connections pool for the Rails app might not be a straightforward task for many programmers. There is a constraint of max opened connections on a database level. Your server environment configuration can change in time and affect the number of connections to the database required. For instance number of servers you use can change when you autoscale it based on the web traffic. It means that the number of web processes/threads running for Puma or Unicorn servers could change. All this adds additional complexity. When you use two databases (e.g. Postgres + Redis), everything gets more complex. In this article, we will address that. You will learn how to estimate needed database connections for your Ruby on Rails production application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do available database connections matter?
&lt;/h2&gt;

&lt;p&gt;The first question is: why do you need to care about available database connections? The answer is simple. If your Ruby application is configured to open too many DB connections, you can get &lt;code&gt;ActiveRecord::ConnectionTimeoutError&lt;/code&gt; exceptions when the database cannot handle any more new connections from your Rails app. This can surface as 500 errors for your web app users.&lt;/p&gt;

&lt;p&gt;This problem might not be apparent immediately. Often you will find out about it in production: your application works just fine until specific circumstances make the Rails app need more DB connections, which can trigger a flood of exceptions. Let's see how to avoid that.&lt;/p&gt;

&lt;h2&gt;
  
  
  RoR application configuration step by step
&lt;/h2&gt;

&lt;p&gt;Let's break a typical Ruby on Rails application down into smaller components that use databases.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have a Rails application that uses the Postgres database for ActiveRecord usage.&lt;/li&gt;
&lt;li&gt;We also use the Redis database for background workers like Sidekiq.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It looks simple, doesn't it? Let's start with that, and later on, we will add more complexity to the mix :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Postgres database connections - how to check the limit?
&lt;/h2&gt;

&lt;p&gt;How do you check how many connections are available for Postgres?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you use a dedicated server with Postgres installed, then most likely you have the default &lt;code&gt;max_connections&lt;/code&gt;, which is typically 100 connections.&lt;/li&gt;
&lt;li&gt;If you use a Postgres instance on AWS, then you need to check the AWS documentation to find the maximum allowed connections for your database instance (it depends on whether you use Amazon RDS or Aurora, and on the instance class).&lt;/li&gt;
&lt;li&gt;If you use Heroku, you can check the &lt;code&gt;Connection Limit&lt;/code&gt; for the &lt;a href="https://elements.heroku.com/addons/heroku-postgresql#pricing" rel="noopener noreferrer"&gt;Postgres Heroku add-on&lt;/a&gt; to check max acceptable connections.&lt;/li&gt;
&lt;/ul&gt;
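&lt;p&gt;If you have &lt;code&gt;psql&lt;/code&gt; access to the server, you can also check the limit and current usage directly with standard Postgres queries:&lt;/p&gt;

```sql
-- maximum number of concurrent connections the server accepts
SHOW max_connections;

-- connections currently open
SELECT count(*) FROM pg_stat_activity;
```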

&lt;h2&gt;
  
  
  ActiveRecord connection pool
&lt;/h2&gt;

&lt;p&gt;In your Rails application, the &lt;code&gt;config/database.yml&lt;/code&gt; file contains the &lt;code&gt;pool&lt;/code&gt; option. As explained in the &lt;a href="https://edgeguides.rubyonrails.org/configuring.html#database-pooling" rel="noopener noreferrer"&gt;Rails docs&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Active Record database connections are managed by &lt;code&gt;ActiveRecord::ConnectionAdapters::ConnectionPool&lt;/code&gt;, which ensures that a connection pool synchronizes the amount of thread access to a limited number of database connections.&lt;/p&gt;

&lt;p&gt;Since the connection pooling is handled inside of Active Record by default, all application servers (Thin, Puma, Unicorn, etc.) should behave the same. The database connection pool is initially empty. As demand for connections increases, it will create them until it reaches the connection pool limit.&lt;/p&gt;

&lt;p&gt;Any one request will check out a connection the first time it requires access to the database. At the end of the request, it will check the connection back in. This means that the additional connection slot will be available again for the next request in the queue.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;pool&lt;/code&gt; can be defined this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;adapter&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blog_production&lt;/span&gt;
  &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or as a part of a URL to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;development&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql://localhost/blog_production?pool=5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The URL option is popular when you host a database on an external server like Amazon RDS. Then you could define the URL this way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres://blog_production:PASSWORD@blog-production.abcdefgh.eu-west-1.rds.amazonaws.com/blog_production?sslca=config/rds-combined-ca-bundle.pem&amp;amp;pool=5&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that in production you should not commit credentials in the &lt;code&gt;config/database.yml&lt;/code&gt; file. Instead, store them in environment variables and read the value at your Rails app's runtime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;production&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;%= ENV['DB_URL'] %&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How does the ActiveRecord connection pool affect Postgres max connections?
&lt;/h2&gt;

&lt;p&gt;Let's start with a simple example. Your application may use an application server like Puma or Unicorn. Let's focus on Puma because it's more complex: it has separate configuration for the number of processes (known as workers in Puma terms) and threads. Unicorn runs in a single thread only; it works exactly like Puma with a single-thread setting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanyygqvyacwzatn3tfhb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fanyygqvyacwzatn3tfhb.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Puma config: 1 process and 1 thread
&lt;/h3&gt;

&lt;p&gt;Let's say you use the Puma server to run the Rails application. Puma is configured to run 1 process (worker) with only 1 thread.&lt;/p&gt;

&lt;p&gt;The Puma process can open up to 5 connections to the database because the &lt;code&gt;pool&lt;/code&gt; option is defined as 5 in &lt;code&gt;config/database.yml&lt;/code&gt;. Typically, fewer connections are used: when you run 1 process with only 1 thread, only 1 connection to the Postgres database is needed to make a database query.&lt;/p&gt;

&lt;p&gt;Sometimes a database connection might be dead. In such a case, ActiveRecord can open a new connection, and then you may end up with 2 connections. In the worst-case scenario, when 4 connections are dead, Rails can open up to the 5-connection pool maximum.&lt;/p&gt;

&lt;h3&gt;
  
  
  Puma config: 1 process and 2 threads
&lt;/h3&gt;

&lt;p&gt;If you use 2 threads in a single Puma process (worker) then it means those 2 threads can use the same pool of DB connections within the Puma process.&lt;/p&gt;

&lt;p&gt;It means that 2 DB connections will be open out of 5 possible. If any connection is dead, then more connections can be opened until the 5 connection pool limit is reached.&lt;/p&gt;

&lt;h3&gt;
  
  
  Puma config: 2 processes and 2 threads per process
&lt;/h3&gt;

&lt;p&gt;If you run 2 Puma processes (workers) and each process has 2 threads, then each process will open 2 DB connections. With 2 processes, that means 4 DB connections may be open at the start of your application. Each process has its own pool, so you have 2 pools, and each pool can open up to 5 DB connections. In the worst-case scenario, there can be as many as 10 connections to the database.&lt;/p&gt;

&lt;p&gt;Assuming you use 2 threads per Puma process, it's good to have the &lt;code&gt;pool&lt;/code&gt; option set to 2 + some spare connections. It allows ActiveRecord to open a new connection if one of the DB connections is dead.&lt;/p&gt;
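&lt;p&gt;For reference, the process and thread counts discussed above come from &lt;code&gt;config/puma.rb&lt;/code&gt;. A minimal sketch (the env var names are the common convention, and the defaults are illustrative):&lt;/p&gt;

```ruby
# config/puma.rb -- illustrative values, not a drop-in config
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 2))
threads threads_count, threads_count
```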

&lt;h3&gt;
  
  
  Puma config: 2 processes and 2 threads, and 2 web dynos on Heroku
&lt;/h3&gt;

&lt;p&gt;If you use Heroku to host your application, it allows scaling your web application horizontally by adding more servers (dynos). Assume you run your application on 2 servers (2 Heroku dynos), each dyno runs 2 Puma processes, and each process has 2 threads. It means at the start, your application may open 8 connections to the database. Here is why:&lt;/p&gt;

&lt;p&gt;2 dynos X 2 Puma processes X 2 Puma threads = 8 DB connections&lt;/p&gt;

&lt;p&gt;2 dynos X 2 Puma processes X Pool size (5) = Total pool size 20&lt;/p&gt;

&lt;p&gt;It means that in the worst-case your application may open 20 DB connections.&lt;/p&gt;
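&lt;p&gt;These estimates can be written down as a tiny helper (a sketch; the method names are made up for illustration):&lt;/p&gt;

```ruby
# Connections needed at steady state: roughly one per thread.
def steady_state_connections(dynos:, processes:, threads:)
  dynos * processes * threads
end

# Worst case: every process on every dyno exhausts its whole pool.
def worst_case_connections(dynos:, processes:, pool:)
  dynos * processes * pool
end

steady_state_connections(dynos: 2, processes: 2, threads: 2) # => 8
worst_case_connections(dynos: 2, processes: 2, pool: 5)      # => 20
```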

&lt;h4&gt;
  
  
  Autoscaling web application
&lt;/h4&gt;

&lt;p&gt;If you autoscale your web servers by adding more servers during peak web traffic, you need to be careful. Ensure your application stays within the Postgres max connections limit. The above example shows how to calculate the expected number of open DB connections and the worst-case scenario. Adjust your pool size so that, even in the worst case, you stay below the max connections limit of your database engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What else can open DB connections?
&lt;/h2&gt;

&lt;p&gt;We just talked about a web server like Puma that can open connections and consume your max DB connections limit. But other, non-web processes can do it as well:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You run Rails console on production in a Heroku dyno &lt;code&gt;heroku run bin/rails console --app=my-app-name&lt;/code&gt;. It runs an instance of your Rails app, and 1 DB connection will be open. In the worst-case scenario, the number of connections defined in the &lt;code&gt;pool&lt;/code&gt; can be opened. But it's unlikely that your DB connections would go dead. So the whole pool limit shouldn't be used.&lt;/li&gt;
&lt;li&gt;You run scheduled rake tasks via Heroku Scheduler (a cron-like tool). Rake tasks performed periodically need to open a connection to the DB, so at least 1 DB connection per rake task is used from the pool. Imagine you have 10 rake tasks that start every hour: you then need 10 available DB connections every hour. This is easy to miss if you base your estimation on just the web connections.&lt;/li&gt;
&lt;li&gt;You use background workers like Sidekiq to perform async jobs. Your jobs may open DB connections. We will talk about it later.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Background worker - Sidekiq and ActiveRecord pool
&lt;/h2&gt;

&lt;p&gt;A Sidekiq process will use the pool defined in &lt;code&gt;config/database.yml&lt;/code&gt;, similarly to Puma. All Sidekiq threads within a Sidekiq process share a common pool of connections.&lt;/p&gt;

&lt;p&gt;If you run multiple servers (Heroku dynos), then it works similarly to the Puma example.&lt;/p&gt;

&lt;p&gt;2 servers (dynos) X 1 Sidekiq process X 10 Sidekiq threads = 20 DB connections will be open.&lt;/p&gt;

&lt;p&gt;You need a pool size of at least 10 in &lt;code&gt;config/database.yml&lt;/code&gt; because Sidekiq uses 10 threads by default.&lt;/p&gt;

&lt;p&gt;If you use a pool size lower than 10, then Sidekiq threads will fight for access to the limited connections in the pool. That may be fine for a while, but be aware that it can increase your jobs' processing time, because not all Sidekiq threads can use DB connections in parallel. It can also lead to &lt;a href="https://github.com/mperham/sidekiq/wiki/Problems-and-Troubleshooting#cannot-get-database-connection-within-500-seconds" rel="noopener noreferrer"&gt;a problem described here&lt;/a&gt;.&lt;/p&gt;
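&lt;p&gt;A common way to keep the pool in sync with the thread count is to drive both from one environment variable, e.g. in &lt;code&gt;config/database.yml&lt;/code&gt; (a sketch; &lt;code&gt;RAILS_MAX_THREADS&lt;/code&gt; is the conventional name):&lt;/p&gt;

```yaml
production:
  adapter: postgresql
  database: blog_production
  # set RAILS_MAX_THREADS to your Sidekiq concurrency (e.g. 10) in the
  # worker environment, and to the Puma thread count in the web environment
  pool: <%= ENV.fetch("RAILS_MAX_THREADS", 5) %>
```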

&lt;h3&gt;
  
  
  Sidekiq and Redis database connections
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ft3c0g5s4g2cljxkryr.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ft3c0g5s4g2cljxkryr.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sidekiq uses the Redis database to store async jobs, so you need to calculate DB connections to Redis as well as Postgres connections. A Sidekiq server process requires at least (concurrency + 5) Redis connections. The &lt;code&gt;concurrency&lt;/code&gt; option is the number of Sidekiq threads per Sidekiq process.&lt;/p&gt;

&lt;p&gt;Using the previous example:&lt;/p&gt;

&lt;p&gt;2 servers (dynos) X 1 Sidekiq process X (10 threads + 5) = 30 Redis connections required.&lt;/p&gt;
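&lt;p&gt;The same estimate as code (a sketch; the method name is illustrative):&lt;/p&gt;

```ruby
# Sidekiq's documented rule of thumb: each server process needs about
# concurrency + 5 Redis connections (the extra 5 cover internal use
# such as heartbeats and scheduled job polling).
def sidekiq_redis_connections(servers:, processes:, concurrency:)
  servers * processes * (concurrency + 5)
end

sidekiq_redis_connections(servers: 2, processes: 1, concurrency: 10) # => 30
```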

&lt;p&gt;More in &lt;a href="https://github.com/mperham/sidekiq/wiki/Using-Redis#complete-control" rel="noopener noreferrer"&gt;Sidekiq docs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Redis database connections
&lt;/h2&gt;

&lt;p&gt;If you use Redis for processing background jobs, then it's not just the Sidekiq process that uses Redis connections. Your Puma processes and threads can use Redis to add new jobs to the Sidekiq queue as well. Typically, you will have 1 Redis connection per Puma thread.&lt;/p&gt;

&lt;p&gt;If you explicitly open a new Redis connection with &lt;code&gt;Redis.new&lt;/code&gt;, this can create an additional connection per Puma thread as well.&lt;/p&gt;
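&lt;p&gt;To keep Redis connections bounded, threads can share a fixed pool instead of each opening its own connection. Here is a minimal, stdlib-only illustration of the idea (in a real app you would use the &lt;code&gt;connection_pool&lt;/code&gt; gem, which Sidekiq itself relies on; the class names below are made up):&lt;/p&gt;

```ruby
# A toy connection pool: a fixed set of connections shared across threads.
class TinyPool
  def initialize(size)
    @conns = Queue.new
    size.times { @conns << yield }
  end

  # Check a connection out, run the block, and always check it back in,
  # so the total number of open connections never exceeds the pool size.
  def with
    conn = @conns.pop
    begin
      yield conn
    ensure
      @conns << conn
    end
  end
end

# Stand-in for a Redis client so the sketch runs without a Redis server.
class FakeRedis
  def ping
    "PONG"
  end
end

pool = TinyPool.new(5) { FakeRedis.new }
pool.with { |conn| conn.ping } # => "PONG"
```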

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;We covered a few examples of calculating the DB connections that your Rails application needs for Postgres and Redis. I hope this gives you a better understanding of how to estimate how many DB connections you need at the database level to serve your application's demands properly.&lt;/p&gt;

&lt;p&gt;If you are looking to improve your Rails application workflow please consider checking how to &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation" rel="noopener noreferrer"&gt;run automated tests in parallel on your CI server&lt;/a&gt; with &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-estimate-database-connections-pool-size-for-rails-application" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>database</category>
      <category>redis</category>
      <category>postgres</category>
      <category>rails</category>
    </item>
    <item>
      <title>BitBucket parallel Cypress tests configuration for CI pipeline integration</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Thu, 29 Apr 2021 09:36:42 +0000</pubDate>
      <link>https://dev.to/arturt/bitbucket-parallel-cypress-tests-configuration-for-ci-pipeline-integration-13m5</link>
      <guid>https://dev.to/arturt/bitbucket-parallel-cypress-tests-configuration-for-ci-pipeline-integration-13m5</guid>
      <description>&lt;p&gt;Do you use BitBucket Pipeline as your CI server? Are you struggling with slow E2E tests in Cypress? Did you know BitBucket Pipeline can run parallel steps? You can use it to distribute your browser tests across several parallel steps to execute end-to-end Cypress tests in a short amount of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to run tests in parallel
&lt;/h2&gt;

&lt;p&gt;Distributing tests across parallel steps to spread the workload and run tests faster might be more challenging than you think. The question is: how do you divide Cypress test files across the parallel jobs to ensure the work is distributed evenly? But... is distributing work evenly what you actually want?&lt;/p&gt;

&lt;p&gt;To get the shortest CI build time, you want to utilize the available CI resources to the fullest and avoid wasting time. In practice, that means ensuring the parallel steps finish their work at a similar time, which indicates there are no bottlenecks in CI machine utilization.&lt;/p&gt;

&lt;p&gt;Many unknown and unpredictable factors can affect how long it takes to execute tests on BitBucket Pipeline, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;boot time - time spent on loading your CI docker container&lt;/li&gt;
&lt;li&gt;loading npm/yarn dependencies from cache&lt;/li&gt;
&lt;li&gt;running Cypress tests&lt;/li&gt;
&lt;li&gt;tests can run against different browsers, which affects how long they take to execute&lt;/li&gt;
&lt;li&gt;sometimes tests fail, and a failed test's execution time differs&lt;/li&gt;
&lt;li&gt;other times you may have &lt;a href="https://docs.knapsackpro.com/2021/fix-intermittently-failing-ci-builds-flaky-tests-rspec"&gt;flaky tests randomly failing&lt;/a&gt;, and you could use Test Retries in Cypress to automatically rerun failed test cases, which makes a test file run for longer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of the above contribute to the uncertainty around execution time. It's hard to know how best to divide test files across the parallel steps to ensure the steps complete work at a similar time. But there is a solution to that - a dynamic test suite split during runtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Queue Mode - a dynamic tests split
&lt;/h2&gt;

&lt;p&gt;To distribute test work across BitBucket Pipeline parallel steps, you can use &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-bitbucket-pipeline-with-parallel-cypress-tests-can-speed-up-ci-build"&gt;Knapsack Pro&lt;/a&gt; in Queue Mode. The &lt;a href="https://github.com/KnapsackPro/knapsack-pro-cypress#knapsack-procypress"&gt;&lt;code&gt;@knapsack-pro/cypress&lt;/code&gt; npm package&lt;/a&gt; generates a queue with a list of test files on the Knapsack Pro API side, and all parallel steps connect to the queue to consume and execute test files. Each parallel step asks for more tests only after it finishes executing the set of tests previously fetched from the Knapsack Pro API. You can learn the &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation"&gt;details of Queue Mode from this article&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  BitBucket Pipeline YML config
&lt;/h2&gt;

&lt;p&gt;Here is an example of a BitBucket Pipeline config in YML. As you can see, there are 3 parallel steps to run Cypress tests via &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-bitbucket-pipeline-with-parallel-cypress-tests-can-speed-up-ci-build"&gt;Knapsack Pro&lt;/a&gt;. If you would like to run your tests on more parallel jobs, simply add more steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cypress/base:10&lt;/span&gt;
&lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;max-time&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;

&lt;span class="c1"&gt;# job definition for running E2E tests in parallel with KnapsackPro.com&lt;/span&gt;
&lt;span class="na"&gt;e2e&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;e2e&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run E2E tests with @knapsack-pro/cypress&lt;/span&gt;
  &lt;span class="na"&gt;caches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cypress&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# run web application in the background&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm run start:ci &amp;amp;&lt;/span&gt;
    &lt;span class="c1"&gt;# env vars from https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export KNAPSACK_PRO_CI_NODE_BUILD_ID=$BITBUCKET_BUILD_NUMBER&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export KNAPSACK_PRO_COMMIT_HASH=$BITBUCKET_COMMIT&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export KNAPSACK_PRO_BRANCH=$BITBUCKET_BRANCH&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export KNAPSACK_PRO_CI_NODE_TOTAL=$BITBUCKET_PARALLEL_STEP&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export KNAPSACK_PRO_CI_NODE_INDEX=$BITBUCKET_PARALLEL_STEP_COUNT&lt;/span&gt;
    &lt;span class="c1"&gt;# https://github.com/KnapsackPro/knapsack-pro-cypress#configuration-steps&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export KNAPSACK_PRO_FIXED_QUEUE_SPLIT=true&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;$(npm bin)/knapsack-pro-cypress&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# store any generated images and videos as artifacts&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cypress/screenshots/**&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cypress/videos/**&lt;/span&gt;

&lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install dependencies&lt;/span&gt;
      &lt;span class="na"&gt;caches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cypress&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node&lt;/span&gt;
      &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;parallel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# run N steps in parallel&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*e2e&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*e2e&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*e2e&lt;/span&gt;

&lt;span class="na"&gt;definitions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;caches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;npm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$HOME/.npm&lt;/span&gt;
    &lt;span class="na"&gt;cypress&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$HOME/.cache/Cypress&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are looking for an example with a custom docker container for a parallel step please see &lt;a href="https://gist.github.com/ArturT/90b7ec869e3827b580664beb086a8cd6"&gt;this one&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please remember to add your API token in the &lt;code&gt;KNAPSACK_PRO_TEST_SUITE_TOKEN_CYPRESS&lt;/code&gt; environment variable as a &lt;a href="https://support.atlassian.com/bitbucket-cloud/docs/variables-and-secrets/"&gt;secure variable&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;BitBucket Pipeline is a CI server that allows running scripts in parallel. You can use parallel steps to distribute your Cypress tests with &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-bitbucket-pipeline-with-parallel-cypress-tests-can-speed-up-ci-build"&gt;Knapsack Pro&lt;/a&gt; to save time and run the CI build as fast as possible.&lt;/p&gt;

</description>
      <category>cypress</category>
      <category>bitbucket</category>
      <category>testing</category>
      <category>e2e</category>
    </item>
    <item>
      <title>Testing Ruby on Rails on Github Actions with RSpec</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Mon, 19 Apr 2021 11:40:10 +0000</pubDate>
      <link>https://dev.to/arturt/testing-ruby-on-rails-on-github-actions-with-rspec-3ina</link>
      <guid>https://dev.to/arturt/testing-ruby-on-rails-on-github-actions-with-rspec-3ina</guid>
      <description>&lt;p&gt;Are you thinking about migrating a Ruby on Rails project CI pipeline to Github Actions? You will learn how to configure the Rails app to run RSpec tests using Github Actions.&lt;/p&gt;

&lt;p&gt;This article covers a few things for Github Actions YAML config:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to use the &lt;code&gt;ruby/setup-ruby&lt;/code&gt; action to install Ruby gems with bundler and cache them automatically. This way you can load your project's Ruby gems from the cache and run the CI build fast.&lt;/li&gt;
&lt;li&gt;how to use Postgres on Github Actions&lt;/li&gt;
&lt;li&gt;how to use Redis on Github Actions&lt;/li&gt;
&lt;li&gt;how to use Github Actions build matrix to run parallel jobs and execute RSpec tests spread across multiple jobs to save time&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Github Actions YML config for Rails application
&lt;/h2&gt;

&lt;h3&gt;
  
  
  ruby/setup-ruby action
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/ruby/setup-ruby"&gt;ruby/setup-ruby&lt;/a&gt; is an action that you can use to install a particular Ruby programming language version. It allows you to cache Ruby gems based on your &lt;code&gt;Gemfile.lock&lt;/code&gt; out of the box.&lt;/p&gt;

&lt;p&gt;It's recommended to &lt;a href="https://docs.knapsackpro.com/2021/how-to-load-ruby-gems-from-cache-on-github-actions"&gt;use &lt;code&gt;ruby/setup-ruby&lt;/code&gt; instead of outdated &lt;code&gt;actions/setup-ruby&lt;/code&gt;&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Ruby&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ruby/setup-ruby@v1&lt;/span&gt;
  &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Not needed with a .ruby-version file&lt;/span&gt;
    &lt;span class="na"&gt;ruby-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.7&lt;/span&gt;
    &lt;span class="c1"&gt;# runs 'bundle install' and caches installed gems automatically&lt;/span&gt;
    &lt;span class="na"&gt;bundler-cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How to configure Postgres on Github Actions
&lt;/h3&gt;

&lt;p&gt;To use Postgres on Github Actions, you need to set up a Postgres service. I recommend using additional options that configure Postgres to use RAM instead of disk; this way your database runs faster in a testing environment.&lt;/p&gt;

&lt;p&gt;In the config below, we also pass the settings for doing a health check to ensure the database is up and running before you start running tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# If you need DB like PostgreSQL, Redis then define service below.&lt;/span&gt;
&lt;span class="c1"&gt;# https://github.com/actions/example-services/tree/master/.github/workflows&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:10.8&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
      &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;5432:5432&lt;/span&gt;
    &lt;span class="c1"&gt;# needed because the postgres container does not provide a healthcheck&lt;/span&gt;
    &lt;span class="c1"&gt;# tmpfs makes DB faster by using RAM&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
      &lt;span class="s"&gt;--mount type=tmpfs,destination=/var/lib/postgresql/data&lt;/span&gt;
      &lt;span class="s"&gt;--health-cmd pg_isready&lt;/span&gt;
      &lt;span class="s"&gt;--health-interval 10s&lt;/span&gt;
      &lt;span class="s"&gt;--health-timeout 5s&lt;/span&gt;
      &lt;span class="s"&gt;--health-retries 5&lt;/span&gt;
&lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="err"&gt;%&lt;/span&gt; &lt;span class="nv"&gt;endhighlight %&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;### How to configure Redis on Github Actions&lt;/span&gt;

&lt;span class="s"&gt;You can use Redis Docker container to start Redis server on Github Actions. See how simple it is&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

&lt;span class="pi"&gt;{&lt;/span&gt;&lt;span class="err"&gt;%&lt;/span&gt; &lt;span class="nv"&gt;highlight yml %&lt;/span&gt;&lt;span class="pi"&gt;}&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;6379:6379&lt;/span&gt;
    &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--entrypoint redis-server&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How to use Github Actions build matrix to run tests with parallel jobs
&lt;/h3&gt;

&lt;p&gt;You can use the &lt;a href="https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix"&gt;build matrix&lt;/a&gt; in Github Actions to run multiple jobs at the same time.&lt;/p&gt;

&lt;p&gt;You will need to split test files between these parallel jobs. For that, you can use Knapsack Pro with &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation"&gt;Queue Mode to distribute tests evenly between the jobs&lt;/a&gt;. This ensures each job executes a proper share of the tests and the workload is well balanced between the jobs, so the CI build runs in optimal time - as fast as possible.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
  &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC }}&lt;/span&gt;
    &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_TOTAL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_total }}&lt;/span&gt;
    &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_INDEX&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_index }}&lt;/span&gt;
    &lt;span class="na"&gt;KNAPSACK_PRO_FIXED_QUEUE_SPLIT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;KNAPSACK_PRO_LOG_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
  &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;bundle exec rake knapsack_pro:queue:rspec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that for RSpec we also use the &lt;code&gt;knapsack_pro&lt;/code&gt; Ruby gem flag &lt;code&gt;KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES&lt;/code&gt;. It allows Knapsack Pro to automatically &lt;a href="https://knapsackpro.com/faq/question/how-to-split-slow-rspec-test-files-by-test-examples-by-individual-it?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-to-run-ruby-on-rails-tests-on-github-actions-using-rspec"&gt;detect slow test files and split them between parallel jobs&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can learn more about it in a separate article explaining &lt;a href="https://docs.knapsackpro.com/2020/how-to-run-slow-rspec-files-on-github-actions-with-parallel-jobs-by-doing-an-auto-split-of-the-spec-file-by-test-examples"&gt;how to run slow RSpec files on Github Actions with parallel jobs by doing an auto split of the spec file by test examples&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Full YML config for Github Actions and Ruby on Rails project
&lt;/h2&gt;

&lt;p&gt;Here is the full configuration of the CI pipeline for Github Actions. You can use it to run tests for your Rails project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Main&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="c1"&gt;# If you need DB like PostgreSQL, Redis then define service below.&lt;/span&gt;
    &lt;span class="c1"&gt;# https://github.com/actions/example-services/tree/master/.github/workflows&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:10.8&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;5432:5432&lt;/span&gt;
        &lt;span class="c1"&gt;# needed because the postgres container does not provide a healthcheck&lt;/span&gt;
        &lt;span class="c1"&gt;# tmpfs makes DB faster by using RAM&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
          &lt;span class="s"&gt;--mount type=tmpfs,destination=/var/lib/postgresql/data&lt;/span&gt;
          &lt;span class="s"&gt;--health-cmd pg_isready&lt;/span&gt;
          &lt;span class="s"&gt;--health-interval 10s&lt;/span&gt;
          &lt;span class="s"&gt;--health-timeout 5s&lt;/span&gt;
          &lt;span class="s"&gt;--health-retries 5&lt;/span&gt;
      &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;6379:6379&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--entrypoint redis-server&lt;/span&gt;

    &lt;span class="c1"&gt;# https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;fail-fast&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# [n] - where the n is a number of parallel jobs you want to run your tests on.&lt;/span&gt;
        &lt;span class="c1"&gt;# Use a higher number if you have slow tests to split them between more parallel jobs.&lt;/span&gt;
        &lt;span class="c1"&gt;# Remember to update the value of the `ci_node_index` below to (0..n-1).&lt;/span&gt;
        &lt;span class="na"&gt;ci_node_total&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;8&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="c1"&gt;# Indexes for parallel jobs (starting from zero).&lt;/span&gt;
        &lt;span class="c1"&gt;# E.g. use [0, 1] for 2 parallel jobs, [0, 1, 2] for 3 parallel jobs, etc.&lt;/span&gt;
        &lt;span class="na"&gt;ci_node_index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;5&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;6&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
      &lt;span class="na"&gt;GEMFILE_RUBY_VERSION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2.7.2&lt;/span&gt;
      &lt;span class="na"&gt;PGHOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;PGUSER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="c1"&gt;# Rails verifies the time zone in DB is the same as the time zone of the Rails app&lt;/span&gt;
      &lt;span class="na"&gt;TZ&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Europe/Warsaw"&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Ruby&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ruby/setup-ruby@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# Not needed with a .ruby-version file&lt;/span&gt;
          &lt;span class="na"&gt;ruby-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.7&lt;/span&gt;
          &lt;span class="c1"&gt;# runs 'bundle install' and caches installed gems automatically&lt;/span&gt;
          &lt;span class="na"&gt;bundler-cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create DB&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bin/rails db:prepare&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_TOTAL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_total }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_INDEX&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_index }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_LOG_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
          &lt;span class="c1"&gt;# if you use Knapsack Pro Queue Mode you must set below env variable&lt;/span&gt;
          &lt;span class="c1"&gt;# to be able to retry CI build and run previously recorded tests&lt;/span&gt;
          &lt;span class="c1"&gt;# https://github.com/KnapsackPro/knapsack_pro-ruby#knapsack_pro_fixed_queue_split-remember-queue-split-on-retry-ci-node&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_FIXED_QUEUE_SPLIT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
          &lt;span class="c1"&gt;# RSpec split test files by test examples feature - it's optional&lt;/span&gt;
          &lt;span class="c1"&gt;# https://knapsackpro.com/faq/question/how-to-split-slow-rspec-test-files-by-test-examples-by-individual-it&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bundle exec rake knapsack_pro:queue:rspec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
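
&lt;p&gt;For the &lt;code&gt;PGHOST&lt;/code&gt; and &lt;code&gt;PGUSER&lt;/code&gt; env variables above to take effect, your app has to pick them up when connecting to Postgres. The &lt;code&gt;pg&lt;/code&gt; gem (libpq) reads them automatically, but you can also reference them explicitly in &lt;code&gt;config/database.yml&lt;/code&gt;. A minimal sketch (the database name here is just an example - adjust it to your app):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# config/database.yml - example sketch, not your app's actual config
test:
  adapter: postgresql
  host: &amp;lt;%= ENV.fetch("PGHOST", "localhost") %&amp;gt;
  username: &amp;lt;%= ENV.fetch("PGUSER", "postgres") %&amp;gt;
  password: ""
  database: your_app_test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;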



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You've just learned how to set up your Rails application on Github Actions. I hope this will help you if you migrate your project from a different CI server to Github Actions.&lt;/p&gt;

&lt;p&gt;You can learn more about &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-to-run-ruby-on-rails-tests-on-github-actions-using-rspec"&gt;Knapsack Pro&lt;/a&gt; and how it can help you run tests fast using parallel jobs on CI. It works with RSpec, Cucumber, Minitest, and other Ruby test runners. &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-to-run-ruby-on-rails-tests-on-github-actions-using-rspec"&gt;Knapsack Pro&lt;/a&gt; can also work with JavaScript test runners and has a native API integration.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>github</category>
      <category>testing</category>
    </item>
    <item>
      <title>How to run Minitest parallel tests on Github Actions</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Tue, 13 Apr 2021 10:31:17 +0000</pubDate>
      <link>https://dev.to/arturt/how-to-run-minitest-parallel-tests-on-github-actions-bmd</link>
      <guid>https://dev.to/arturt/how-to-run-minitest-parallel-tests-on-github-actions-bmd</guid>
      <description>&lt;p&gt;How to run Ruby on Rails tests in Minitest on Github Actions? What to do if tests are slow? How to manage complex workflows? You can use Github Actions build matrices to divide Minitest files between jobs and run the test suite much faster.&lt;/p&gt;

&lt;p&gt;If your Minitest tests are taking dozens of minutes and you would like to save some time for your Ruby engineering team then you could use tests parallelization on your CI server.&lt;/p&gt;

&lt;p&gt;To run tests as fast as possible, you need to split them into equal buckets (parallel jobs). But how to do it? Some test files can be super fast to execute, while other Minitest files can take minutes if they run system tests (E2E tests).&lt;/p&gt;

&lt;p&gt;There is also the aspect of preparing the test environment for each parallel job. By preparing, I mean cloning the repository, installing Ruby gems or loading them from a cache, maybe starting some Docker containers, etc. This can take a varying amount of time on each parallel job. Random network errors happen, like a delay when you &lt;a href="https://docs.knapsackpro.com/2021/how-to-load-ruby-gems-from-cache-on-github-actions" rel="noopener noreferrer"&gt;load cached gems&lt;/a&gt;, or Github Actions may from time to time start one of your jobs later than the others. This is an inevitable issue in a network environment, and it can cause your tests to run for a different amount of time on each parallel job. You can see this on the graph below; it makes the CI build slower.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxil7ukn17xzwhgyhabhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxil7ukn17xzwhgyhabhq.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a perfect scenario, you would like to cover all these problems and, no matter what, still be able to split the Minitest work between parallel jobs in a way that ensures the tests on each parallel job complete at a similar time. This guarantees no bottlenecks. The perfect test split is shown on the graph below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3999r9t42g9b7wnzpvt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3999r9t42g9b7wnzpvt0.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Split tests in a dynamic way with Queue Mode
&lt;/h2&gt;

&lt;p&gt;You can use &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-run-minitest-on-github-actions-with-parallel-jobs-using-build-matrix" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt; Queue Mode to split tests dynamically between parallel jobs. Each job consumes tests from a queue until the queue is empty. Simply speaking, this allows you to utilize your CI server resources efficiently and run tests in optimal time.&lt;/p&gt;

&lt;p&gt;I described &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation" rel="noopener noreferrer"&gt;how Queue Mode splits Ruby and JavaScript tests in parallel with a dynamic test suite split&lt;/a&gt; in a separate article, where you can learn more about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Github Actions build matrix to run parallel tests
&lt;/h2&gt;

&lt;p&gt;Github Actions has a &lt;a href="https://docs.github.com/en/actions/learn-github-actions/managing-complex-workflows#using-a-build-matrix" rel="noopener noreferrer"&gt;build matrix feature&lt;/a&gt; that allows running many jobs at the same time. You can use it to run your Minitest tests between parallel jobs.&lt;/p&gt;

&lt;p&gt;Below is a full Github Actions YML config for a Rails project and Minitest.&lt;br&gt;
The tests are split with &lt;code&gt;knapsack_pro&lt;/code&gt; Ruby gem and Queue Mode.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Main&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="c1"&gt;# If you need DB like PostgreSQL, Redis then define service below.&lt;/span&gt;
    &lt;span class="c1"&gt;# https://github.com/actions/example-services/tree/master/.github/workflows&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:10.8&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;5432:5432&lt;/span&gt;
        &lt;span class="c1"&gt;# needed because the postgres container does not provide a healthcheck&lt;/span&gt;
        &lt;span class="c1"&gt;# tmpfs makes DB faster by using RAM&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
          &lt;span class="s"&gt;--mount type=tmpfs,destination=/var/lib/postgresql/data&lt;/span&gt;
          &lt;span class="s"&gt;--health-cmd pg_isready&lt;/span&gt;
          &lt;span class="s"&gt;--health-interval 10s&lt;/span&gt;
          &lt;span class="s"&gt;--health-timeout 5s&lt;/span&gt;
          &lt;span class="s"&gt;--health-retries 5&lt;/span&gt;
      &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;6379:6379&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--entrypoint redis-server&lt;/span&gt;

    &lt;span class="c1"&gt;# https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;fail-fast&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Set N number of parallel jobs you want to run tests on.&lt;/span&gt;
        &lt;span class="c1"&gt;# Use higher number if you have slow tests to split them on more parallel jobs.&lt;/span&gt;
        &lt;span class="c1"&gt;# Remember to update ci_node_index below to 0..N-1&lt;/span&gt;
        &lt;span class="na"&gt;ci_node_total&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;8&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="c1"&gt;# set N-1 indexes for parallel jobs&lt;/span&gt;
        &lt;span class="c1"&gt;# When you run 2 parallel jobs then first job will have index 0, the second job will have index 1 etc&lt;/span&gt;
        &lt;span class="na"&gt;ci_node_index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;5&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;6&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
      &lt;span class="na"&gt;PGHOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
      &lt;span class="na"&gt;PGUSER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
      &lt;span class="c1"&gt;# Rails verifies Time Zone in DB is the same as time zone of the Rails app&lt;/span&gt;
      &lt;span class="na"&gt;TZ&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Europe/Warsaw"&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Ruby&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ruby/setup-ruby@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# Not needed with a .ruby-version file&lt;/span&gt;
          &lt;span class="na"&gt;ruby-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.7&lt;/span&gt;
          &lt;span class="c1"&gt;# runs 'bundle install' and caches installed gems automatically&lt;/span&gt;
          &lt;span class="na"&gt;bundler-cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create DB&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bin/rails db:prepare&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_TEST_SUITE_TOKEN_MINITEST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.KNAPSACK_PRO_TEST_SUITE_TOKEN_MINITEST }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_TOTAL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_total }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_INDEX&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_index }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_FIXED_QUEUE_SPLIT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_LOG_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bundle exec rake knapsack_pro:queue:minitest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;As you can see, a slow Minitest test suite doesn't need to be an issue for you. QA, testers, or automation engineers could benefit from improving the CI build speed, allowing their software development team to deliver products faster. You can learn more at &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-run-minitest-on-github-actions-with-parallel-jobs-using-build-matrix" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>github</category>
      <category>testing</category>
    </item>
    <item>
      <title>ruby/setup-ruby or actions/cache - caching Ruby gems on Github Actions</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Tue, 06 Apr 2021 18:51:25 +0000</pubDate>
      <link>https://dev.to/arturt/ruby-setup-ruby-or-actions-cache-caching-ruby-gems-on-github-actions-51c2</link>
      <guid>https://dev.to/arturt/ruby-setup-ruby-or-actions-cache-caching-ruby-gems-on-github-actions-51c2</guid>
      <description>&lt;p&gt;How to start CI build faster by loading Ruby gems from cache on Github Actions? You can start running your tests for a Ruby on Rails project quicker if you manage to set up all dependencies in a short amount of time. Caching can be helpful with that. Ruby gems needed for your project can be cached by Github Actions and thanks to that they can be loaded much faster when you run a new CI build.&lt;/p&gt;

&lt;p&gt;You will learn how to configure Github Actions using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/actions/cache"&gt;actions/cache&lt;/a&gt; - it's a popular solution to cache Ruby gems.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/ruby/setup-ruby"&gt;ruby/setup-ruby&lt;/a&gt; - it's a solution to install a specific Ruby version and cache Ruby gems with bundler. Two features in one action.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  actions/cache - just cache dependencies
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/actions/cache"&gt;Actions/cache&lt;/a&gt; is a popular solution that can be used to save data into the cache and restore it during the next CI build. It's often used for Ruby on Rails projects that also use &lt;code&gt;actions/setup-ruby&lt;/code&gt; for managing the Ruby version on Github Actions.&lt;/p&gt;

&lt;p&gt;Let's look at the Github Actions caching config example using &lt;code&gt;actions/cache&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/main.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Main&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vendor/bundle&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ runner.os }}-gems-${{ hashFiles('**/Gemfile.lock') }}&lt;/span&gt;
          &lt;span class="na"&gt;restore-keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;${{ runner.os }}-gems-&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bundle install&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bundle config path vendor/bundle&lt;/span&gt;
          &lt;span class="s"&gt;bundle install --jobs 4 --retry 3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;You need to specify a directory path that will be cached. It's &lt;code&gt;vendor/bundle&lt;/code&gt; in our case.&lt;/li&gt;
&lt;li&gt;You also generate a unique cache &lt;code&gt;key&lt;/code&gt; based on the OS version and the &lt;code&gt;Gemfile.lock&lt;/code&gt; file. When you change the operating system version, or when you install a new gem and &lt;code&gt;Gemfile.lock&lt;/code&gt; changes, a new &lt;code&gt;key&lt;/code&gt; value is generated.&lt;/li&gt;
&lt;li&gt;You need to configure the bundler to install all your Ruby gems to the directory &lt;code&gt;vendor/bundle&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;You can use bundler options:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--jobs 4&lt;/code&gt; - install gems using parallel workers, which speeds up gem installation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--retry 3&lt;/code&gt; - makes 3 attempts to fetch gems if there is a network issue (for instance, temporary downtime of RubyGems.org).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
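&lt;p&gt;To make the cache key mechanics concrete, here is a rough Ruby sketch (an illustration only, not Github's actual &lt;code&gt;hashFiles&lt;/code&gt; implementation) of how a key like the one above reacts to lockfile changes:&lt;/p&gt;

```ruby
require "digest"

# Illustrative stand-in for a key like
# "${{ runner.os }}-gems-${{ hashFiles('**/Gemfile.lock') }}":
# the key changes whenever the lockfile content changes.
def gems_cache_key(os, lockfile_content)
  "#{os}-gems-#{Digest::SHA256.hexdigest(lockfile_content)}"
end

key_v1 = gems_cache_key("Linux", "rails (6.1.3)\n")
key_v2 = gems_cache_key("Linux", "rails (6.1.3)\nsidekiq (6.2.0)\n")

# Adding a gem changes Gemfile.lock, so a new key is generated
# and a fresh cache entry will be saved under it.
puts key_v1 == key_v2 # false
```

&lt;p&gt;The &lt;code&gt;restore-keys&lt;/code&gt; prefix (&lt;code&gt;${{ runner.os }}-gems-&lt;/code&gt;) then lets a build with a brand-new key still restore the closest older cache and only install the missing gems.&lt;/p&gt;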

&lt;p&gt;If you would like to see a full YAML config for Github Actions and a Rails project, take a look at some of our articles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.knapsackpro.com/2019/how-to-run-rspec-on-github-actions-for-ruby-on-rails-app-using-parallel-jobs"&gt;How to run RSpec on GitHub Actions for Ruby on Rails app using parallel jobs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.knapsackpro.com/2019/github-actions-ci-config-for-ruby-on-rails-project-with-mysql-redis-elasticsearch-how-to-run-parallel-tests"&gt;GitHub Actions CI config for Ruby on Rails project with MySQL, Redis, Elasticsearch - how to run parallel tests&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.knapsackpro.com/2020/how-to-run-slow-rspec-files-on-github-actions-with-parallel-jobs-by-doing-an-auto-split-of-the-spec-file-by-test-examples"&gt;How to run slow RSpec files on Github Actions with parallel jobs by doing an auto split of the spec file by test examples&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.knapsackpro.com/2021/cucumber-bdd-testing-using-github-actions-parallel-jobs-to-run-tests-quicker"&gt;Cucumber BDD testing using Github Actions parallel jobs to run tests quicker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ruby/setup-ruby - install Ruby and cache gems
&lt;/h2&gt;

&lt;p&gt;In the previous section, we mentioned that &lt;code&gt;actions/setup-ruby&lt;/code&gt; is often used with Ruby on Rails projects. &lt;code&gt;actions/setup-ruby&lt;/code&gt; has been deprecated, so it's recommended to use the &lt;code&gt;ruby/setup-ruby&lt;/code&gt; action nowadays. It already has a caching feature that you can use. Let's see how.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# .github/workflows/main.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Main&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ruby/setup-ruby@v1&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Not needed with a .ruby-version file&lt;/span&gt;
        &lt;span class="na"&gt;ruby-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.7&lt;/span&gt;
        &lt;span class="c1"&gt;# runs 'bundle install' and caches installed gems automatically&lt;/span&gt;
        &lt;span class="na"&gt;bundler-cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;

    &lt;span class="c1"&gt;# run RSpec tests&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bundle exec rspec&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see using &lt;code&gt;ruby/setup-ruby&lt;/code&gt; for managing the Ruby version and gems caching is much simpler. You just add an option &lt;code&gt;bundler-cache: true&lt;/code&gt; and that's it.&lt;/p&gt;

&lt;p&gt;You can read in &lt;a href="https://github.com/ruby/setup-ruby#caching-bundle-install-automatically"&gt;&lt;code&gt;ruby/setup-ruby&lt;/code&gt; documentation&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;"It is also possible to cache gems manually, but this is not recommended because it is verbose and very difficult to do correctly. There are many concerns which means using &lt;code&gt;actions/cache&lt;/code&gt; is never enough for caching gems (e.g., incomplete cache key, cleaning old gems when restoring from another key, correctly hashing the lockfile if not checked in, OS versions, ABI compatibility for ruby-head, etc). So, please use &lt;code&gt;bundler-cache: true&lt;/code&gt; instead..."&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You saw two ways of caching Ruby gems on Github Actions. There are also other ways to make your CI builds faster, like running tests in parallel. You can learn more about &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation"&gt;test parallelisation here&lt;/a&gt; or simply check the &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-how-to-load-ruby-gems-from-cache-on-github-actions"&gt;Knapsack Pro&lt;/a&gt; homepage.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>github</category>
      <category>testing</category>
      <category>rails</category>
    </item>
    <item>
      <title>Parallel scaling RSpec tests on Buildkite to increase CI build speed</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Wed, 31 Mar 2021 13:53:45 +0000</pubDate>
      <link>https://dev.to/arturt/parallel-scaling-rspec-tests-on-buildkite-to-increase-ci-build-speed-5e2k</link>
      <guid>https://dev.to/arturt/parallel-scaling-rspec-tests-on-buildkite-to-increase-ci-build-speed-5e2k</guid>
      <description>&lt;p&gt;If your RSpec test suite runs for hours, you could shorten that to just minutes with parallel jobs using Buildkite agents. You will learn how to run parallel tests in optimal CI build time for your Ruby on Rails project. I will also show you a few useful things for Buildkite CI like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A real RSpec test suite taking 13 hours and 32 minutes executed in only 5 minutes 20 seconds by using 151 parallel Buildkite agents with &lt;a href="https://docs.knapsackpro.com/knapsack_pro-ruby/guide/" rel="noopener noreferrer"&gt;knapsack_pro Ruby gem&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;How to distribute test files between parallel jobs using Queue Mode in &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt; to utilize CI machines optimally.&lt;/li&gt;
&lt;li&gt;A simple example of CI Buildkite parallelism config.&lt;/li&gt;
&lt;li&gt;An advanced example of Buildkite config with Elastic CI Stack for AWS.&lt;/li&gt;
&lt;li&gt;Why you might want to use AWS Spot Instances.&lt;/li&gt;
&lt;li&gt;How to automatically split slow RSpec test files by test examples (test cases) between parallel Buildkite agents.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A real RSpec test suite taking 13 hours and executed in only 5 minutes
&lt;/h2&gt;

&lt;p&gt;I'd like to show you the results of a real project running RSpec tests in parallel. The project we are looking at here is huge, and its RSpec test suite's run time is 13 hours and 32 minutes. It's super slow. You can imagine creating a git commit and waiting 13 hours, only to find out the next day that your code breaks something else in the project. You can't work like that!&lt;/p&gt;

&lt;p&gt;The solution for this is to run tests in parallel on many CI machines using Buildkite agents. Each CI machine has a Buildkite agent installed that runs a chunk of the RSpec test suite. Below you can see an example of running a ~13-hour test suite across 151 parallel Buildkite agents.&lt;br&gt;
This allows running the whole RSpec test suite in just 5 minutes 20 seconds!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F134yddlooaq98c9ghks6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F134yddlooaq98c9ghks6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above graph comes from the Knapsack Pro &lt;a href="https://knapsackpro.com/dashboard?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;user dashboard&lt;/a&gt;. 151 parallel jobs are a lot of machines. It would take the whole screen to show you 151 bars. You can only see the last few bars on the graph. The bars are showing how the RSpec test files were split between parallel machines.&lt;/p&gt;

&lt;p&gt;You can see that each parallel machine finishes work at a similar time. The right edges of all of the bars are pretty close to each other. This is the important part. You want to ensure the RSpec work is distributed evenly between parallel jobs. This way you can avoid a bottleneck - a slow job running too many test files. I'll show you how to do it.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to distribute test files between parallel jobs using Queue Mode in Knapsack Pro to utilize CI machines optimally
&lt;/h2&gt;

&lt;p&gt;To run the CI build as fast as possible, we need to utilize our available resources as much as we can. This means the work of running RSpec tests should be split evenly between parallel machines.&lt;/p&gt;

&lt;p&gt;The bigger the test suite, the longer it takes to run, and the more edge cases can occur when you split tests among many machines in the network. Some of the possible edge cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;some of the test files take longer than others to run (for instance E2E test files)&lt;/li&gt;
&lt;li&gt;some test cases fail and finish quickly, while others pass and run longer - this affects the overall time the CI machine spends running your tests&lt;/li&gt;
&lt;li&gt;some test cases take longer because they must connect to the network, an external API, etc. - this adds uncertainty to their execution time&lt;/li&gt;
&lt;li&gt;some of the parallel machines spend more time on boot time:

&lt;ul&gt;
&lt;li&gt;installing Ruby gems takes longer&lt;/li&gt;
&lt;li&gt;loading Ruby gems from cache is slow&lt;/li&gt;
&lt;li&gt;or simply the CI provider has not started your job yet&lt;/li&gt;
&lt;li&gt;or you don't have enough machines in the pool of available agents&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Multiple things can disrupt the spread of work between parallel nodes.&lt;/p&gt;

&lt;p&gt;Our ultimate goal is to ensure all machines finish work at a similar time, because this means every machine received a workload suited to its available capacity. If a machine starts work very late, it runs only a small part of the tests; if another machine starts work very early, it runs more tests. This evens out the ending time between parallel machines. All of this is possible thanks to Queue Mode in the knapsack_pro Ruby gem, which takes care of running tests in parallel for you. &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation" rel="noopener noreferrer"&gt;Queue Mode splits test files dynamically between parallel jobs to ensure the jobs complete at the same time&lt;/a&gt;.&lt;/p&gt;
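&lt;p&gt;A minimal single-threaded simulation (not the knapsack_pro gem itself, and with made-up timings) can show why pulling work from a shared queue evens out finishing times even when one node starts late:&lt;/p&gt;

```ruby
# Each node takes the next test file from the queue only when it is free,
# so a node that starts late simply ends up running fewer files.
file_times = [3, 1, 2, 1, 1, 2, 1, 1, 1, 2, 1, 2] # seconds per test file
queue = file_times.dup

totals = [0, 0]       # simulated busy time per parallel node
start_delays = [0, 3] # node 1 starts 3 "seconds" late (slow boot, late agent)

until queue.empty?
  # the node that becomes free earliest takes the next file
  node = [0, 1].min_by { |i| start_delays[i] + totals[i] }
  totals[node] += queue.shift
end

finish = totals.each_with_index.map { |t, i| start_delays[i] + t }
puts totals.inspect # the late node ran less work...
puts finish.inspect # ...but both nodes finish at almost the same time
```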

&lt;p&gt;You can see an example of running a small RSpec test suite across 2 parallel Buildkite agents for the Ruby on Rails project.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/2Pp9icUJVIg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  A simple example of CI Buildkite parallelism config
&lt;/h2&gt;

&lt;p&gt;Here is a very simple example of a Buildkite config that runs 2 parallel jobs, as you can see in the screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4t23bm6zv0b8kwll0cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4t23bm6zv0b8kwll0cg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# You should hide you secrets like API token&lt;/span&gt;
  &lt;span class="c1"&gt;# Please follow https://buildkite.com/docs/pipelines/secrets&lt;/span&gt;
  &lt;span class="na"&gt;KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;204abb31f698a6686120a40efeff31e5"&lt;/span&gt;
  &lt;span class="c1"&gt;# allow to run the same set of test files on job retry&lt;/span&gt;
  &lt;span class="c1"&gt;# https://github.com/KnapsackPro/knapsack_pro-ruby#knapsack_pro_fixed_queue_split-remember-queue-split-on-retry-ci-node&lt;/span&gt;
  &lt;span class="na"&gt;KNAPSACK_PRO_FIXED_QUEUE_SPLIT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bundle&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;exec&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;rake&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;knapsack_pro:queue:rspec"&lt;/span&gt;
    &lt;span class="na"&gt;parallelism&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that you should hide your credentials like the Knapsack Pro API token and not commit it into your repository. You can refer to the &lt;a href="https://buildkite.com/docs/pipelines/secrets" rel="noopener noreferrer"&gt;Buildkite secrets documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  An advanced Buildkite config with Elastic CI Stack for AWS
&lt;/h2&gt;

&lt;p&gt;When you want to run your big RSpec project on dozens or even hundreds of parallel machines, you need powerful resources. In such a case, you can follow the &lt;a href="https://buildkite.com/docs/tutorials/elastic-ci-stack-aws" rel="noopener noreferrer"&gt;Buildkite tutorial about AWS setup&lt;/a&gt;. The Elastic CI Stack for AWS gives you a private, autoscaling Buildkite Agent cluster in your own AWS account.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Spot Instances can save you money
&lt;/h3&gt;

&lt;p&gt;AWS offers Spot Instances. These machines are cheap, but AWS can withdraw them at any time. This means you can run cheap machines for your CI, but from time to time AWS may kill one of your parallel machines. Such a scenario can be handled by &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt;. It remembers the set of test files allocated to the AWS machine that was running the tests. When the machine is withdrawn and the job is later retried by the Buildkite retry feature, the proper test files are executed, as you would expect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Buildkite retry feature
&lt;/h3&gt;

&lt;p&gt;Buildkite config allows for &lt;a href="https://buildkite.com/docs/pipelines/command-step#automatic-retry-attributes" rel="noopener noreferrer"&gt;automatic retry of your job&lt;/a&gt;. This can be helpful when you use AWS Spot Instances.&lt;br&gt;
When AWS withdraws your machine during the test run, Buildkite can automatically run a new job on a new machine.&lt;/p&gt;

&lt;p&gt;Another use case for the automatic retry is when you have &lt;a href="https://docs.knapsackpro.com/2021/fix-intermittently-failing-ci-builds-flaky-tests-rspec" rel="noopener noreferrer"&gt;flaky Ruby tests&lt;/a&gt; that sometimes pass green or fail red. You can use Buildkite to retry the failing job in such a case.&lt;/p&gt;

&lt;p&gt;My recommendation is to use the &lt;a href="https://knapsackpro.com/faq/question/how-to-retry-failed-tests-flaky-tests?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;rspec-retry gem&lt;/a&gt; as a first choice. The rspec-retry gem retries only the failing test cases instead of all test files assigned to the parallel machine.&lt;br&gt;
The second option is to rely on the &lt;a href="https://buildkite.com/docs/pipelines/command-step#automatic-retry-attributes" rel="noopener noreferrer"&gt;Buildkite retry feature&lt;/a&gt;. It retries the CI node and all tests assigned to it by the Knapsack Pro API.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to automatically split large slow RSpec test files by test examples (test cases) between parallel Buildkite agents
&lt;/h2&gt;

&lt;p&gt;Slow RSpec test files are often related to E2E tests, the browser tests like capybara feature specs. They can run for a few or sometimes even dozens of minutes. They could become a bottleneck if the parallel job has to run a single test file for 10 minutes while other parallel jobs complete a few smaller test files in 5 minutes.&lt;/p&gt;

&lt;p&gt;There is a solution for that! You can use Knapsack Pro with &lt;a href="https://knapsackpro.com/faq/question/how-to-split-slow-rspec-test-files-by-test-examples-by-individual-it?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;RSpec split by examples feature&lt;/a&gt; that will automatically detect slow RSpec test files in your project and split them between parallel Buildkite agents by test examples (test cases).&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cv482s65rmzzyj6eaw8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cv482s65rmzzyj6eaw8.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, combining a few elements, such as Buildkite CI, cloud infrastructure like AWS, and an optimal split of test files using Knapsack Pro, can significantly improve your team's work.&lt;br&gt;
With &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt; you can achieve great results and super fast CI builds. Feel free to &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-auto-scaling-buildkite-ci-build-agents-for-rspec-run-parallel-jobs-in-minutes-instead-of-hours" rel="noopener noreferrer"&gt;try it&lt;/a&gt; and join other happy Buildkite users.&lt;/p&gt;

&lt;h3&gt;
  
  
  Related articles
&lt;/h3&gt;

&lt;p&gt;If you are looking for a Docker config you can also see repository examples at the end of the article: &lt;a href="https://docs.knapsackpro.com/2017/auto-balancing-7-hours-tests-between-100-parallel-jobs-on-ci-buildkite-example" rel="noopener noreferrer"&gt;Auto balancing 7 hours tests between 100 parallel jobs on Buildkite CI&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>buildkite</category>
      <category>testing</category>
    </item>
    <item>
      <title>Run Cucumber tests on Github Actions fast with parallel jobs</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Fri, 26 Mar 2021 19:05:13 +0000</pubDate>
      <link>https://dev.to/arturt/run-cucumber-tests-on-github-actions-fast-with-parallel-jobs-3jhk</link>
      <guid>https://dev.to/arturt/run-cucumber-tests-on-github-actions-fast-with-parallel-jobs-3jhk</guid>
      <description>&lt;p&gt;Cucumber employs Behavior-Driven Development (BDD) for testing your application. This type of test is often time-consuming when running in the browser. You will learn how to run Cucumber tests on Github Actions using parallel jobs to execute the test suite much faster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrfkdlmg49im1hzcfhfd.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrfkdlmg49im1hzcfhfd.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Github Actions matrix strategy
&lt;/h2&gt;

&lt;p&gt;You can use the &lt;a href="https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix" rel="noopener noreferrer"&gt;Github Actions matrix strategy&lt;/a&gt; to run parallel jobs. You will need to divide your Cucumber test files between the parallel jobs in a way that balances the work between the jobs.&lt;/p&gt;

&lt;p&gt;It's not that simple to do, because Cucumber tests often take different amounts of time. One test file can have many test cases, another can have only a few but very complex ones, etc.&lt;/p&gt;

&lt;p&gt;There are often more steps in your CI pipeline, like installing dependencies or loading data from the cache, and each step can take a different amount of time per parallel job before the Cucumber tests even start. These steps affect the overall CI build speed.&lt;/p&gt;
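&lt;p&gt;A quick illustration with hypothetical timings shows why a naive static split (dividing files by count across matrix jobs) gets unbalanced:&lt;/p&gt;

```ruby
# Hypothetical Cucumber file durations in seconds (made up for illustration).
file_times = {
  "login.feature"    => 120,
  "checkout.feature" => 300,
  "search.feature"   => 30,
  "profile.feature"  => 45,
}

# Round-robin assignment to 2 matrix jobs: equal file COUNT per job...
jobs = [[], []]
file_times.keys.each_with_index { |file, i| jobs[i % 2] << file }

# ...but very unequal run time, because duration per file varies a lot.
durations = jobs.map { |files| files.sum { |f| file_times[f] } }
puts durations.inspect # => [150, 345]: one job idles while the other lags
```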

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkutxkg0306h7x819rw9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkutxkg0306h7x819rw9g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What you would like to achieve is to run parallel jobs in a way that they always finish executing Cucumber tests at a similar time. Thanks to that, you will avoid lagging jobs that could become a bottleneck in your CI build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbidv6t1wsh173sp24xx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkbidv6t1wsh173sp24xx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dynamically split Cucumber tests using Queue Mode
&lt;/h2&gt;

&lt;p&gt;To get optimal CI build execution time, you need to ensure the work between parallel jobs is split in such a way as to avoid a slow bottleneck job. To achieve that, you can split Cucumber test files dynamically between the parallel jobs using &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-cucumber-bdd-testing-using-github-actions-parallel-jobs-to-run-tests-quicker" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt; Queue Mode and the &lt;code&gt;knapsack_pro&lt;/code&gt; ruby gem.&lt;/p&gt;

&lt;p&gt;Knapsack Pro API takes care of coordinating how tests are divided between parallel jobs. On the API side, there is a Queue with a list of your test files, and each parallel job on Github Actions runs Cucumber tests via the &lt;code&gt;knapsack_pro&lt;/code&gt; Ruby gem. The gem asks the Queue API for a set of test files to run and, after executing it, asks for another set, until the Queue is consumed. This ensures that all parallel jobs finish running tests at a very similar time, so you can avoid bottleneck jobs.&lt;/p&gt;
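&lt;p&gt;The loop each parallel job runs in Queue Mode can be sketched like this (a schematic with a hypothetical in-memory stub, not the real &lt;code&gt;knapsack_pro&lt;/code&gt; API):&lt;/p&gt;

```ruby
# Each job repeatedly asks the queue for the next batch of test files
# and runs it, until the shared queue is consumed.
def run_queue_mode(api)
  executed = []
  loop do
    batch = api.next_batch  # stands in for an HTTP call to the Queue API
    break if batch.empty?   # queue consumed: this job is done
    executed.concat(batch)  # stands in for running `cucumber` on the files
  end
  executed
end

# Tiny in-memory stub standing in for the remote queue:
FakeApi = Struct.new(:files) do
  def next_batch
    files.shift(2) # hand out up to 2 files at a time
  end
end

ran = run_queue_mode(FakeApi.new(%w[a.feature b.feature c.feature]))
puts ran.inspect # => ["a.feature", "b.feature", "c.feature"]
```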

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuz3pby686thrfbmyi3pj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuz3pby686thrfbmyi3pj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can learn more about the &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation" rel="noopener noreferrer"&gt;dynamic test suite split in Queue Mode&lt;/a&gt; or check the video below.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/hUEB1XDKEFY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Github Actions parallel jobs config for Cucumber
&lt;/h2&gt;

&lt;p&gt;Here is the full Github Actions YAML config example for the Cucumber test suite in a Ruby on Rails project using &lt;code&gt;knapsack_pro&lt;/code&gt; gem to run Cucumber tests between parallel jobs.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="c1"&gt;# .github/workflows/main.yml&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Main&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="c1"&gt;# If you need DB like PostgreSQL, Redis then define service below.&lt;/span&gt;
    &lt;span class="c1"&gt;# https://github.com/actions/example-services/tree/master/.github/workflows&lt;/span&gt;
    &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;postgres&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:10.8&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;5432:5432&lt;/span&gt;
        &lt;span class="c1"&gt;# needed because the postgres container does not provide a healthcheck&lt;/span&gt;
        &lt;span class="c1"&gt;# tmpfs makes DB faster by using RAM&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;-&lt;/span&gt;
          &lt;span class="s"&gt;--mount type=tmpfs,destination=/var/lib/postgresql/data&lt;/span&gt;
          &lt;span class="s"&gt;--health-cmd pg_isready&lt;/span&gt;
          &lt;span class="s"&gt;--health-interval 10s&lt;/span&gt;
          &lt;span class="s"&gt;--health-timeout 5s&lt;/span&gt;
          &lt;span class="s"&gt;--health-retries 5&lt;/span&gt;

      &lt;span class="na"&gt;redis&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;6379:6379&lt;/span&gt;
        &lt;span class="na"&gt;options&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--entrypoint redis-server&lt;/span&gt;

    &lt;span class="c1"&gt;# https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix&lt;/span&gt;
    &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;fail-fast&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
      &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Set N number of parallel jobs you want to run tests on.&lt;/span&gt;
        &lt;span class="c1"&gt;# Use higher number if you have slow tests to split them on more parallel jobs.&lt;/span&gt;
        &lt;span class="c1"&gt;# Remember to update ci_node_index below to 0..N-1&lt;/span&gt;
        &lt;span class="na"&gt;ci_node_total&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;8&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="c1"&gt;# set N-1 indexes for parallel jobs&lt;/span&gt;
        &lt;span class="c1"&gt;# When you run 2 parallel jobs then first job will have index 0, the second job will have index 1 etc&lt;/span&gt;
        &lt;span class="na"&gt;ci_node_index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;0&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;2&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;4&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;5&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;6&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;7&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Ruby&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-ruby@v1&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ruby-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2.7&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/cache@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;vendor/bundle&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ runner.os }}-gems-${{ hashFiles('**/Gemfile.lock') }}&lt;/span&gt;
          &lt;span class="na"&gt;restore-keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;${{ runner.os }}-gems-&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bundle install&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bundle config path vendor/bundle&lt;/span&gt;
          &lt;span class="s"&gt;bundle install --jobs 4 --retry 3&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Create DB&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="c1"&gt;# use localhost for the host here because we have specified a container for the job.&lt;/span&gt;
          &lt;span class="c1"&gt;# If we were running the job on the VM this would be postgres&lt;/span&gt;
          &lt;span class="na"&gt;PGHOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
          &lt;span class="na"&gt;PGUSER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
          &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bin/rails db:prepare&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PGHOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localhost&lt;/span&gt;
          &lt;span class="na"&gt;PGUSER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres&lt;/span&gt;
          &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_TEST_SUITE_TOKEN_CUCUMBER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.KNAPSACK_PRO_TEST_SUITE_TOKEN_CUCUMBER }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_TOTAL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_total }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_CI_NODE_INDEX&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ matrix.ci_node_index }}&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_FIXED_QUEUE_SPLIT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_LOG_LEVEL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;info&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;bundle exec rake knapsack_pro:queue:cucumber&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here is the view from GitHub Actions showing that we run 8 parallel jobs for the CI build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl02sbehm38mqgab715k5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl02sbehm38mqgab715k5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I hope you find this example useful. If you would like to learn more about &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=devto-cucumber-bdd-testing-using-github-actions-parallel-jobs-to-run-tests-quicker" rel="noopener noreferrer"&gt;Knapsack Pro, please check our homepage&lt;/a&gt; and see a &lt;a href="https://docs.knapsackpro.com/integration/" rel="noopener noreferrer"&gt;list of supported test runners for parallel testing in Ruby, JavaScript&lt;/a&gt;, etc.&lt;/p&gt;




&lt;p&gt;Originally published at &lt;a href="https://docs.knapsackpro.com/2021/cucumber-bdd-testing-using-github-actions-parallel-jobs-to-run-tests-quicker" rel="noopener noreferrer"&gt;https://docs.knapsackpro.com/2021/cucumber-bdd-testing-using-github-actions-parallel-jobs-to-run-tests-quicker&lt;/a&gt;&lt;/p&gt;

</description>
      <category>github</category>
      <category>cucumber</category>
      <category>rails</category>
      <category>ruby</category>
    </item>
    <item>
      <title>Best add-ons for Ruby on Rails project hosted on Heroku</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Mon, 08 Mar 2021 14:41:15 +0000</pubDate>
      <link>https://dev.to/arturt/best-add-ons-for-ruby-on-rails-project-hosted-on-heroku-pgf</link>
      <guid>https://dev.to/arturt/best-add-ons-for-ruby-on-rails-project-hosted-on-heroku-pgf</guid>
      <description>&lt;p&gt;After working for over 8 years with Heroku and Ruby on Rails projects I have my own favorite set of Heroku add-ons that work great with Rails apps. You are about to learn about the add-ons that come in handy in your daily Ruby developer life.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ouYL8fZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ie4j2ws9g8u2rlc4s8e2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ouYL8fZF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ie4j2ws9g8u2rlc4s8e2.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Heroku add-ons
&lt;/h2&gt;

&lt;p&gt;Here is a list of my favorite Heroku add-ons from the Heroku Marketplace and why I choose them for my Ruby on Rails projects hosted on Heroku.&lt;/p&gt;

&lt;h3&gt;
  
  
  Heroku Scheduler
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/scheduler"&gt;Heroku Scheduler&lt;/a&gt; can run scheduled tasks every 10 minutes, every hour, or every day. I use it to run my scheduled rake tasks. For instance every day I run a rake task that will send a summary of users who signed up in the last 24 hours to my mailbox.&lt;/p&gt;

&lt;p&gt;The Heroku Scheduler add-on is free. The only limitation is that it has fewer options than cron on a Unix system. If you need to run a rake task every Monday, set it up as a daily task in Heroku Scheduler and check the day of the week inside the task itself, skipping the work when needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# lib/tasks/schedule/notify_users_about_past_due_subscription.rake&lt;/span&gt;
&lt;span class="n"&gt;namespace&lt;/span&gt; &lt;span class="ss"&gt;:schedule&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;desc&lt;/span&gt; &lt;span class="s1"&gt;'Send notification about past due subscriptions to users'&lt;/span&gt;
  &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="ss"&gt;notify_users_about_past_due_subscription: :environment&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="no"&gt;Time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;current&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;monday?&lt;/span&gt;
      &lt;span class="no"&gt;Billing&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;NotifyUsersAboutPastDueSubscriptionWorker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;perform_async&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;
      &lt;span class="no"&gt;Rails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Skip schedule:notify_users_about_past_due_subscription task."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;end&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  New Relic APM
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/newrelic"&gt;New Relic&lt;/a&gt; add-on does application performance monitoring. It's one of my favorite add-ons. It allows to track each process like puma/unicorn/sidekiq per dyno and its performance. You can see which Rails controller actions take the most time. You can see your API endpoints with the highest throughput and those which are time-consuming. New Relic helped me many times to debug bottlenecks in my app and thanks to that I was able to make &lt;a href="https://knapsackpro.com/?utm_source=docs_knapsackpro&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=best-heroku-add-ons-for-ruby-on-rails-project"&gt;Knapsack Pro API&lt;/a&gt; with an average 50ms response time. Who said the Rails app has to be slow? :)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wFqtPKbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07sbglos2lr3gz5w7tyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wFqtPKbH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07sbglos2lr3gz5w7tyi.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Rollbar
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/rollbar"&gt;Rollbar&lt;/a&gt; allows for exception tracking in your Ruby code and also in JS code on the front end side. It has a generous free plan with a 5000 exception limit per month.&lt;/p&gt;

&lt;p&gt;You can easily ignore some common Rails exceptions to stay within the free plan limit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# config/initializers/rollbar.rb&lt;/span&gt;
&lt;span class="no"&gt;Rollbar&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configure&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;access_token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ROLLBAR_ACCESS_TOKEN'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="no"&gt;Rails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test?&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="no"&gt;Rails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;development?&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kp"&gt;false&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;

  &lt;span class="c1"&gt;# Add exception class names to the exception_level_filters hash to&lt;/span&gt;
  &lt;span class="c1"&gt;# change the level that exception is reported at. Note that if an exception&lt;/span&gt;
  &lt;span class="c1"&gt;# has already been reported and logged the level will need to be changed&lt;/span&gt;
  &lt;span class="c1"&gt;# via the rollbar interface.&lt;/span&gt;
  &lt;span class="c1"&gt;# Valid levels: 'critical', 'error', 'warning', 'info', 'debug', 'ignore'&lt;/span&gt;
  &lt;span class="c1"&gt;# 'ignore' will cause the exception to not be reported at all.&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ActionController::RoutingError'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ActionController::InvalidAuthenticityToken'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ActionController::BadRequest'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ActiveRecord::RecordNotFound'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Rack::Timeout::RequestTimeoutException'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Rack::QueryParser::InvalidParameterError'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exception_level_filters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;merge!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'ActionDispatch::Http::MimeNegotiation::InvalidType'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s1"&gt;'ignore'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;environment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'ROLLBAR_ENV'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;presence&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="no"&gt;Rails&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;env&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Logentries
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/logentries"&gt;Logentries&lt;/a&gt; - collects your logs from Heroku standard output so that you can browse them later on. If you need to find info about an issue that happened a few days ago in your logs then Logentries might be helpful.&lt;/p&gt;

&lt;p&gt;Of course, you could use the &lt;a href="https://devcenter.heroku.com/articles/heroku-cli"&gt;Heroku Command Line Interface&lt;/a&gt; and run the &lt;code&gt;heroku logs -n 10000 --app my-heroku-app&lt;/code&gt; command in a terminal to browse the last 10,000 lines of logs, but this method has limitations. You can't go as far back in the logs or filter them as easily as in Logentries.&lt;/p&gt;

&lt;p&gt;The Logentries free plan offers 5 GB of logs and a 7-day retention period. This is enough for small Rails apps.&lt;/p&gt;

&lt;p&gt;A nice Logentries feature is the option to save a query and quickly browse logs with it later. You can also display charts based on logs. Maybe you want to see how often a particular Sidekiq worker has been called? You could visualize it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Redis Cloud
&lt;/h3&gt;

&lt;p&gt;If you use Redis in your Ruby on Rails app then &lt;a href="https://elements.heroku.com/addons/rediscloud"&gt;Redis Cloud&lt;/a&gt; is your add-on. It has a free plan, and its paid plans are more affordable than those of other Redis add-ons.&lt;/p&gt;

&lt;p&gt;The Redis Cloud add-on automatically backs up your data and offers a nice web UI to preview live Redis usage and the historical usage of your database instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_KBojn-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4k4i15amela9z8pu2x94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_KBojn-V--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4k4i15amela9z8pu2x94.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I like to use Redis Cloud with the sidekiq gem in my Rails apps. Redis is also useful if you need to quickly cache some data in memory and expire it after some time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;redis_connection&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="c1"&gt;# use REDISCLOUD_URL when app is running on Heroku,&lt;/span&gt;
  &lt;span class="c1"&gt;# or fallback to local Redis (useful for development)&lt;/span&gt;
  &lt;span class="ss"&gt;url: &lt;/span&gt;&lt;span class="no"&gt;ENV&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'REDISCLOUD_URL'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'redis://localhost:6379/0'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="c1"&gt;# tune network timeouts to be a little more lenient when you are seeing occasional timeout&lt;/span&gt;
  &lt;span class="c1"&gt;# errors for Heroku Redis Cloud addon&lt;/span&gt;
  &lt;span class="c1"&gt;# https://github.com/mperham/sidekiq/wiki/Using-Redis#life-in-the-cloud&lt;/span&gt;
  &lt;span class="ss"&gt;timeout: &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;redis_connection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'my-key-name'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'this value will expire in 1 hour'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Twilio SendGrid
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/sendgrid"&gt;SendGrid&lt;/a&gt; is a free add-on that allows you to start sending emails from your Ruby on Rails. You can even connect your domain to it so your users get emails from your domain.&lt;/p&gt;

&lt;p&gt;The free plan includes 12,000 emails per month.&lt;/p&gt;
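&lt;p&gt;As a quick sketch, hooking Rails up to SendGrid is usually just an ActionMailer SMTP configuration. The credential env var names below are an assumption - use whatever config vars the add-on provisions for your app:&lt;/p&gt;

```ruby
# config/environments/production.rb (a sketch; adjust to your setup)
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  address:              'smtp.sendgrid.net',
  port:                 587,
  domain:               'example.com', # hypothetical sending domain
  user_name:            ENV['SENDGRID_USERNAME'], # assumed add-on config vars
  password:             ENV['SENDGRID_PASSWORD'],
  authentication:       :plain,
  enable_starttls_auto: true
}
```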

&lt;h2&gt;
  
  
  Heroku add-ons to save you time &amp;amp; money
&lt;/h2&gt;

&lt;p&gt;Here are a few of my favorite add-ons that will help you save money and time in your project.&lt;/p&gt;

&lt;h3&gt;
  
  
  AutoIdle
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/autoidle"&gt;AutoIdle&lt;/a&gt; lets you save money by automatically putting your staging and review apps to sleep on Heroku. I use it to turn off my web and worker dyno for the staging app when there is no traffic to the app. No more paying for Heroku resources during the night and weekends. ;)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uUoibHQr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xfzm0dgwirm9r0b955qz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uUoibHQr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xfzm0dgwirm9r0b955qz.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Rails Autoscale
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/rails-autoscale"&gt;Rails Autoscale&lt;/a&gt; is a powerful add-on that will help you save money on Heroku. It will measure requests queue time and based on that add or remove dynos for your web processes. If you have higher traffic during the day it will add more dynos. During the night when the traffic is low, it will remove dynos.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--P5rdDkYD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fshobjtquad6iplzzja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P5rdDkYD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fshobjtquad6iplzzja.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rails Autoscale can also track your worker queue. For instance, if you have a lot of jobs scheduled in Sidekiq, Rails Autoscale will add more worker dynos to process your job queue faster. It can even shut down worker dynos when there are no jobs to be processed, which saves you even more money.&lt;/p&gt;

&lt;h3&gt;
  
  
  Knapsack Pro
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://elements.heroku.com/addons/knapsack-pro"&gt;Knapsack Pro&lt;/a&gt; is a Heroku add-on and ruby gem that can run your Rails tests in RSpec, Cucumber, Minitest, etc, and automatically split the tests between parallel machines on any CI server. It works with Heroku CI, CircleCI, Buildkite, Travis CI, etc. It will help you save time by doing &lt;a href="https://dev.to/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation"&gt;a dynamic split of tests with Queue Mode&lt;/a&gt; to ensure all parallel jobs finish work at a similar time. This way you optimize your CI build runs and save the most time.&lt;/p&gt;

&lt;p&gt;Below you can see an example of an optimal distribution of tests, where each parallel CI machine runs tests for 10 minutes, so the entire CI build takes only 10 minutes instead of the 40 it would take on a single CI server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qVVFXD5o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7igru7kk36qhpvz36804.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qVVFXD5o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7igru7kk36qhpvz36804.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've been working on the &lt;a href="https://elements.heroku.com/addons/knapsack-pro"&gt;Knapsack Pro add-on&lt;/a&gt; and I'd love to hear your feedback if you give it a try.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>heroku</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How to run fast RSpec tests on CircleCI with parallel jobs and have nice JUnit XML reports in CircleCI web UI</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Wed, 03 Mar 2021 19:19:30 +0000</pubDate>
      <link>https://dev.to/arturt/how-to-run-fast-rspec-tests-on-circleci-with-parallel-jobs-and-have-nice-junit-xml-reports-in-circleci-web-ui-1912</link>
      <guid>https://dev.to/arturt/how-to-run-fast-rspec-tests-on-circleci-with-parallel-jobs-and-have-nice-junit-xml-reports-in-circleci-web-ui-1912</guid>
      <description>&lt;p&gt;You will learn how to run RSpec tests for your Ruby on Rails project on CircleCI with parallel jobs to shorten the running time of your CI build. Moreover, you will learn how to configure JUnit formatter to generate an XML report for your tests to show failing RSpec test examples nicely in CircleCI web UI. Finally, you will see how to automatically detect slow spec files and divide their test examples between parallel jobs to eliminate the bottleneck job that’s taking too much time to run tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ruby gems to configure your RoR project
&lt;/h2&gt;

&lt;p&gt;Here are the key elements you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/sj26/rspec_junit_formatter" rel="noopener noreferrer"&gt;rspec_junit_formatter&lt;/a&gt; - it’s a ruby gem that generates an XML report for executed tests with information about test failures. This report can be automatically read by CircleCI to present it in CircleCI web UI. No more browsing through long RSpec output - just look at highlighted failing specs in the &lt;code&gt;TESTS&lt;/code&gt; tab :)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesf2idtjfywg7cb1pbcb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesf2idtjfywg7cb1pbcb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=rspec-testing-parallel-jobs-with-circleci-and-junit-xml-report" rel="noopener noreferrer"&gt;knapsack_pro&lt;/a&gt; - it’s a Ruby gem for running tests on parallel CI jobs to ensure all jobs finish work at a similar time to save you as much time as possible and eliminate bottlenecks.

&lt;ul&gt;
&lt;li&gt;It uses the &lt;a href="https://docs.knapsackpro.com/2020/how-to-speed-up-ruby-and-javascript-tests-with-ci-parallelisation" rel="noopener noreferrer"&gt;Queue Mode to dynamically split test files between parallel jobs&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Knapsack Pro can also &lt;a href="https://knapsackpro.com/faq/question/how-to-split-slow-rspec-test-files-by-test-examples-by-individual-it?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=rspec-testing-parallel-jobs-with-circleci-and-junit-xml-report" rel="noopener noreferrer"&gt;detect your slow RSpec test files and divide them between parallel jobs by test examples&lt;/a&gt;. You don’t have to manually split your big spec file into smaller files if you want to split work between parallel containers on CircleCI :)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Just add the above gems to your &lt;code&gt;Gemfile&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="ss"&gt;:test&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;gem&lt;/span&gt; &lt;span class="s1"&gt;'rspec'&lt;/span&gt;
  &lt;span class="n"&gt;gem&lt;/span&gt; &lt;span class="s1"&gt;'rspec_junit_formatter'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;

&lt;span class="n"&gt;group&lt;/span&gt; &lt;span class="ss"&gt;:test&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:development&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
  &lt;span class="n"&gt;gem&lt;/span&gt; &lt;span class="s1"&gt;'knapsack_pro'&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=rspec-testing-parallel-jobs-with-circleci-and-junit-xml-report" rel="noopener noreferrer"&gt;Knapsack Pro you will need an API token&lt;/a&gt;, and you should follow the &lt;a href="https://docs.knapsackpro.com/knapsack_pro-ruby/guide/" rel="noopener noreferrer"&gt;installation guide&lt;/a&gt; to configure your project.&lt;/p&gt;

&lt;p&gt;If you use the &lt;code&gt;knapsack_pro&lt;/code&gt; gem in Queue Mode with CircleCI, you may want to collect metadata such as a JUnit XML report for your RSpec test suite. The important step on CircleCI is to copy the XML report to the &lt;code&gt;$CIRCLE_TEST_REPORTS&lt;/code&gt; directory. Below is a full config for your &lt;code&gt;spec_helper.rb&lt;/code&gt; file (&lt;a href="https://knapsackpro.com/faq/question/how-to-use-junit-formatter?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=rspec-testing-parallel-jobs-with-circleci-and-junit-xml-report#how-to-use-junit-formatter-with-knapsack_pro-queue-mode" rel="noopener noreferrer"&gt;source code from FAQ&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="c1"&gt;# spec_helper.rb or rails_helper.rb&lt;/span&gt;

&lt;span class="c1"&gt;# This must be the same path as value for rspec --out argument&lt;/span&gt;
&lt;span class="c1"&gt;# Note: the path should not contain '~' sign, for instance path ~/project/tmp/rspec.xml may not work.&lt;/span&gt;
&lt;span class="c1"&gt;# Please use full path instead.&lt;/span&gt;
&lt;span class="no"&gt;TMP_RSPEC_XML_REPORT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'tmp/rspec.xml'&lt;/span&gt;
&lt;span class="c1"&gt;# move results to FINAL_RSPEC_XML_REPORT&lt;/span&gt;
&lt;span class="c1"&gt;# so that the results won't accumulate with duplicated xml tags in TMP_RSPEC_XML_REPORT&lt;/span&gt;
&lt;span class="no"&gt;FINAL_RSPEC_XML_REPORT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'tmp/rspec_final_results.xml'&lt;/span&gt;

&lt;span class="no"&gt;KnapsackPro&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Hooks&lt;/span&gt;&lt;span class="o"&gt;::&lt;/span&gt;&lt;span class="no"&gt;Queue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;after_subset_queue&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt;&lt;span class="n"&gt;queue_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subset_queue_id&lt;/span&gt;&lt;span class="o"&gt;|&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="no"&gt;File&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exist?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;TMP_RSPEC_XML_REPORT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="no"&gt;FileUtils&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;TMP_RSPEC_XML_REPORT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;FINAL_RSPEC_XML_REPORT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need the above logic to move the XML report out of the way so it can't be accidentally corrupted. When Knapsack Pro runs your tests in Queue Mode, it fetches a set of test files from the Knapsack Pro Queue API, runs them, and generates the XML report. Then another set of test files is fetched from the Queue API, and the XML report on disk is updated. If the report already exists on disk, it can get corrupted because the same file is overwritten. That's why you need to move the file to a different location after each set of tests from the Queue API is executed.&lt;/p&gt;
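&lt;p&gt;The flow above can be sketched as a plain Ruby loop. This is a simplified illustration, not the gem's internals: the queue constant, the fetch helper, and the fake report writer below are hypothetical stand-ins.&lt;/p&gt;

```ruby
require 'fileutils'

# Hypothetical stand-in for the Knapsack Pro Queue API: each call returns the
# next subset of test files; an empty subset means the queue is drained.
QUEUE = [['spec/a_spec.rb'], ['spec/b_spec.rb'], []]

def fetch_subset_from_queue_api
  QUEUE.shift
end

# Fake test run: the JUnit formatter would write this subset's results here.
def run_rspec(test_files)
  FileUtils.mkdir_p('tmp')
  File.write('tmp/rspec.xml', "results for #{test_files.join(', ')}")
end

loop do
  test_files = fetch_subset_from_queue_api
  break if test_files.empty?
  run_rspec(test_files)
  # Move the fresh report aside immediately (as the after_subset_queue hook
  # does), so the next subset writes to a clean file instead of corrupting it.
  FileUtils.mv('tmp/rspec.xml', 'tmp/rspec_final_results.xml')
end
```

&lt;p&gt;After the loop finishes, only &lt;code&gt;tmp/rspec_final_results.xml&lt;/code&gt; remains on disk, holding the most recent subset's report.&lt;/p&gt;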

&lt;h2&gt;
  
  
  CircleCI YML configuration for RSpec
&lt;/h2&gt;

&lt;p&gt;Here is the complete CircleCI YML config file for RSpec, Knapsack Pro and JUnit formatter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Ruby CircleCI 2.0 configuration file&lt;/span&gt;
&lt;span class="c1"&gt;#&lt;/span&gt;
&lt;span class="c1"&gt;# Check https://circleci.com/docs/2.0/language-ruby/ for more details&lt;/span&gt;
&lt;span class="c1"&gt;#&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;parallelism&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="c1"&gt;# https://circleci.com/docs/2.0/configuration-reference/#resource_class&lt;/span&gt;
    &lt;span class="na"&gt;resource_class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;small&lt;/span&gt;
    &lt;span class="na"&gt;docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# specify the version you desire here&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;circleci/ruby:2.7.1-node-browsers&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;PGHOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;127.0.0.1&lt;/span&gt;
          &lt;span class="na"&gt;PGUSER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_db_user&lt;/span&gt;
          &lt;span class="na"&gt;RAILS_ENV&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
          &lt;span class="c1"&gt;# Split slow RSpec test files by test examples&lt;/span&gt;
          &lt;span class="c1"&gt;# https://knapsackpro.com/faq/question/how-to-split-slow-rspec-test-files-by-test-examples-by-individual-it&lt;/span&gt;
          &lt;span class="na"&gt;KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

      &lt;span class="c1"&gt;# Specify service dependencies here if necessary&lt;/span&gt;
      &lt;span class="c1"&gt;# CircleCI maintains a library of pre-built images&lt;/span&gt;
      &lt;span class="c1"&gt;# documented at https://circleci.com/docs/2.0/circleci-images/&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;circleci/postgres:10.6-alpine-ram&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_DB&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_db_name&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
          &lt;span class="na"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_db_user&lt;/span&gt;
          &lt;span class="c1"&gt;# Rails verifies Time Zone in DB is the same as time zone of the Rails app&lt;/span&gt;
          &lt;span class="na"&gt;TZ&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Europe/Warsaw"&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;redis:6.0.7&lt;/span&gt;

    &lt;span class="na"&gt;working_directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/repo&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;TZ&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Europe/Warsaw"&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;checkout&lt;/span&gt;

      &lt;span class="c1"&gt;# Download and cache dependencies&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;restore_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;v2-dependencies-bundler-{{ checksum "Gemfile.lock" }}-{{ checksum ".ruby-version" }}&lt;/span&gt;
          &lt;span class="c1"&gt;# fallback to using the latest cache if no exact match is found&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;v2-dependencies-bundler-&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;install ruby dependencies&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;bundle install --jobs=4 --retry=3 --path vendor/bundle&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;save_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./vendor/bundle&lt;/span&gt;
          &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v2-dependencies-bundler-{{ checksum "Gemfile.lock" }}-{{ checksum ".ruby-version" }}&lt;/span&gt;

      &lt;span class="c1"&gt;# Database setup&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bin/rails db:prepare&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;run tests&lt;/span&gt;
          &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;export CIRCLE_TEST_REPORTS=/tmp/test-results&lt;/span&gt;
            &lt;span class="s"&gt;mkdir $CIRCLE_TEST_REPORTS&lt;/span&gt;
            &lt;span class="s"&gt;bundle exec rake "knapsack_pro:queue:rspec[--format documentation --format RspecJunitFormatter --out tmp/rspec.xml]"&lt;/span&gt;

      &lt;span class="c1"&gt;# collect reports&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;store_test_results&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/test-results&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;store_artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/tmp/test-results&lt;/span&gt;
          &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test-results&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;You’ve just learned how to make your CircleCI builds way faster! Now your RSpec tests can be automatically run on many parallel machines to save you time. Please let us know if it was helpful or if you have any questions. Feel free to &lt;a href="https://knapsackpro.com/?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=rspec-testing-parallel-jobs-with-circleci-and-junit-xml-report" rel="noopener noreferrer"&gt;sign up at Knapsack Pro&lt;/a&gt; or down below and try it yourself.&lt;/p&gt;




&lt;p&gt;Originally published at &lt;a href="https://docs.knapsackpro.com/2021/rspec-testing-parallel-jobs-with-circleci-and-junit-xml-report" rel="noopener noreferrer"&gt;docs.knapsackpro.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>testing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to run too slow RSpec test file in parallel on Github Actions - programmatically divide spec file by test examples</title>
      <dc:creator>Artur Trzop</dc:creator>
      <pubDate>Mon, 29 Jun 2020 09:43:24 +0000</pubDate>
      <link>https://dev.to/arturt/how-to-run-too-slow-rspec-test-file-in-parallel-on-github-actions-programmatically-divide-spec-file-by-test-examples-1pf2</link>
      <guid>https://dev.to/arturt/how-to-run-too-slow-rspec-test-file-in-parallel-on-github-actions-programmatically-divide-spec-file-by-test-examples-1pf2</guid>
      <description>&lt;p&gt;Splitting your CI build jobs between multiple machines running in parallel is a great way to make the process fast, which results in more time for building features. Github Actions allows running parallel jobs easily. In a previous article, we explained how you can use Knapsack Pro to &lt;a href="https://docs.knapsackpro.com/2019/how-to-run-rspec-on-github-actions-for-ruby-on-rails-app-using-parallel-jobs" rel="noopener noreferrer"&gt;split your RSpec test files efficiently between parallel jobs on GitHub Actions&lt;/a&gt;. Today we'd like to show how to address the problem of slow test files negatively impacting the whole build times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8qdjj84byc6xcajh3i0s.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8qdjj84byc6xcajh3i0s.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Consider the split
&lt;/h2&gt;

&lt;p&gt;Imagine you have a project with 30 RSpec spec files. Each file contains multiple test examples (RSpec "&lt;code&gt;it&lt;/code&gt;s"). Most of the files are fast unit tests, but let's say there are also some slower files, like feature specs. Perhaps one such feature spec file takes approximately 5 minutes to execute.&lt;/p&gt;

&lt;p&gt;When we run different spec files on different parallel machines, we strive for similar execution times on all of them. In the described scenario, even if we run 30 parallel jobs (each one running just one test file), the 5-minute feature spec would be the bottleneck of the whole build. 29 machines may finish their work in a matter of seconds, but the build won't be complete until the one remaining node finishes executing its file.&lt;/p&gt;
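&lt;p&gt;A quick back-of-the-envelope calculation makes the bottleneck concrete. The numbers below (29 ten-second unit spec files plus one 5-minute feature spec, split across 30 nodes) are illustrative assumptions, not measurements:&lt;/p&gt;

```ruby
# A CI build lasts as long as its slowest parallel node.
fast_files = Array.new(29, 10)   # 29 unit spec files, ~10 seconds each
slow_feature_spec = 5 * 60       # one 5-minute feature spec, in seconds

# One whole file per node: the feature spec alone dictates the build time.
build_time = ([slow_feature_spec] + fast_files).max
puts build_time        # 300 seconds, while 29 nodes sit idle after ~10 seconds

# Split the slow file into 30 equally slow examples and balance every unit of
# work across the 30 nodes: the ideal build time drops to total work / nodes.
examples = Array.new(30, slow_feature_spec / 30)
ideal_build_time = ((fast_files.sum + examples.sum) / 30.0).ceil
puts ideal_build_time  # roughly 20 seconds under these toy assumptions
```

&lt;p&gt;Even this toy model shows that splitting the slow file by examples, rather than adding more machines, is what actually shortens the build.&lt;/p&gt;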

&lt;h2&gt;
  
  
  Divide and conquer
&lt;/h2&gt;

&lt;p&gt;To solve the problem of a slow test file, we need to split what's inside it. We could refactor it and ensure the test examples live in separate, smaller test files. There are two problems with that though:&lt;/p&gt;

&lt;p&gt;First, it takes work. Although admittedly quite plausible in the described scenario, in real life it's usually not just one file that's causing problems. Oftentimes there are a number of slow and convoluted test files, with their own complex setups, like nested &lt;code&gt;before&lt;/code&gt; blocks, &lt;code&gt;let&lt;/code&gt;s, etc. We've all seen them (and probably contributed to them ending up this way), haven't we? ;-) Refactoring files like that is no fun, and in our experience there always seems to be higher-priority work to be done.&lt;/p&gt;

&lt;p&gt;Second, we believe that code organization should be based on other considerations. How you create your files and classes is most likely a result of following some approach agreed upon in your project. Dividing classes into smaller ones just so the CI build can run faster encroaches on your conventions. It might bother some more than others, but we feel it's fair to say it would be best avoided - if there were a better way to achieve the same result...&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter split by test examples
&lt;/h2&gt;

&lt;p&gt;As you certainly know, RSpec allows us to run individual examples instead of whole files. We decided to take advantage of that and solve the problem of bottleneck test files by gathering information about individual examples from such slower files. These examples are then dynamically distributed between your parallel nodes and run individually, so no single file can be a bottleneck for the whole build. Importantly, no additional work is needed - this is done automatically by the &lt;code&gt;knapsack_pro&lt;/code&gt; gem. Each example runs in its correct context, set up exactly the same as if you had run the whole file.&lt;/p&gt;

&lt;p&gt;If you are already using &lt;a href="https://knapsackpro.com?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=how-to-run-slow-rspec-files-on-github-actions-with-parallel-jobs-by-doing-an-auto-split-of-the-spec-file-by-test-examples" rel="noopener noreferrer"&gt;Knapsack Pro&lt;/a&gt; in Queue Mode, you can enable this feature by adding one ENV variable to your GitHub Actions workflow config: &lt;code&gt;KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES: true&lt;/code&gt; (please make sure you're running the newest version of the &lt;code&gt;knapsack_pro&lt;/code&gt; gem). After a few runs, Knapsack Pro will start automatically splitting your slowest test files by individual examples.&lt;/p&gt;

&lt;p&gt;Here's a full example GitHub Actions workflow config for a Rails project using RSpec:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .github/workflows/main.yaml

name: Main

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest

    # If you need DB like PostgreSQL, Redis then define service below.
    # https://github.com/actions/example-services/tree/master/.github/workflows
    services:
      postgres:
        image: postgres:10.8
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: ""
          POSTGRES_DB: postgres
        ports:
          - 5432:5432
        # needed because the postgres container does not provide a healthcheck
        # tmpfs makes DB faster by using RAM
        options: &amp;gt;-
          --mount type=tmpfs,destination=/var/lib/postgresql/data
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis
        ports:
          - 6379:6379
        options: --entrypoint redis-server

    # https://help.github.com/en/articles/workflow-syntax-for-github-actions#jobsjob_idstrategymatrix
    strategy:
      fail-fast: false
      matrix:
        # Set N number of parallel jobs you want to run tests on.
        # Use higher number if you have slow tests to split them on more parallel jobs.
        # Remember to update ci_node_index below to 0..N-1
        ci_node_total: [8]
        # set N-1 indexes for parallel jobs
        # When you run 2 parallel jobs then first job will have index 0, the second job will have index 1 etc
        ci_node_index: [0, 1, 2, 3, 4, 5, 6, 7]

    steps:
      - uses: actions/checkout@v2

      - name: Set up Ruby
        uses: actions/setup-ruby@v1
        with:
          ruby-version: 2.6

      - uses: actions/cache@v2
        with:
          path: vendor/bundle
          key: ${{ runner.os }}-gems-${{ hashFiles('**/Gemfile.lock') }}
          restore-keys: |
            ${{ runner.os }}-gems-
      - name: Bundle install
        env:
          RAILS_ENV: test
        run: |
          bundle config path vendor/bundle
          bundle install --jobs 4 --retry 3
      - name: Create DB
        env:
          # use localhost for the host here because we have specified a container for the job.
          # If we were running the job on the VM this would be postgres
          PGHOST: localhost
          PGUSER: postgres
          RAILS_ENV: test
        run: |
          bin/rails db:prepare
      - name: Run tests
        env:
          PGHOST: localhost
          PGUSER: postgres
          RAILS_ENV: test
          KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC: ${{ secrets.KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC }}
          KNAPSACK_PRO_CI_NODE_TOTAL: ${{ matrix.ci_node_total }}
          KNAPSACK_PRO_CI_NODE_INDEX: ${{ matrix.ci_node_index }}
          KNAPSACK_PRO_FIXED_QUEUE_SPLIT: true
          KNAPSACK_PRO_RSPEC_SPLIT_BY_TEST_EXAMPLES: true
          KNAPSACK_PRO_LOG_LEVEL: info
        run: |
          bundle exec rake knapsack_pro:queue:rspec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can find more details in the video below:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/N7i2FF0DSIw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Let us know in the comments what you think about this solution. If you'd like to give this setup a try, you can also consult our FAQ entry explaining &lt;a href="https://knapsackpro.com/faq/question/how-to-split-slow-rspec-test-files-by-test-examples-by-individual-it?utm_source=devto&amp;amp;utm_medium=blog_post&amp;amp;utm_campaign=how-to-run-slow-rspec-files-on-github-actions-with-parallel-jobs-by-doing-an-auto-split-of-the-spec-file-by-test-examples" rel="noopener noreferrer"&gt;how to split slow RSpec test files&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As always, don't hesitate to ask questions if you encounter any troubles with configuring GitHub Actions in your project - we'd be happy to help!&lt;/p&gt;




&lt;p&gt;Originally published at: &lt;a href="https://docs.knapsackpro.com/2020/how-to-run-slow-rspec-files-on-github-actions-with-parallel-jobs-by-doing-an-auto-split-of-the-spec-file-by-test-examples" rel="noopener noreferrer"&gt;https://docs.knapsackpro.com/2020/how-to-run-slow-rspec-files-on-github-actions-with-parallel-jobs-by-doing-an-auto-split-of-the-spec-file-by-test-examples&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
    </item>
  </channel>
</rss>
