<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jc</title>
    <description>The latest articles on DEV Community by Jc (@jcw).</description>
    <link>https://dev.to/jcw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F321873%2Fe86c3537-9a41-4f62-8705-fd89e86c3b31.jpeg</url>
      <title>DEV Community: Jc</title>
      <link>https://dev.to/jcw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jcw"/>
    <language>en</language>
    <item>
      <title>How We Sped Up Rubocop Linting in our CI by 22x</title>
      <dc:creator>Jc</dc:creator>
      <pubDate>Fri, 12 May 2023 18:23:53 +0000</pubDate>
      <link>https://dev.to/jobber/how-we-sped-up-rubocop-linting-in-our-ci-by-22x-3cme</link>
      <guid>https://dev.to/jobber/how-we-sped-up-rubocop-linting-in-our-ci-by-22x-3cme</guid>
      <description>&lt;p&gt;At Jobber, we have been utilizing the GitHub merge queue as a way to run additional checks on code that is about to be merged - and we want this merge queue step to be fast (the target is under five minutes).&lt;/p&gt;

&lt;p&gt;We realized it would be very useful to have our Rubocop linting run in the merge queue, particularly when there were rule changes or new custom rules added. The problem is that the linting step takes nearly 8 minutes to run on our largest codebase - much too long for our merge queue target.&lt;/p&gt;

&lt;h2&gt;Investigating Caching&lt;/h2&gt;

&lt;p&gt;Rubocop was being invoked in CI with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;bundle &lt;span class="nb"&gt;exec &lt;/span&gt;rubocop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But what about caching? Without explicit management of data from previous jobs, Rubocop would be starting from scratch on every CI run. Does it support caching, and could we leverage that?&lt;/p&gt;

&lt;p&gt;It turns out that Rubocop actually &lt;a href="https://docs.rubocop.org/rubocop/usage/caching.html" rel="noopener noreferrer"&gt;has a solid caching implementation&lt;/a&gt; that takes care of all the heavy lifting, including cache invalidation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Later runs will be able to retrieve this information and present the stored information instead of inspecting the file again. This will be done if the cache for the file is still valid, which it is if there are no changes in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;the contents of the inspected file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;RuboCop configuration for the file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the options given to rubocop, with some exceptions that have no bearing on which offenses are reported&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;the Ruby version used to invoke rubocop&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;version of the rubocop program (or to be precise, anything in the source code of the invoked rubocop program)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;The cache is automatically pruned based on file count:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Each time a file has changed, its offenses will be stored under a new key in the cache. This means that the cache will continue to grow until we do something to stop it. The configuration parameter AllCops: MaxFilesInCache sets a limit, and when the number of files in the cache exceeds that limit, the oldest files will be automatically removed from the cache.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is amazing - a well thought-out cache invalidation strategy! The behavior of storing a changed file’s offenses under a new key doesn’t really help us, though - our CI cache mechanism is immutable.&lt;/p&gt;

&lt;h2&gt;Leveraging Rubocop Caching in CI&lt;/h2&gt;

&lt;p&gt;We can’t directly ask Rubocop what it’s going to do ahead of time (there’s no API for its caching behavior). So how do we deterministically generate a cache key for our immutable cross-workflow cache - one that changes in lock-step with Rubocop’s cache invalidation logic?&lt;/p&gt;

&lt;h3&gt;Periodic Invalidation&lt;/h3&gt;

&lt;p&gt;Can we side-step that problem and just re-generate the cache periodically? Maybe daily, or weekly, and re-use it across all CI runs? Sure! That would certainly help - but it has the following limitations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Cache hits decrease over time as files are modified. Probably not a problem unless a large swathe of the codebase is modified within the cache period (something like a linting autofix, or a refactor / rename).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If Rubocop decides to invalidate the cache, you’ll be right back to full-length linting durations until the next cache period occurs. The most common trigger for this is a change to Rubocop configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The first run after each cache period will be full-length.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Shortening the cache period to mitigate some of the above issues has the side effect of increasing the amount of cache storage consumed by your project.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;Intelligent Dynamic Invalidation&lt;/h3&gt;

&lt;p&gt;What if we could integrate Rubocop’s internal cache invalidation logic with the CI’s cache invalidation logic? The limitations turn into a single bullet point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cache hits decrease over time as files are modified. Probably not a problem unless a large swathe of the codebase is modified within the cache period (something like a linting autofix, or a refactor / rename).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that the CI service will typically expire a cache after a maximum number of days. In our case this happens every 15 days, and so there is a natural “reset” that catches the slow cache hit decline over time as files are modified.&lt;/p&gt;

&lt;p&gt;Here’s how Jobber powers our CI cache invalidation with Rubocop’s logic!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Before you restore the rubocop cache directory (&lt;code&gt;~/.cache/rubocop_cache&lt;/code&gt;), lint a single dedicated file using the exact same command and configuration that the full linting step uses.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Inspect what Rubocop wrote into the cache directory, and generate your cache key as a hash of that information - at this point, proceed with the normal restore, run, persist pattern.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s how you get the text you want to hash - assuming the dedicated file you linted for detection is highly unlikely to change, this essentially represents a Rubocop cache key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;find ~/.cache/rubocop_cache &lt;span class="nt"&gt;-type&lt;/span&gt; f
/home/circleci/.cache/rubocop_cache/c21eac4b5c1ceb0445943396a341eadb756f46cf/7a1221dfb74d1bb683162bcc22951148cd32f1c9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output that to a file (&lt;code&gt;rubocop_cache_key&lt;/code&gt;), hash it, combine it with other environment keys, and you have a robust cache key!&lt;/p&gt;
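&lt;p&gt;As a runnable sketch of that derivation - the cache directory and the hash-like names below are simulated for illustration, not real Rubocop output:&lt;/p&gt;

```ruby
require "digest"
require "fileutils"
require "tmpdir"

# Simulated stand-in for ~/.cache/rubocop_cache, which in CI would have just
# been populated by linting a single stable file.
cache_dir = Dir.mktmpdir
FileUtils.mkdir_p(File.join(cache_dir, "c21eac4b"))
FileUtils.touch(File.join(cache_dir, "c21eac4b", "7a1221df"))

# The cache paths change exactly when Rubocop would invalidate its cache, so a
# sorted listing (with the variable prefix stripped) is a deterministic key source.
paths = Dir.glob(File.join(cache_dir, "**", "*"))
           .select { |p| File.file?(p) }
           .map { |p| p.sub(cache_dir, "") }
           .sort
File.write("rubocop_cache_key", paths.join("\n"))
puts File.read("rubocop_cache_key")                      # => /c21eac4b/7a1221df
puts Digest::SHA256.file("rubocop_cache_key").hexdigest  # the hash CI embeds in its key
```

&lt;p&gt;In the CircleCI setup shown below, the hashing step is handled by the built-in &lt;code&gt;checksum&lt;/code&gt; template function rather than by hand.&lt;/p&gt;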

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpqcq0wf0og7ux386wza.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpqcq0wf0og7ux386wza.png" alt="Cache key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example cache key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rubocop-v1-{{ arch }}-ruby_&amp;lt;&amp;lt; pipeline.parameters.ruby_version &amp;gt;&amp;gt;-{{ checksum "rubocop_cache_key" }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cache Key Part&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;rubocop&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The descriptor of the cache key - intended to be unique to this Rubocop use case.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;v1&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A manual version number - bump this up when there are unexpected issues and you want a straightforward way to explicitly invalidate the cache.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;{{ arch }}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CircleCI notation for the architecture, such as &lt;code&gt;arch1-linux-amd64-6_85&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ruby_&amp;lt;&amp;lt; pipeline.parameters.ruby_version &amp;gt;&amp;gt;&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;The Ruby version - don’t try to share caches across Ruby versions. Rubocop would almost certainly invalidate the cache in this case as well, but our setup workflow already detects the Ruby version and passes it onwards as a pipeline parameter, so we might as well bake it in.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;{{ checksum "rubocop_cache_key" }}&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;This is both the “intelligent” and the “dynamic” part - it builds on the intelligent Rubocop invalidation logic, and is dynamic because this isn’t hashing text directly under source control. See the examples below for how to generate the &lt;code&gt;rubocop_cache_key&lt;/code&gt; file.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Putting It All Together&lt;/h2&gt;

&lt;p&gt;So now we have a suitable cache key - what does it look like in use? The following is a partial example of a CircleCI configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;references&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;detect_rubocop_cache_key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;detect_rubocop_cache_key&lt;/span&gt;
    &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Detect rubocop cache key&lt;/span&gt;
      &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bundle exec rubocop example.rb &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;&amp;amp; find ~/.cache/rubocop_cache -type f &amp;gt; rubocop_cache_key &amp;amp;&amp;amp; cat rubocop_cache_key&lt;/span&gt;

  &lt;span class="na"&gt;restore_rubocop_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;restore_rubocop_cache&lt;/span&gt;
    &lt;span class="na"&gt;restore_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Restore rubocop cache&lt;/span&gt;
      &lt;span class="na"&gt;keys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="nl"&gt;&amp;amp;rubocop_cache_key&lt;/span&gt; &lt;span class="s"&gt;rubocop-v1-{{ arch }}-ruby_&amp;lt;&amp;lt; pipeline.parameters.ruby_version &amp;gt;&amp;gt;-{{ checksum "rubocop_cache_key" }}&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;lint_rubocop&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;*bundle_install&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;*detect_rubocop_cache_key&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="nv"&gt;*restore_rubocop_cache&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;run&lt;/span&gt;
        &lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Rubocop linting&lt;/span&gt;
        &lt;span class="s"&gt;command&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bundle exec rubocop&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;save_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Save rubocop cache&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*rubocop_cache_key&lt;/span&gt;
        &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;~/.cache/rubocop_cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Note for Very Large Projects&lt;/h3&gt;

&lt;p&gt;If your file count is close to 20k (the default &lt;code&gt;AllCops: MaxFilesInCache&lt;/code&gt; limit is 20,000), you’ll want to tune &lt;code&gt;MaxFilesInCache&lt;/code&gt; to your max file count plus a percentage, to accommodate cache misses (files changing over time, between cache invalidations).&lt;/p&gt;
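&lt;p&gt;As a sketch, the limit lives under &lt;code&gt;AllCops&lt;/code&gt; in &lt;code&gt;.rubocop.yml&lt;/code&gt; - the number below is illustrative headroom, not a recommendation:&lt;/p&gt;

```yaml
AllCops:
  # Default is 20000; raise it above your tracked file count, with headroom for
  # changed files accumulating new cache entries between invalidations.
  MaxFilesInCache: 30000
```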

&lt;h2&gt;Further Improvement Potential&lt;/h2&gt;

&lt;p&gt;Once you’ve optimized the amount of work your CI is doing for linting, you can get further gains by parallelizing that work - either the multi-threading kind or the horizontal-scaling kind. Both involve the same amount of work, but leverage more hardware to complete it faster - usually at a monetary cost.&lt;/p&gt;

&lt;h2&gt;Performance Improvement Results&lt;/h2&gt;

&lt;p&gt;Before caching, linting took 476 seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr2ku12se4qwgamdvt70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffr2ku12se4qwgamdvt70.png" alt="Linting - before"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After caching, linting takes 22 seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fke4840x8bgdoexnedx8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fke4840x8bgdoexnedx8b.png" alt="Linting - after"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result (&lt;code&gt;476 / 22 = 21.6&lt;/code&gt;): &lt;strong&gt;22x faster&lt;/strong&gt; - easily fast enough to run a full linting check in our merge queue!&lt;/p&gt;
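&lt;p&gt;For the record, the speedup arithmetic:&lt;/p&gt;

```ruby
# 476 s before caching, 22 s after - roughly a 22x speedup.
puts (476.0 / 22).round(1)  # => 21.6
```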

&lt;h2&gt;About Jobber&lt;/h2&gt;

&lt;p&gt;Our awesome Jobber technology teams span Payments, Infrastructure, AI/ML, Business Workflows &amp;amp; Communications. We work on cutting-edge, modern tech stacks using React, React Native, Ruby on Rails, &amp;amp; GraphQL.&lt;/p&gt;

&lt;p&gt;If you want to be a part of a collaborative work culture, help small home service businesses scale and create a positive impact on our communities, then visit our &lt;a href="https://getjobber.com/about/careers?utm_source=devto&amp;amp;utm_medium=social&amp;amp;utm_campaign=eng_blog" rel="noopener noreferrer"&gt;careers&lt;/a&gt; site to learn more!&lt;/p&gt;

</description>
      <category>performance</category>
      <category>ruby</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Building Test Coverage Momentum</title>
      <dc:creator>Jc</dc:creator>
      <pubDate>Mon, 26 Sep 2022 16:10:44 +0000</pubDate>
      <link>https://dev.to/jobber/building-test-coverage-momentum-1gh7</link>
      <guid>https://dev.to/jobber/building-test-coverage-momentum-1gh7</guid>
      <description>&lt;p&gt;At Jobber, we established 85% as our test coverage target on two of our largest codebases in order to increase our confidence and speed in making code changes. Through a combination of automation, visibility of progress, and establishing the importance of quality, we have maintained an upwards trajectory towards that target organically for several years. Here are the steps we took to build up our test coverage momentum.&lt;/p&gt;

&lt;h2&gt;Establishing the Importance of Quality&lt;/h2&gt;

&lt;p&gt;The amount of test coverage is important to us for several reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It impacts the effectiveness of continuous regression testing&lt;/li&gt;
&lt;li&gt;It is a key factor in gaining confidence in library or framework upgrades&lt;/li&gt;
&lt;li&gt;It enables the possibility of more automation, including automated deployments&lt;/li&gt;
&lt;li&gt;It correlates to how well the tests demonstrate how to use the code&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means that most PRs don’t just have code changes - they also include the corresponding changes to tests.&lt;/p&gt;

&lt;p&gt;Having a strong culture around pull request reviews is instrumental to this, but the key is that the importance of quality is a foundational part of the beliefs and values of the organization.&lt;/p&gt;

&lt;p&gt;An example of this is our “Quality is everyone’s responsibility” engineering principle - part of &lt;a href="https://github.com/GetJobber#engineering-principles"&gt;a set of engineering principles&lt;/a&gt; endorsed and championed by leadership.&lt;/p&gt;

&lt;h2&gt;Ratcheting&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;to proceed by steps or degrees, in one direction only&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3ymMI32W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fnq0hruns8suzlygsbv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3ymMI32W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8fnq0hruns8suzlygsbv.gif" alt="Ratchet Drawing" width="440" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;SVG: &lt;a href="https://commons.wikimedia.org/wiki/User:Xorx"&gt;Dr. Schorsch&lt;/a&gt; Animation: &lt;a href="https://commons.wikimedia.org/wiki/User:MichaelFrey"&gt;MichaelFrey&lt;/a&gt;, &lt;a href="https://commons.wikimedia.org/wiki/File:Ratchet_Drawing_Animation.gif"&gt;Ratchet Drawing Animation&lt;/a&gt;, &lt;a href="https://creativecommons.org/licenses/by-sa/3.0/legalcode" rel="license"&gt;CC BY-SA 3.0&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Building test coverage momentum on an existing codebase is more challenging than establishing an overall target on a brand new repository. On an existing codebase, we use each file’s current coverage (whatever it is) as that file’s goal. New files use the overall goal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8KtQtxCT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lo8d8kfqalu2buicr0qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8KtQtxCT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lo8d8kfqalu2buicr0qw.png" alt="Before: Coverage goals" width="800" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Before: Attempting to set a coverage goal on an existing codebase, without a strategy to deal with the existing files&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AbmSHRO---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lgsngbf14bk0i9dw9tu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AbmSHRO---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lgsngbf14bk0i9dw9tu.png" alt="After: Coverage goals" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;After: Existing files that are below the overall goal get their own per-file goals - a starting point for ratcheting upwards!&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Knowing the goals for both the existing files and new files, we enforce them as a minimum amount of test coverage in CI. A violation of these goals will result in a CI failure with a descriptive message. Actually failing the CI is important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It provides early feedback during the lifecycle of a PR (from the first draft) that the tests need further work.&lt;/li&gt;
&lt;li&gt;It ensures that by the time the PR is approved and ready to be released, our coverage goals haven’t been compromised.&lt;/li&gt;
&lt;li&gt;It helps spread the word about the importance of quality. Learning why something failed and how to fix it is always more useful for building the culture of quality than just mentioning it in leadership presentations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach means that over time, the test coverage has nowhere to go but up - and the tooling will lock in the gains on each file’s coverage percentage (up to the target) as the new goal!&lt;/p&gt;
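&lt;p&gt;The ratcheting rule itself is simple enough to sketch - the code and data below are hypothetical illustration, not Jobber’s actual tooling:&lt;/p&gt;

```ruby
# Per-file goals only ever move upwards, capped at the overall target.
TARGET = 85.0

def check_and_ratchet(goals, coverage)
  # Fail CI if any file dropped below its recorded goal (or TARGET for new files).
  failing = coverage.reject { |file, pct| pct >= (goals[file] || TARGET) }
  return [:fail, failing.keys] unless failing.empty?

  # Lock in gains: each file's goal becomes its current coverage, up to TARGET.
  new_goals = goals.dup
  coverage.each do |file, pct|
    new_goals[file] = [[new_goals[file] || TARGET, pct].max, TARGET].min
  end
  [:pass, new_goals]
end

status, new_goals = check_and_ratchet(
  { "app/models/job.rb" => 40.0 },  # existing file, goal below the overall target
  { "app/models/job.rb" => 52.5 }   # coverage improved in this PR
)
puts status                          # => pass
puts new_goals["app/models/job.rb"]  # => 52.5 - the gain is locked in
```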

&lt;p&gt;Another complexity of establishing code coverage goals on an existing codebase is that we already have a test suite in place - large enough that parallelism is being leveraged on CI to keep execution times within acceptable limits. In order for ratcheting to work across the set of parallelized test runs, an independent step in the CI collects the detailed test coverage from each run and merges the results together before enforcing the goals.&lt;/p&gt;

&lt;p&gt;Jobber has open-sourced the tool it uses to achieve this: &lt;a href="https://www.npmjs.com/package/@jobber/jest-a-coverage-slip-detector"&gt;@jobber/jest-a-coverage-slip-detector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This library can be used to ratchet coverage in projects large or small where &lt;code&gt;jest&lt;/code&gt; is being used as the test framework, but there are a number of design considerations that make it particularly low-friction in larger projects, based on the following assumptions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are unlikely to run the full set of tests locally as part of your usual development workflow, and so CI is the most reliable place to collect code coverage and check it against the per-file goals.&lt;/li&gt;
&lt;li&gt;You want your CI to be read-only. If per-file targets need to be updated, it can propose them and provided a guided experience to making the update, but the tooling shouldn’t go so far as to modify files under source control in an automated way. Ultimately you want eyes on the change before it lands, as part of your pull request workflow.&lt;/li&gt;
&lt;li&gt;You are likely leveraging parallelism in your CI (perhaps using &lt;code&gt;--shard&lt;/code&gt;) and would appreciate a library that automatically merges together the coverage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To get started, follow the &lt;a href="https://github.com/GetJobber/jest-a-coverage-slip-detector#jobberjest-a-coverage-slip-detector"&gt;installation and configuration steps&lt;/a&gt;, and then perform a first run to set up the per-file coverage goals. Note that the provided CLI supports a &lt;code&gt;--report-only&lt;/code&gt; option in case you want to start out with a soft launch.&lt;/p&gt;

&lt;h2&gt;Visibility of Progress and Celebrating Improvements&lt;/h2&gt;

&lt;p&gt;Any team at Jobber can view a trend of the test coverage over time in our dashboarding tool. This helps teams answer questions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the test coverage trending in the right direction?&lt;/li&gt;
&lt;li&gt;Is it going in the right direction at an acceptable rate?&lt;/li&gt;
&lt;li&gt;What does the test coverage look like compared to last quarter?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, an automation reports the test coverage of a Pull Request (PR) as a comment right on the PR. The coverage details are available in an expandable summary that celebrates meeting or exceeding the target!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g0jttkYt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3ytbnpfjsf2s7u6bg9i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g0jttkYt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x3ytbnpfjsf2s7u6bg9i.png" alt="Automated Pull Request Comment" width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This automation intelligently identifies the set of code to put under test based on which files the PR is modifying - the test execution and code coverage generation part of this step typically takes less than 60 seconds.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;It’s never too late to add code coverage goals to an existing codebase. Invest in tooling and automation to surface successes and failures within your Pull Request workflow, and leverage ratcheting to maintain test coverage momentum.&lt;/p&gt;

&lt;h2&gt;About Jobber&lt;/h2&gt;

&lt;p&gt;Our awesome Jobber technology teams span Payments, Infrastructure, AI/ML, Business Workflows &amp;amp; Communications. We work on cutting-edge, modern tech stacks using React, React Native, Ruby on Rails, &amp;amp; GraphQL.&lt;/p&gt;

&lt;p&gt;If you want to be a part of a collaborative work culture, help small home service businesses scale and create a positive impact on our communities, then visit our &lt;a href="https://getjobber.com/about/careers?utm_source=devto&amp;amp;utm_medium=social&amp;amp;utm_campaign=eng_blog"&gt;careers&lt;/a&gt; site to learn more!&lt;/p&gt;

</description>
      <category>testcoverage</category>
      <category>automation</category>
      <category>workflow</category>
      <category>jest</category>
    </item>
  </channel>
</rss>
