<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Zachary Hamm</title>
    <description>The latest articles on DEV Community by Zachary Hamm (@hammzj).</description>
    <link>https://dev.to/hammzj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F266283%2Ff90cac87-a116-45ca-8974-032036c5e720.png</url>
      <title>DEV Community: Zachary Hamm</title>
      <link>https://dev.to/hammzj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hammzj"/>
    <language>en</language>
    <item>
      <title>Optimizing a load balancing algorithm to minimize the runtime of a static process</title>
      <dc:creator>Zachary Hamm</dc:creator>
      <pubDate>Wed, 29 Oct 2025 19:53:23 +0000</pubDate>
      <link>https://dev.to/hammzj/optimizing-a-load-balancing-algorithm-to-minimize-the-runtime-of-a-static-process-4hcg</link>
      <guid>https://dev.to/hammzj/optimizing-a-load-balancing-algorithm-to-minimize-the-runtime-of-a-static-process-4hcg</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a followup to the following article: &lt;a href="https://dev.to/hammzj/load-balancing-cypress-tests-without-cypress-cloud-2one"&gt;Load balancing Cypress tests without Cypress Cloud&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Months ago, I built a &lt;a href="https://github.com/hammzj/cypress-load-balancer/" rel="noopener noreferrer"&gt;simple load balancer&lt;/a&gt; for the Cypress testing framework. While others exist, like the really well-designed &lt;a href="https://github.com/bahmutov/cypress-split" rel="noopener noreferrer"&gt;cypress-split&lt;/a&gt; and the built-in balancing in the paid Cypress Cloud, I set out to see how I could improve upon them and combine all the necessary commands into one package. Originally, tests were balanced using a "round-robin" algorithm, but over time I found the gap between the highest and lowest process execution times to be inefficient. Thus, I set out to improve upon this with a new approach.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Everything below describes version 0.2.9 of the “cypress-load-balancer” NPM package specifically; please note that the implementation may change over time from how it is described here.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I feel confident that the new algorithm has reduced the timings significantly while ensuring files are evenly distributed across all runners, and the results at the end demonstrate as much. Here's how I used some system design techniques to design and implement this enhancement.&lt;/p&gt;

&lt;h2&gt;The issue with using a "round-robin" approach in this case&lt;/h2&gt;

&lt;p&gt;A "round-robin" algorithm is a common, simple approach mainly used for handling dynamic traffic. &lt;a href="https://www.vmware.com/topics/round-robin-load-balancing" rel="noopener noreferrer"&gt;VMWare does a good explanation of it here&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In a nutshell, round robin network load balancing rotates connection requests among web servers in the order that requests are received. For a simplified example, assume that an enterprise has a cluster of three servers: Server A, Server B, and Server C.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first request is sent to Server A.&lt;/li&gt;
&lt;li&gt;The second request is sent to Server B.&lt;/li&gt;
&lt;li&gt;The third request is sent to Server C.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The load balancer continues passing requests to servers based on this order. This ensures that the server load is distributed evenly to handle high traffic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There is a stark difference between an open, dynamic process and the &lt;code&gt;cypress run&lt;/code&gt; command, however: &lt;code&gt;cypress run&lt;/code&gt; is a static process with a defined end point, whereas balancing traffic usually means the line of communication is intentionally left open. For network requests, optimization matters most for the &lt;strong&gt;present&lt;/strong&gt; load of traffic; the total time the whole process runs before it is finally shut down is not necessarily a concern. For running tests, however, we want them to finish as fast as possible: while improving each individual test file is one option, a second option is to distribute the files across parallel processes to minimize the amount of time it takes to fully complete testing all of them.&lt;/p&gt;

&lt;p&gt;With my initial round-robin algorithm, I sorted the test files by their expected run time, and then assigned them out to each &lt;strong&gt;runner&lt;/strong&gt;, or worker process. Like the description above, if I had three runners and nine files, then the first runner gets the first file, the second gets the second file, the third gets the third, and so on, repeating. However, since the files are sorted by runtime, the first runner will always take the longest to run, and the last runner the shortest. The differences between these runtimes are very unbalanced: if we divide the nine files into three sets of three files each, then the first runner always gets the highest running time from each of those sets:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;File timings (in milliseconds) &lt;code&gt;[900, 800, 700, 600, 500, 400, 300, 200, 100]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Runners after load balancing: &lt;br&gt;
&lt;code&gt;[ [900, 600, 300], [800, 500, 200], [700, 400, 100] ]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Total run times:&lt;br&gt;
&lt;code&gt;[ 1800, 1500, 1200 ]&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note that the difference between the longest and shortest runners is 600 ms. This can definitely be improved.&lt;/p&gt;
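&lt;p&gt;As a hedged sketch, the round-robin assignment above can be reproduced in a few lines of TypeScript; the function and variable names here are illustrative and not part of the package's API:&lt;/p&gt;

```typescript
// Illustrative round-robin assignment: file timings are pre-sorted from
// longest to shortest, then dealt out to the runners in order, like cards.
function roundRobin(timings: number[], runnerCount: number): number[][] {
  const runners: number[][] = Array.from({ length: runnerCount }, () => []);
  timings.forEach((t, i) => runners[i % runnerCount].push(t));
  return runners;
}

const runners = roundRobin([900, 800, 700, 600, 500, 400, 300, 200, 100], 3);
const totals = runners.map((r) => r.reduce((a, b) => a + b, 0));
console.log(totals); // [1800, 1500, 1200]
```

&lt;p&gt;The 600 ms spread between the first and last totals falls directly out of the sorted order.&lt;/p&gt;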

&lt;h2&gt;Defining a new solution&lt;/h2&gt;

&lt;p&gt;Now, after noticing the extremes above, I wanted to restart and determine &lt;em&gt;what&lt;/em&gt; I actually needed out of load balancing. Originally, it was just to evenly distribute &lt;code&gt;X&lt;/code&gt; files between &lt;code&gt;Y&lt;/code&gt; runners. This does not really optimize for time, however. Instead, I came up with a new goal statement:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The load balancing algorithm should keep the total runtime of the Cypress test execution process as low as possible, while evenly balancing test files amongst all available runner processes.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With that in mind, I now could design a real working solution. Before even defining the boundaries of the algorithm, let's take a look at what we know about the affected process.&lt;/p&gt;

&lt;h3&gt;Initial observations of the Cypress &lt;code&gt;run&lt;/code&gt; command&lt;/h3&gt;

&lt;p&gt;I collected some thoughts around Cypress and the &lt;code&gt;cypress run&lt;/code&gt; process. Here's the list I used to define the requirements later on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cypress has both an &lt;code&gt;open&lt;/code&gt; and a &lt;code&gt;run&lt;/code&gt; process, and they are distinct from each other. The &lt;code&gt;open&lt;/code&gt; process launches an interactive test runner UI and can remain available after a test completes. The &lt;code&gt;run&lt;/code&gt; process executes a set of files provided to it and ends when it completes test execution and any post-test events.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;cypress run&lt;/code&gt; command is &lt;strong&gt;static&lt;/strong&gt;: it has a defined start point and end point, with required inputs and various outputs.&lt;/li&gt;
&lt;li&gt;The inputs to &lt;code&gt;cypress run&lt;/code&gt; include a set of file names or file patterns to determine the test files to use; without a list of files that it &lt;em&gt;can&lt;/em&gt; find within the suite, it will exit after starting up without executing any tests.&lt;/li&gt;
&lt;li&gt;Excluding setup and teardown of the process, the total &lt;strong&gt;test set&lt;/strong&gt; execution time is the accumulation of the runtimes from all test files being executed.

&lt;ul&gt;
&lt;li&gt;Furthermore, if every test could exist in a separate process run in parallel, then the minimum test set execution time of the entire process is equal to the longest-running test file. To get the lowest possible test set execution time, we must wait for the longest-running file to complete.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;When parallelized runner processes are used that can each run multiple test files, the total execution time of each process should be kept at or below that of the process containing the longest-running test file.

&lt;ul&gt;
&lt;li&gt;For example, if a runner has only one file that is the longest-running file, then the accumulation of test timings across all other runners should be kept lower than the longest-running process, if possible.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;To know how long a file runs, it must be executed at least &lt;strong&gt;once&lt;/strong&gt; and have its timing recorded in a persistent location, like a statistics file that keeps a history of test file timings.&lt;/li&gt;

&lt;/ul&gt;
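&lt;p&gt;The last two observations imply a simple lower bound worth keeping in mind: no balancing can finish faster than the longer of the single longest file and the total runtime split evenly across runners. A small sketch (hypothetical helper, not part of the package):&lt;/p&gt;

```typescript
// Lower bound on the best possible parallel runtime:
// either one runner must wait for the longest file,
// or some runner must hold at least an even share of the total time.
function lowerBound(timings: number[], runnerCount: number): number {
  const longestFile = Math.max(...timings);
  const evenShare = timings.reduce((a, b) => a + b, 0) / runnerCount;
  return Math.max(longestFile, evenShare);
}

console.log(lowerBound([900, 800, 700, 600, 500, 400, 300, 200, 100], 3)); // 1500
```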

&lt;p&gt;That's a lot of observations, but they helped immensely to shape a working solution.&lt;/p&gt;

&lt;h3&gt;Defining requirements of the new algorithm&lt;/h3&gt;

&lt;p&gt;Here's where it got fun: &lt;strong&gt;how&lt;/strong&gt; do I communicate to myself what the algorithm must do? Well, based on the earlier observations, I can pick out a lot of things that are already well-defined.&lt;/p&gt;

&lt;h4&gt;Requirements for the load balancing function&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Load balancing only affects the execution of the &lt;strong&gt;test set&lt;/strong&gt; and does not consider any setup or teardown events, since it only records timings for test files.&lt;/li&gt;
&lt;li&gt;The algorithm must know the following inputs:

&lt;ul&gt;
&lt;li&gt;the list of files to balance&lt;/li&gt;
&lt;li&gt;a count of runners (worker processes)&lt;/li&gt;
&lt;li&gt;a way to read a statistics file containing the timings of the files. &lt;/li&gt;
&lt;li&gt;&lt;em&gt;In this case, it records the durations, average, and the median execution time of a file: my algorithm uses the &lt;strong&gt;median&lt;/strong&gt; execution time when balancing.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;If the timing of a file is not known (for instance, it is a new file), assume an initial timing of 0.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;A plugin must be registered in the Cypress &lt;code&gt;setupNodeEvents&lt;/code&gt; hook to record the duration of each test file after it has been executed, update the file statistics, and save them to a persistent statistics file. In this case, it is saved to a &lt;code&gt;/.cypress_load_balancer&lt;/code&gt; directory in a file named &lt;code&gt;spec-map.json&lt;/code&gt;. This file will be reused by the balancing function to know the timing of each file.&lt;/li&gt;

&lt;li&gt;Each runner process will only contain the name of each file it is executing.&lt;/li&gt;

&lt;/ul&gt;
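&lt;p&gt;To illustrate the statistics bookkeeping the plugin performs, here is a hedged sketch of how a newly recorded duration could update a file's entry. The &lt;code&gt;Stats&lt;/code&gt; shape mirrors what the requirements describe (durations, average, median), but the update logic is illustrative rather than the package's exact code:&lt;/p&gt;

```typescript
interface Stats {
  durations: number[];
  average: number;
  median: number;
}

// Append a new duration and recompute the average and median.
// The balancer uses the median as the file's "expected run time".
function recordDuration(stats: Stats, duration: number): Stats {
  const durations = [...stats.durations, duration];
  const sorted = [...durations].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median =
    sorted.length % 2 === 1 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
  const average = durations.reduce((a, b) => a + b, 0) / durations.length;
  return { durations, average, median };
}

console.log(recordDuration({ durations: [100, 300], average: 200, median: 200 }, 500));
// { durations: [ 100, 300, 500 ], average: 300, median: 300 }
```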

&lt;h4&gt;Requirements of the algorithm&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;It accepts a list of filenames, the runner count, and can access the statistics file, &lt;code&gt;spec-map.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;To start, get the statistics of each file, and then sort the files by &lt;strong&gt;longest to shortest runtime&lt;/strong&gt; as &lt;code&gt;sortedFiles&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Initialize &lt;code&gt;X&lt;/code&gt; runners as empty arrays &lt;code&gt;[]&lt;/code&gt;. For each runner, remove the longest running file from the &lt;code&gt;sortedFiles&lt;/code&gt; array and add it to the runner array. For example, for 3 runners, runner 1 gets the longest running file, runner 2 gets the second longest, and so on.&lt;/li&gt;
&lt;li&gt;Calculate the total timing of each runner, then save the total time of the &lt;strong&gt;largest&lt;/strong&gt; runner to a variable named &lt;code&gt;highestTotalRunnerTime&lt;/code&gt;; this variable may be updated later.&lt;/li&gt;
&lt;li&gt;Then, sort the runners from &lt;strong&gt;shortest to longest runtime&lt;/strong&gt;; this is intentionally the inverse of how the files are sorted.&lt;/li&gt;
&lt;li&gt;Next, it will iterate over every runner &lt;em&gt;except for&lt;/em&gt; the largest runner. For each:

&lt;ul&gt;
&lt;li&gt;If the runner has a larger run time than the &lt;code&gt;highestTotalRunnerTime&lt;/code&gt;, skip adding any files to it in this iteration.&lt;/li&gt;
&lt;li&gt;If not, then remove the next highest file from &lt;code&gt;sortedFiles&lt;/code&gt; and add it to the runner.&lt;/li&gt;
&lt;li&gt;Repeat until:

&lt;ul&gt;
&lt;li&gt;no more files remain, or&lt;/li&gt;
&lt;li&gt;all runners have a total time equal to or greater than the original &lt;code&gt;highestTotalRunnerTime&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;If there are more files to add and all runners now have a time equal to or higher than &lt;code&gt;highestTotalRunnerTime&lt;/code&gt;, then this process will repeat as such:

&lt;ul&gt;
&lt;li&gt;If &lt;strong&gt;all&lt;/strong&gt; runners have the exact same execution time, then:

&lt;ul&gt;
&lt;li&gt;iterate over each runner and add the next highest file time; this becomes the new basis of the minimum execution time.&lt;/li&gt;
&lt;li&gt;Sort the runners from &lt;strong&gt;shortest to longest runtime&lt;/strong&gt; again.&lt;/li&gt;
&lt;li&gt;Get the largest runner and set its total execution time as the new value of &lt;code&gt;highestTotalRunnerTime&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;(&lt;em&gt;This prevents a deadlock state when more files remain but no runners are equal to the &lt;code&gt;highestTotalRunnerTime&lt;/code&gt;.&lt;/em&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Continue until all files have been placed in a runner.&lt;/li&gt;

&lt;li&gt;Return the array of runners.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;The "weighted-largest" algorithm in depth&lt;/h2&gt;

&lt;p&gt;Here is the algorithm function in TypeScript with some additional comments to explain how it works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;LoadBalancingMap&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;e2e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="na"&gt;relativeFileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;durations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
        &lt;span class="nl"&gt;average&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nl"&gt;median&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="nl"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="na"&gt;relativeFileName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;durations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;[];&lt;/span&gt;
        &lt;span class="nl"&gt;average&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="nl"&gt;median&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;balanceByWeightedLargestRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;loadBalancingMap&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LoadBalancingMap&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;testingType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TestingType&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;runnerCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;filePaths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FilePath&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;Runners&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runnerCount&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;filePaths&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

  &lt;span class="c1"&gt;//Helper methods&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FilePath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;loadBalancingMap&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;testingType&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="nx"&gt;fp&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getTotalTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FilePath&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;stats&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;median&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sortByLargestMedianTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FilePath&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
    &lt;span class="nx"&gt;fps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;])).&lt;/span&gt;&lt;span class="nf"&gt;reverse&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;//Sort **files** by highest to lowest "expected run time" (median runtime)&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sortedFilePaths&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="nf"&gt;sortByLargestMedianTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;filePaths&lt;/span&gt;&lt;span class="p"&gt;)];&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;addHighestFileToRunner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FilePath&lt;/span&gt;&lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;sortedFilePaths&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shift&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="c1"&gt;//Initialize each runner empty&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Runners&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;runnerCount&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;Runners&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;//This could be done more efficiently by using array indices alongside an array of every runners' total time,&lt;/span&gt;
  &lt;span class="c1"&gt;// instead of resorting each iteration.&lt;/span&gt;
  &lt;span class="nl"&gt;sortRunners&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sortedFilePaths&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;areAllRunnersEqualInRunTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;every&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]));&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;areAllRunnersEqualInRunTime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;//When all runners are equal in time, pop out the file with the next highest runtime for each runner&lt;/span&gt;
      &lt;span class="c1"&gt;//This will prevent a deadlock state while also keeping files evenly spread amongst runners while still balanced&lt;/span&gt;
      &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;addHighestFileToRunner&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;//Sort **runners** by lowest to highest runtime&lt;/span&gt;
    &lt;span class="nx"&gt;runners&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sort&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="c1"&gt;//Get the highest runner runtime of this iteration to compare against the other smaller runners&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;highestRunTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

    &lt;span class="c1"&gt;//"runners.length - 2" means it will not add files to the largest runner at "runners.length - 1", since we are using it as a basis for all other runners.&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sortedFilePaths&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;break&lt;/span&gt; &lt;span class="nx"&gt;sortRunners&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentRunner&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentRunnerRunTime&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getTotalTime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentRunner&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentRunnerRunTime&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="nx"&gt;highestRunTime&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nf"&gt;addHighestFileToRunner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentRunner&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sortedFilePaths&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;//Remove empty values just in case&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;runners&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;filterOutEmpties&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;Runners&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Keep in mind that this algorithm is not heavily optimized and could be made more efficient, but in this case, memory usage and time complexity are not going to be a concern, even with large file sets.&lt;/p&gt;
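&lt;p&gt;As the inline comment in the function notes, the repeated re-sorting could be avoided by tracking a running total per runner. Here is a hedged sketch of that idea; it amounts to the classic "longest processing time first" greedy heuristic, not the package's exact algorithm:&lt;/p&gt;

```typescript
// Keep a totals array and always hand the next-longest file to the
// runner with the smallest running total, instead of re-sorting runners.
function lptBalance(timings: number[], runnerCount: number): number[][] {
  const sorted = [...timings].sort((a, b) => b - a); // longest first
  const runners: number[][] = Array.from({ length: runnerCount }, () => []);
  const totals: number[] = new Array(runnerCount).fill(0);
  for (const t of sorted) {
    const idx = totals.indexOf(Math.min(...totals)); // least-loaded runner
    runners[idx].push(t);
    totals[idx] += t;
  }
  return runners;
}

const balanced = lptBalance([900, 800, 700, 600, 500, 400, 300, 200, 100], 3);
console.log(balanced.map((r) => r.reduce((a, b) => a + b, 0))); // [1600, 1500, 1400]
```

&lt;p&gt;On the same nine files it lands at the same 200 ms spread, just with a different trade-off between simplicity and the even-distribution guarantee.&lt;/p&gt;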

&lt;p&gt;If we take the "weighted-largest" algorithm and compare it against the "round-robin" with the same 9 files and 3 runners, we get this output:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;File timings: &lt;code&gt;[900, 800, 700, 600, 500, 400, 300, 200, 100]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Runners after load balancing: &lt;br&gt;
&lt;code&gt;[ [800, 500, 100], [700, 600, 200], [900, 400, 300] ]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Total run times:&lt;br&gt;
&lt;code&gt;[ 1400, 1500, 1600 ]&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The highest time is only 200 ms less than the result of the “round-robin” algorithm, but the difference between the largest and smallest runner is now also only 200 ms. While this may seem marginal, the examples below demonstrate far more dramatic gains.&lt;/p&gt;
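&lt;p&gt;To make the approach concrete, here is a minimal sketch of the "weighted-largest" idea as a greedy longest-processing-time assignment. The function and variable names are my own illustration, not the package's actual API, but it reproduces the totals above.&lt;/p&gt;

```javascript
// Greedy sketch: sort timings descending, then always give the next
// file to the runner with the lowest accumulated total.
// Names here are illustrative, not the package's API.
function weightedLargest(timings, runnerCount) {
  const runners = Array.from({ length: runnerCount }, () => ({ total: 0, files: [] }));
  const sorted = [...timings].sort((a, b) => b - a);
  for (const t of sorted) {
    // Pick the runner with the smallest accumulated runtime so far
    const lowest = runners.reduce((best, r) => (r.total >= best.total ? best : r));
    lowest.files.push(t);
    lowest.total += t;
  }
  return runners;
}

const result = weightedLargest([900, 800, 700, 600, 500, 400, 300, 200, 100], 3);
console.log(result.map((r) => r.total).sort((a, b) => a - b));
// → [ 1400, 1500, 1600 ], matching the totals above
```

&lt;p&gt;Sorting once up front and always filling the currently lowest runner is what keeps the spread between runners small.&lt;/p&gt;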

&lt;h3&gt;
  
  
  Examples: Comparisons of different file timing distributions
&lt;/h3&gt;

&lt;p&gt;To get an accurate idea of how these algorithms handle different file timings, I created 8 variations of file time distributions and compared the results of the "round-robin" and "weighted-largest" algorithms.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Every file has the exact same timing
&lt;/h4&gt;

&lt;p&gt;Here there were 9 files, each having a timing of &lt;code&gt;100&lt;/code&gt;. Both algorithms ended up with the same result, which should be expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2vg0vtc0dj64swpessm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2vg0vtc0dj64swpessm.png" alt="Round Robin results when every value is the same" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1w5f5budvamje5rsn7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1w5f5budvamje5rsn7q.png" alt="weighted-largest results when every value is the same" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both had an equal distribution amongst all runners. This seems reasonable.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Bell curve distribution
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values: 
[
1, 
10, 10, 
20, 20, 20, 
30, 30, 30, 30, 
40, 40, 40, 40, 40, 
50, 50, 50, 50, 50, 50, 
60, 60, 60, 60, 60, 
70, 70, 70, 70, 
80, 80, 80, 
90, 90, 
100
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F461mat2si9ue5ga0x3tt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F461mat2si9ue5ga0x3tt.png" alt="round-robin results for bell curve distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg94fstzngz6zn632og4i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg94fstzngz6zn632og4i.png" alt="weighted-largest results for bell curve distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we can already start to see improvement. The "weighted-largest" algorithm balanced each runner appropriately, with a spread of &lt;code&gt;1&lt;/code&gt;, whereas the "round-robin" approach has a spread of &lt;code&gt;80&lt;/code&gt;. That's pretty good so far!&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Extreme high-end distribution
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values:
[
100, 100, 
500, 500, 500, 500, 500, 500, 500, 500
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tb3rjw56ucez3qs1w3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tb3rjw56ucez3qs1w3f.png" alt="round-robin results for extreme high-end distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6j758gbarnlov03dswy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6j758gbarnlov03dswy.png" alt="weighted-largest results for extreme high-end distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I used only two runners for this trial. Both algorithms balanced the files equally well. This looks reasonable.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Extreme low-end distribution, version 1: total sum of low values equals single high value
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values:
[
100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 
1000
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcbt5nyaa4qr2zun9d3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcbt5nyaa4qr2zun9d3a.png" alt="round-robin results for extreme low-end distribution, where summation of low values is equal to highest value" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm189vaf38s0onah4g2oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm189vaf38s0onah4g2oo.png" alt="weighted-largest results for extreme low-end distribution, where summation of low values is equal to highest value&amp;lt;br&amp;gt;
" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wow! We can clearly see improvements with the "weighted-largest" algorithm. Both of its runners have equal time, whereas the "round-robin" approach is incredibly unbalanced, with a difference of &lt;code&gt;1000&lt;/code&gt; in total time between its runners.&lt;/p&gt;
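&lt;p&gt;The arithmetic behind this example can be checked with a tiny sketch (the names are illustrative, not the package's API). Round-robin deals the descending-sorted files alternately; the greedy approach always fills the runner with the lowest total:&lt;/p&gt;

```javascript
// Ten 100s and one 1000, split across two runners.
const timings = [1000, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100];

// Round-robin: file i goes to runner i % 2
const rr = [0, 0];
timings.forEach((t, i) => { rr[i % 2] += t; });

// Greedy "weighted-largest": next file goes to the currently lowest runner
const wl = [0, 0];
for (const t of timings) {
  const idx = wl[0] >= wl[1] ? 1 : 0;
  wl[idx] += t;
}

console.log(rr); // → [ 1500, 500 ]  (spread of 1000)
console.log(wl); // → [ 1000, 1000 ] (perfectly even)
```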

&lt;h4&gt;
  
  
  Example: Extreme low-end distribution, version 2: total sum of low values is greater than single high value
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values:
[
100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100,
1000
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz3cr58yx64ctngzv677.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmz3cr58yx64ctngzv677.png" alt="round-robin results for extreme low-end distribution, where summation of low values is greater than the highest value" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1w7nu8zb2627hghpo5o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1w7nu8zb2627hghpo5o.png" alt="weighted-largest results for extreme low-end distribution, where summation of low values is greater than the highest value" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is not much difference from the first low-end distribution trial. The "weighted-largest" algorithm produces better results.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Extreme center distribution
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values:
[
10, 
20, 
30, 
40, 
50, 50, 50, 50, 50, 50, 
60, 
70, 
80, 
90, 
100
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8irsg50kxl7oigrpsma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8irsg50kxl7oigrpsma.png" alt="round-robin results for extreme center distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscn5blg0d6qf07azvcz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flscn5blg0d6qf07azvcz.png" alt="weighted-largest results for extreme center distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What is most interesting here is that it is much easier to visualize how the "round-robin" algorithm handles each value: each runner's runtime is shown in descending order. Still, the "weighted-largest" algorithm produces very even results.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Extreme "ends" distribution
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values:
[
10, 10, 10, 10,
20,
30, 
40, 
50, 
60,
70, 
80, 
90,
100, 100, 100, 100
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwa1gjykw63vvr002cxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwa1gjykw63vvr002cxl.png" alt="round-robin results for extreme 'ends' distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ef7xzimch4htngds7h6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ef7xzimch4htngds7h6.png" alt="weighted-largest results for extreme 'ends' distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Again, the "round-robin" approach is easy to visualize in descending order. The "weighted-largest" algorithm has much more even results.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example: Uniform value distribution
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values:
[
100, 100, 100, 100, 100, 100, 
200, 200, 200, 200, 200, 200
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flz3j34od1n0v9goh0b3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flz3j34od1n0v9goh0b3r.png" alt="round-robin results for uniform value distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj6mjqkyi30jh7gw9dar.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj6mjqkyi30jh7gw9dar.png" alt="weighted-largest results for uniform value distribution" width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Like the very first trial with all timings equal to one another, both algorithms will balance exactly the same in this case.&lt;/p&gt;




&lt;h3&gt;
  
  
  Conclusion from the initial trials
&lt;/h3&gt;

&lt;p&gt;These results clearly show the "weighted-largest" algorithm to be much better at balancing files appropriately. Not only is each runner's timing more evenly distributed, but the longest runtime is also kept as low as possible. However, one thing to consider is that a lot of these values do not have much variance from one another; most are in steps of 10 or 100 away from each other. So, I decided to try this with some real-world results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Examples: Real world trials
&lt;/h3&gt;

&lt;h4&gt;
  
  
  130 component tests
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;I am skipping the values here as there are too many, but the longest-running file takes 3 minutes, and most take under 10 seconds. There is a greater variance of timings across all files in this example.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs51xindc3btux3wrszu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvs51xindc3btux3wrszu.png" alt="round-robin results for a component test set" width="554" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztf9wranot2hvpa0xxtp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztf9wranot2hvpa0xxtp.png" alt="weighted-largest results for a component test set" width="554" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this trial, there were 130 tests across 6 runners. There is a stark difference between the two algorithms, with "weighted-largest" being the more effective solution here. All of its jobs take under 4 minutes, compared to the longest "round-robin" job, which takes over &lt;strong&gt;9&lt;/strong&gt; minutes!&lt;/p&gt;

&lt;h4&gt;
  
  
  45 end-to-end tests
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Values (in minutes):
[
  0.01, 0.01, 0.01, 0.01, 0.01, 0.06,
  0.14, 1.22, 1.31, 1.76, 1.81, 2.01,
  2.22, 2.31, 2.32, 2.32, 2.42, 3.15,
  3.19, 3.28, 3.32, 3.41, 3.68, 4.33,
  4.86, 5.09, 5.1, 7.05, 8.37, 9.01,
  9.99, 10.06, 11.95, 12.75, 13.18, 13.39,
  15.69, 17.21, 19.19, 20.03, 20.62, 20.75,
  22.45, 45.08, 46.21
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uqhcags5968tkzqcl26.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uqhcags5968tkzqcl26.png" alt="round-robin results for an end-to-end test set" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyzw980em531veoen4cv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyzw980em531veoen4cv.png" alt="weighted-largest results for an end-to-end test set" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The end-to-end tests have a lot more variance between file timings. The longest-running file has a median runtime of over 45 minutes!&lt;/p&gt;

&lt;p&gt;In this trial, I used 16 runners, as this better reflects a real organization's setup. The great thing is that the "weighted-largest" algorithm was able to keep that single long file in its own runner, and all other runners stayed below it. I consider this a huge success.&lt;/p&gt;
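&lt;p&gt;That isolation can be sanity-checked with the same greedy idea, assuming a simple "lowest runner first" assignment rather than the package's exact implementation:&lt;/p&gt;

```javascript
// Sketch: greedily assign the end-to-end timings across 16 runners and
// check that the longest file (46.21 min) gets a runner to itself.
const minutes = [
  0.01, 0.01, 0.01, 0.01, 0.01, 0.06,
  0.14, 1.22, 1.31, 1.76, 1.81, 2.01,
  2.22, 2.31, 2.32, 2.32, 2.42, 3.15,
  3.19, 3.28, 3.32, 3.41, 3.68, 4.33,
  4.86, 5.09, 5.1, 7.05, 8.37, 9.01,
  9.99, 10.06, 11.95, 12.75, 13.18, 13.39,
  15.69, 17.21, 19.19, 20.03, 20.62, 20.75,
  22.45, 45.08, 46.21
];

const runners = Array.from({ length: 16 }, () => []);
const sum = (r) => r.reduce((a, b) => a + b, 0);
for (const t of [...minutes].sort((a, b) => b - a)) {
  // Always add to the runner with the smallest current total
  runners.sort((a, b) => sum(a) - sum(b));
  runners[0].push(t);
}

const longest = runners.find((r) => r.includes(46.21));
console.log(longest); // → [ 46.21 ]: the longest file stays isolated
```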

&lt;h2&gt;
  
  
  Additional considerations and final conclusions
&lt;/h2&gt;

&lt;p&gt;This exercise was really refreshing to me -- it was great to actually see improvement over a common solution by readdressing the actual goal. For static processes, it makes much more sense to balance all runners to stay below the limit of the largest runner, since the total execution time is ultimately bounded by the longest of its parallel processes. The "weighted-largest" algorithm feels much more fitting as a solution here.&lt;/p&gt;




&lt;p&gt;As a final note, I wanted to compare the results against the &lt;strong&gt;"cypress-split"&lt;/strong&gt; package as well, so I took the component tests and split them amongst 6 runners there, too. The results are nearly identical across both packages!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztf9wranot2hvpa0xxtp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztf9wranot2hvpa0xxtp.png" alt="weighted-largest results for a component test set" width="554" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqtm9q4i19653l6fdhzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzqtm9q4i19653l6fdhzx.png" alt="cypress-split results for a component test set" width="554" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you'd like to add this package to your Cypress suite, see it here!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/hammzj/cypress-load-balancer" rel="noopener noreferrer"&gt;https://github.com/hammzj/cypress-load-balancer&lt;/a&gt;&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>testing</category>
      <category>node</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How I perform code reviews</title>
      <dc:creator>Zachary Hamm</dc:creator>
      <pubDate>Tue, 01 Jul 2025 18:47:09 +0000</pubDate>
      <link>https://dev.to/hammzj/how-i-perform-code-reviews-1gd7</link>
      <guid>https://dev.to/hammzj/how-i-perform-code-reviews-1gd7</guid>
      <description>&lt;p&gt;In the distant past, I had been a bit too quick with how I checked pull requests: I’d read the description and then look over each file in order as presented in the changelist. An approach like that is only partially effective. Over time, I’ve learned how to collect my thoughts and separate items by importance. A certain conclusion must be drawn when conducting a code review: &lt;em&gt;“The system can accept these changes as currently presented, and we can handle any complexity that may arise from them. Therefore, I approve.”&lt;/em&gt; To get to this state, I put together a little guide to organize reviews and become more efficient with them.&lt;/p&gt;

&lt;p&gt;When requested for a code review, I manage it under three principles in this order: &lt;strong&gt;context&lt;/strong&gt;, &lt;strong&gt;affordability&lt;/strong&gt;, and &lt;strong&gt;standardization&lt;/strong&gt;. If I find issues that require new changes, I let the developer know and only continue to the next stage if any additional suggestions would remain applicable. Let’s look at each below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context&lt;/strong&gt; deals with the meaning behind the changes proposed: &lt;em&gt;"Can I understand the &lt;strong&gt;purpose&lt;/strong&gt; from how code translates into written requirements?"&lt;/em&gt; Better yet, does the code correctly translate into a description of them? This is the first thing I look for when beginning a review. It is most useful for anyone who has knowledge of the system but needs to understand the reasoning for new changes.&lt;/p&gt;

&lt;p&gt;Discovering the context will communicate the “chain of command” within the code: I can see how each changed file connects to one another, from high-levels down into deeper system logic. This helps detect glaring problems or “miscommunications” very early on. If any exist, then the review can be pushed back early to the developer as it likely means the code isn’t doing what it &lt;em&gt;says&lt;/em&gt; it should be doing.&lt;/p&gt;

&lt;p&gt;How do I try to discover the context? Start reading the files closest to the user or end state of the application, and move downwards to files closer to the internal system logic. A change is more likely to have a noticeable impact the closer it is to the user or end process. As I trace the logic from those higher-level functions down to lower-level ones, I can map out the changes more effectively. Context is drawn from seeing how the technical implementation translates into its written communication of changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Affordability
&lt;/h2&gt;

&lt;p&gt;It's easy to think that affordability means the monetary cost...but not quite. &lt;strong&gt;Affordability&lt;/strong&gt; deals with the total cost of maintaining the changes in the long term, covering everything from data collisions and performance to security, scalability, and quality. It limits overall complexity by identifying risks. &lt;em&gt;“How much time and energy do we need to spend on corrections if this code behaves incorrectly?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Understanding the cost distinguishes those who can anticipate future issues. They can stay one step ahead of the system by having greater knowledge of the full architecture and language paradigms. While I may not have deep knowledge of every possible impact, I can manage what I do know. This principle is necessary for everyone to learn, as it lessens knowledge gaps and opens discussion around possibilities for decay. It takes a village.&lt;/p&gt;

&lt;p&gt;Balancing affordability also means finding compromise with the code. It is not about finding things to prevent approval, but instead, is finding solutions that limit unexpected behaviors. Sometimes follow-up work is required when the feature is more important than the assumptions. Sometimes, the work requires additional shaping to be safe. Ask yourself, &lt;em&gt;“Can we afford to manage the identified risks?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;How is affordability uncovered? This part of the process is difficult since it requires sufficiently understanding the changes &lt;em&gt;to&lt;/em&gt; the system, but it can be summed up in a few small statements: Understand the context first. Study the system. Read up on the languages used and their paradigms. Map out the changes to the architecture, from low level to high level. Learn about the consequences of any new implementations and dependencies.&lt;/p&gt;

&lt;p&gt;Lastly, check for tests as they protect against undesired costs and report on unwanted behaviors. Tests document your system and ensure that previous behaviors remain stable. Advocate for &lt;strong&gt;quality&lt;/strong&gt;, so even when you don't know the entire system, ensure an agreeable level of testing occurred against the new changes, and call out untested behaviors.&lt;/p&gt;

&lt;h2&gt;
  
  
  Standardization
&lt;/h2&gt;

&lt;p&gt;These checks are the most noticeable within reviews, since &lt;strong&gt;standardization&lt;/strong&gt; deals with agreements on following established code patterns and paradigms. Most standardization issues can be automatically detected through tools like linters and type checkers. However, suggestions here can get a bit opinionated when tools or proper standards for patterns are not already established for the system.&lt;/p&gt;

&lt;p&gt;For instance, I may find that a functional programming approach may work better than using long &lt;code&gt;for/each&lt;/code&gt; loops, or that case statements are easier to read over complex &lt;code&gt;if/else&lt;/code&gt; blocks, or that certain method names are awkward. Others may have differing opinions. &lt;/p&gt;

&lt;p&gt;This is why standardization is the least important part of my pull request review: it can be automated or initially agreed upon as a "contract" when making a change, but anything else depends on managing opinions. Having written standards removes bias and reduces variance in the code: in essence, they're similar to requirements themselves.&lt;/p&gt;

&lt;p&gt;After I understand the context and affordability, I then scan the files more thoroughly for standardization quirks. Cleaning up code is a chore that can be done once the higher priorities are discussed.&lt;/p&gt;

&lt;p&gt;I still find this important to do, though, as code that looks the same throughout the suite is easier to manage, read, and document. It creates a visual language that describes the system. Variance and customization should only be necessary when standards do not exist for a given code style, or when other approaches have been found to be less effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;One last note is that I don’t &lt;em&gt;need&lt;/em&gt; to end a review as soon as I find an issue with the context or affordability, unless it is considerably large and may require reorganization. Instead, I will still read over the rest and make suggestions along the way. Some suggestions don’t need to hold up an approval, either, so it is wise to make calls based on what is necessary and what is nice-to-have. I’ve been trying to adopt &lt;a href="https://conventionalcomments.org/" rel="noopener noreferrer"&gt;conventional comments&lt;/a&gt; a bit more, but without them sounding too sterile. It’s ok to leave a bit of personality in your comment style.&lt;/p&gt;




&lt;p&gt;Pull request reviews come down to looking at these three principles in order: defining the &lt;strong&gt;context&lt;/strong&gt; behind the changes proposed and how they translate into a written description, the &lt;strong&gt;affordability&lt;/strong&gt; of the total cost of maintaining the changes in the long term, and ensuring &lt;strong&gt;standardization&lt;/strong&gt; on following established code patterns and paradigms. Doing this has helped me become more efficient in understanding the system through the changes made to it.&lt;/p&gt;

&lt;p&gt;I'm always curious how other people handle code reviews. Please share your approach, as I'm happy to discuss!&lt;/p&gt;

</description>
      <category>community</category>
      <category>github</category>
      <category>learning</category>
    </item>
    <item>
      <title>Compare changes to encrypted files without revealing secrets in a GitHub Actions pull request workflow!</title>
      <dc:creator>Zachary Hamm</dc:creator>
      <pubDate>Wed, 09 Apr 2025 18:03:02 +0000</pubDate>
      <link>https://dev.to/hammzj/compare-changes-to-encrypted-files-without-revealing-secrets-in-a-github-actions-pull-request-4kij</link>
      <guid>https://dev.to/hammzj/compare-changes-to-encrypted-files-without-revealing-secrets-in-a-github-actions-pull-request-4kij</guid>
      <description>&lt;p&gt;Recently I've worked on something I can describe as a very fun solution that has relieved much stress. &lt;/p&gt;

&lt;p&gt;On a project with frequent updates to encrypted environment files, it became tedious to review pull requests with changes to these files. We needed to manually verify changes locally, and some of them could be missed since GitHub can't show diffs on encrypted files. As a remedy, I thought, "why not just diff which &lt;em&gt;keys&lt;/em&gt; were changed within the file?" And luckily, we can compare files between branches with GitHub Actions!&lt;/p&gt;

&lt;p&gt;The workflow below will demonstrate an example of how this is possible using the exciting power of the &lt;code&gt;pull_request&lt;/code&gt; trigger: it can be set to run &lt;em&gt;only&lt;/em&gt; when certain files are changed in a PR!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem statement:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I need to compare changes made to an encrypted file within a pull request. The file is stored as key-value pairs of information, similar to a flat JSON structure, or an &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;The solution:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;During a pull request, use a GitHub Actions workflow to post a comment that displays which keys were changed without outputting any sensitive data.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Building a GitHub Workflow
&lt;/h3&gt;

&lt;h3&gt;
  
  
  The process
&lt;/h3&gt;

&lt;p&gt;A pull request is opened that contains edits to an encrypted file, which triggers the workflow. It will then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check out the repository from the current (source) branch&lt;/li&gt;
&lt;li&gt;In a subdirectory, check out the repository from the target branch (where you want to merge)&lt;/li&gt;
&lt;li&gt;With a custom-built JavaScript command, decrypt the files using a stored repository secret, and then perform a diff on them to detect &lt;strong&gt;Additions&lt;/strong&gt;, &lt;strong&gt;Deletions&lt;/strong&gt;, and &lt;strong&gt;Modifications&lt;/strong&gt;. &lt;em&gt;More on this below!&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Report back &lt;strong&gt;only non-sensitive key names&lt;/strong&gt; of anything changed as a new JSON object&lt;/li&gt;
&lt;li&gt;Pass the JSON object to a bot that can post a comment with those changes on the pull request&lt;/li&gt;
&lt;/ul&gt;
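
&lt;p&gt;As an illustration (the key names here are hypothetical), the JSON object of changed keys reported back by the diff step might look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Added": ["NEW_FEATURE_FLAG"],
  "Removed": ["OLD_API_KEY"],
  "Modified": ["MY_SECRET"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that only key names appear; the values themselves are never output.&lt;/p&gt;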

&lt;p&gt;Let's get to work!&lt;/p&gt;

&lt;h3&gt;
  
  
  Set up the workflow file
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The trigger
&lt;/h4&gt;

&lt;p&gt;For this workflow, we will assume changes are being made to &lt;code&gt;.env&lt;/code&gt; files: plain-text files of key-value pairs. They look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXAMPLE_KEY=value
MY_SECRET=abcd1234!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Whenever this file is updated and committed back to the repository, we want the workflow to run and verify what keys were changed.&lt;/p&gt;

&lt;p&gt;Something really cool about workflow triggers is that they can be set to &lt;a href="https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#onpushpull_requestpull_request_targetpathspaths-ignore" rel="noopener noreferrer"&gt;run against individual file or directory changes within pull requests&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.env.enc'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the above, the workflow will only run when (1) the event is a pull request, and (2) the PR includes a change to the &lt;code&gt;.env.enc&lt;/code&gt; file at the root of the project directory.&lt;/p&gt;
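
&lt;p&gt;The &lt;code&gt;paths&lt;/code&gt; filter also supports glob patterns, so if a project stores multiple encrypted files, one trigger can cover them all. The pattern below is just an illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  pull_request:
    paths:
      - '**/*.enc'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;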

&lt;p&gt;Next, two jobs will be set up. The first will handle producing the diff based on changes to that file. The second will post a formatted comment on the PR based on the diff.&lt;/p&gt;

&lt;h4&gt;
  
  
  The first job: &lt;strong&gt;get-file-differences&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This job checks out the current branch's full repository and, separately, only the encrypted environment file from the base branch. It then uses a custom JavaScript command to decrypt and diff the files, and provides outputs back to the workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;get-file-differences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.produce-diff.outputs.message }}&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# Use the current branch's repository to run all commands&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout head branch&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="c1"&gt;# Checkout only the file from the base branch into a new directory named `base`&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout file from base branch&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.base_ref }}&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;base&lt;/span&gt;
          &lt;span class="na"&gt;sparse-checkout-cone-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
          &lt;span class="c1"&gt;# submodules &amp;amp; sparse-checkout allow checking out only a portion of the repository into another directory!&lt;/span&gt;
          &lt;span class="na"&gt;submodules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;sparse-checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;.env.enc&lt;/span&gt;

      &lt;span class="c1"&gt;# Run a clean install of the repository&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;

      &lt;span class="c1"&gt;# This is a custom JS script that can diff the files and output the results&lt;/span&gt;
      &lt;span class="c1"&gt;# It will be described below in more detail!&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run diff-env-files&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;produce-diff&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;BASE_DOTENVENC_FILE_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./base/.env.enc&lt;/span&gt;
          &lt;span class="na"&gt;CURRENT_DOTENVENC_FILE_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./.env.enc&lt;/span&gt;
          &lt;span class="na"&gt;DOTENVENC_PASS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOTENVENC_PASS }}&lt;/span&gt; &lt;span class="c1"&gt;#Needed for decryption by dotenvenc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  The second job: &lt;strong&gt;post-or-edit-comment&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The second job posts comments on the pull request as a bot, using &lt;a href="https://github.com/peter-evans/create-or-update-comment" rel="noopener noreferrer"&gt;peter-evans/create-or-update-comment&lt;/a&gt;. This job &lt;strong&gt;adds or amends a single comment only&lt;/strong&gt;: if more changes are pushed to the pull request, the original comment is updated rather than a new one published. It waits for the first job to finish, then uses its &lt;code&gt;message&lt;/code&gt; output in the comment body.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;post-or-edit-comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="c1"&gt;# These permissions are necessary for the `create-or-update-comment` action to work!&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;issues&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
      &lt;span class="na"&gt;pull-requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;get-file-differences&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Find Comment&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peter-evans/find-comment@v3&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;find-comment&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;issue-number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.number }}&lt;/span&gt;
          &lt;span class="na"&gt;comment-author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;github-actions[bot]'&lt;/span&gt;
          &lt;span class="na"&gt;body-includes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Updates&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;".env"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;file&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;this&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;pull&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;request'&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add or replace comment with newest file differences&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peter-evans/create-or-update-comment@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;comment-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.find-comment.outputs.comment-id }}&lt;/span&gt;
          &lt;span class="na"&gt;issue-number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.number }}&lt;/span&gt;
          &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;# Updates to ".env" file in this pull request&lt;/span&gt;

            &lt;span class="s"&gt;${{ needs.get-file-differences.outputs.message }}&lt;/span&gt;
          &lt;span class="na"&gt;edit-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
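
&lt;p&gt;With both jobs in place, the bot's comment renders the first job's &lt;code&gt;message&lt;/code&gt; output beneath the heading defined in &lt;code&gt;body&lt;/code&gt;. Using hypothetical key names, a posted comment might read:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Updates to ".env" file in this pull request

## Added
- MY_API_KEY

## Modified
- MY_API_URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;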






&lt;p&gt;&lt;strong&gt;Here is the complete YAML file:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# ./github/workflows/diff-env-files.yml&lt;/span&gt;

&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Display differences between ".env.enc" files&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;This annotates the current pull request with differences between the ".env.enc" files in the base and head branches.&lt;/span&gt;
  &lt;span class="s"&gt;It output the key names that have been added, removed, and modified on the head branch as a comment on the branch.&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.env.enc'&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;get-file-differences&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.produce-diff.outputs.message }}&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout head branch&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout file from base branch&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.base_ref }}&lt;/span&gt;
          &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;base&lt;/span&gt;
          &lt;span class="na"&gt;sparse-checkout-cone-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
          &lt;span class="na"&gt;submodules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;sparse-checkout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;.env.enc&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run diff-env-files&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;produce-diff&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;BASE_ENV_ENC_FILE_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./base/.env.enc&lt;/span&gt;
          &lt;span class="na"&gt;CURRENT_ENV_ENC_FILE_PATH&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./.env.enc&lt;/span&gt;
          &lt;span class="na"&gt;DOTENVENC_PASS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOTENVENC_PASS }}&lt;/span&gt;

  &lt;span class="na"&gt;post-or-edit-comment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;issues&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
      &lt;span class="na"&gt;pull-requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;get-file-differences&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Find Comment&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peter-evans/find-comment@v3&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;find-comment&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;issue-number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.number }}&lt;/span&gt;
          &lt;span class="na"&gt;comment-author&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;github-actions[bot]'&lt;/span&gt;
          &lt;span class="na"&gt;body-includes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Updates&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;".env"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;file&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;in&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;this&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;pull&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;request'&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add or replace comment with newest file differences&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;peter-evans/create-or-update-comment@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;comment-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.find-comment.outputs.comment-id }}&lt;/span&gt;
          &lt;span class="na"&gt;issue-number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ github.event.pull_request.number }}&lt;/span&gt;
          &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
            &lt;span class="s"&gt;# Updates to ".env" file in this pull request&lt;/span&gt;

            &lt;span class="s"&gt;${{ needs.get-file-differences.outputs.message }}&lt;/span&gt;
          &lt;span class="na"&gt;edit-mode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;replace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Set up the custom JavaScript command for &lt;code&gt;diff-env-files&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Warning: because this command handles these files in plaintext, take extra care not to log any information you do not want displayed in the pull request!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the workflow written, let's get the JavaScript command in place. It will handle decrypting the files, diffing them for Additions, Deletions, and Modifications, and setting outputs on the GitHub Actions step. The outputs are a Markdown message used for posting the comment, and a JSON object of the diffs.&lt;/p&gt;

&lt;p&gt;Notes about the packages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;a href="https://github.com/tka85/dotenvenc" rel="noopener noreferrer"&gt;dotenvenc&lt;/a&gt; package handles encryption and decryption of &lt;code&gt;.env&lt;/code&gt; files in this example.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/actions/toolkit/tree/main/packages/core" rel="noopener noreferrer"&gt;@actions/core&lt;/a&gt; contains GHA workflow helpers for JavaScript!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an example of the script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;path&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dotenvenc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tka85/dotenvenc&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;core&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@actions/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;performDiff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseBranchFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getKeyDifferences&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseKeys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentKeys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseKey&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;baseKeys&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;currentKeys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseKey&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;removed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getKeyDifferences&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;added&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;getKeyDifferences&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;modified&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseKeys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentKeys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseKey&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;baseKeys&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentKeys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseKey&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;base&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;baseKey&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;baseKey&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseKey&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;Added&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;added&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseBranchFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="na"&gt;Removed&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;removed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseBranchFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="na"&gt;Modified&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;modified&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseBranchFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="cm"&gt;/**
 * Writes the diffs of each key in a readable GitHub Markdown Format
 * @example Changes for each diff type
 * ## Added
 * - MY_API_KEY
 * - CUSTOMER_KEY
 * ## Removed
 * - MY_PASSWORD
 * ## Modified
 * - MY_API_URL
 *
 * @example Changes for additions only
 * ## Added
 * - MY_API_KEY
 * - CUSTOMER_KEY
 */&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getAsMarkdown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;items&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`- &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(([&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;vals&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="c1"&gt;//Only build if there are values for the key.&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;vals&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;`## &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;createList&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;vals&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;flat&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;//base-ref (target branch) file&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;baseBranchFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;dotenvenc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decrypt&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;encryptedFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;BASE_DOTENVENC_FILE_PATH&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;
    &lt;span class="c1"&gt;//head-ref (source branch) file&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;dotenvenc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decrypt&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;encryptedFile&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CURRENT_DOTENVENC_FILE_PATH&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;

    &lt;span class="c1"&gt;//Get diffs&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;performDiff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseBranchFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hasDiffs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;some&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;d&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;d&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;//Add outputs to GitHub Actions workflow&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;hasDiffs&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt;
        &lt;span class="nf"&gt;getAsMarkdown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;No differences exist between the files.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="nx"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;diffs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;core&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three environment variables are needed, which can be passed in directly from the workflow: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;BASE_DOTENVENC_FILE_PATH&lt;/code&gt;: the path to the base branch's encrypted file. This is set in the workflow as &lt;code&gt;./base/.env.enc&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;CURRENT_DOTENVENC_FILE_PATH&lt;/code&gt;: the path to the current branch's encrypted file. This is set in the workflow as &lt;code&gt;./.env.enc&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DOTENVENC_PASS&lt;/code&gt;: this is used to decrypt the files, as required by the package &lt;a href="https://github.com/tka85/dotenvenc" rel="noopener noreferrer"&gt;dotenvenc&lt;/a&gt;. In my demo repository, I have it set as a repo secret. &lt;/li&gt;
&lt;/ul&gt;
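&lt;p&gt;As a sketch, the script can fail fast before attempting any decryption when one of these is missing. The variable names follow the script above, but the helper itself is hypothetical:&lt;/p&gt;

```javascript
// Hypothetical helper: verify the required environment variables are present
// before attempting decryption, and fail with a readable message otherwise.
function assertRequiredEnv(env) {
  const required = ["BASE_DOTENVENC_FILE_PATH", "CURRENT_DOTENVENC_FILE_PATH", "DOTENVENC_PASS"];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
}
```

&lt;p&gt;Calling &lt;code&gt;assertRequiredEnv(process.env)&lt;/code&gt; at the top of &lt;code&gt;main&lt;/code&gt; turns a cryptic decryption failure into an obvious configuration error.&lt;/p&gt;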

&lt;p&gt;When this script is run, no output will be displayed to the user; instead, outputs are set directly for the GitHub Actions workflow to consume.&lt;/p&gt;

&lt;p&gt;Finally, we should add it to our &lt;code&gt;package.json&lt;/code&gt; for ease of use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"encrypt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./node_modules/.bin/dotenvenc -e"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"decrypt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./node_modules/.bin/dotenvenc -d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"diff-env-files"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"node diff-env-files.js"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: this is just an example; you could separate the decryption and diff steps if needed. At its core, this is just comparing a flat JSON file structure!&lt;/em&gt;&lt;/p&gt;
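&lt;p&gt;To illustrate, comparing a flat JSON structure boils down to set operations on the key names. This is only a sketch of the idea, not the article's actual helpers:&lt;/p&gt;

```javascript
// Sketch: diff two flat objects by key name. "Modified" means the key exists
// in both files but its value changed between them.
function diffFlat(base, current) {
  const baseKeys = Object.keys(base);
  const currentKeys = Object.keys(current);
  return {
    Added: currentKeys.filter((k) => !(k in base)),
    Removed: baseKeys.filter((k) => !(k in current)),
    Modified: baseKeys
      .filter((k) => k in current)
      .filter((k) => base[k] !== current[k]),
  };
}
```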

&lt;h4&gt;
  
  
  Bonus: Hiding sensitive keys
&lt;/h4&gt;

&lt;p&gt;Let's assume you have a list of keys whose names must never appear in the output, even when they are edited. No worries! We can edit the &lt;code&gt;diffs&lt;/code&gt; object before setting it as the output, using a replacer function and a comma-delimited list of keys to hide, supplied via &lt;code&gt;process.env&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the JavaScript command file, let's add this function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hideKeys&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;sensitiveKeysChanged&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="c1"&gt;//@example: DB_PASSWORD,SECRET_TOKEN&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SENSITIVE_KEYS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DOTENV_SENSITIVE_KEYS&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;entries&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(([&lt;/span&gt;&lt;span class="nx"&gt;section&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;SENSITIVE_KEYS&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;sensitiveKeysChanged&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;keyIndex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;section&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;indexOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;section&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;splice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;keyIndex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;section&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;section&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;sensitiveKeysChanged&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Other sensitive keys changed&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sensitiveKeysChanged&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; key(s)`&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, in the &lt;code&gt;main&lt;/code&gt; function, we can pipe the JSON from &lt;code&gt;performDiff&lt;/code&gt; through it to hide the additional keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;diffs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;hideKeys&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;performDiff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;baseBranchFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;currentBranchFile&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the &lt;code&gt;diffs&lt;/code&gt; output will look something like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;::set-output name=diffs::{"Modified":["MY_API_URL"],"Other sensitive keys changed":["2 key(s)"]}&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The list of sensitive keys can itself be supplied to the command as a repository secret, so the key names never need to be displayed!&lt;/p&gt;

&lt;h2&gt;
  
  
  The workflow in action
&lt;/h2&gt;

&lt;p&gt;Here is an example of a pull request that edits the file and executes the workflow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvtz55m4zlvx496kpiix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvtz55m4zlvx496kpiix.png" alt="An example of a pull request with a comment that displays changes to key names within environment files" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is another pull request that does &lt;em&gt;not&lt;/em&gt; edit the file, so the workflow is not run!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4nzpbtrnbbz243xv2wu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4nzpbtrnbbz243xv2wu.png" alt="An example of a pull request with no actions run against it" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions can run workflows against changed filesets in a pull request. &lt;a href="https://docs.github.com/en/actions/writing-workflows/workflow-syntax-for-github-actions#onpushpull_requestpull_request_targetpathspaths-ignore" rel="noopener noreferrer"&gt;See here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;We can compare encrypted files by decrypting them in a workflow, using a JS script to diff them, and then output a list of Additions, Deletions, and Modifications as a comment on the pull request. Additional sensitive data to hide can be specified using repository secrets!&lt;/li&gt;
&lt;li&gt;This can work for other files, like flat JSON file structures, with some modifications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/hammzj/gha-file-diff-demo" rel="noopener noreferrer"&gt;Here is a demo repository!&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>githubactions</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Load balancing Cypress tests without Cypress Cloud</title>
      <dc:creator>Zachary Hamm</dc:creator>
      <pubDate>Fri, 28 Feb 2025 20:41:59 +0000</pubDate>
      <link>https://dev.to/hammzj/load-balancing-cypress-tests-without-cypress-cloud-2one</link>
      <guid>https://dev.to/hammzj/load-balancing-cypress-tests-without-cypress-cloud-2one</guid>
      <description>&lt;p&gt;Recently I've been asked to work on a solution of efficiently running Cypress component tests on pull requests without taking a lot of time. At first, my standing solution was to just evenly spread out the files against a number of parallel jobs on GitHub Actions workflows, but there is a big discrepancy between the slowest job and the average job times. Thus, we've been wondering if there is a smarter way of evening out the runtimes.&lt;/p&gt;

&lt;p&gt;With that, I created a new plugin, &lt;a href="https://github.com/hammzj/cypress-load-balancer" rel="noopener noreferrer"&gt;cypress-load-balancer&lt;/a&gt;, which solves that problem. The plugin saves the duration of each test it runs and calculates an average, which can then be passed into a script; that script uses an algorithm to perform load balancing across a number of job runners.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a load balancer?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Load_balancing_(computing)" rel="noopener noreferrer"&gt;Wikipedia's summary&lt;/a&gt; is as such:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In computing, load balancing is the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The general approach of using a load balancer for tests
&lt;/h2&gt;

&lt;p&gt;Here is the basic sequence of steps needed to make proper use of load balancing results. A persistent load balancing map file, known as &lt;code&gt;spec-map.json&lt;/code&gt;, is saved on the host machine. The load balancer references that file and performs calculations to assign tests across a given number of runners. When all parallel test jobs complete, each produces a key-value list mapping test file names to their execution times; those results are merged back into the main spec map file, a new average duration is recalculated for each test file, and the original file on the host machine is overwritten. The updated spec map is then consumed on the next test runs, repeating this process over and over.&lt;/p&gt;

&lt;p&gt;For this tool, here are the general steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install and configure the plugin in the Cypress config.&lt;/strong&gt; When Cypress runs, it will be able to locally save the results of the spec executions per each runner, depending on &lt;code&gt;e2e&lt;/code&gt; or &lt;code&gt;component&lt;/code&gt; tests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Initialize the load balancer main map file in a persisted location that can easily be restored from cache.&lt;/strong&gt; This means the main file needs to live outside of the parallelized jobs so that it can be referenced &lt;em&gt;by&lt;/em&gt; those jobs, which save new results back to it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execute the load balancer against a number of runners.&lt;/strong&gt; The output instructs each parallelized job which specs to execute.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Execute each parallelized job that starts the Cypress testrunner with the list of spec files to run across each runner.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When the parallelized jobs complete, collect and save the output of the load balancing files from each job in a temporary location.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After all parallelized test jobs complete, merge their load balancing map results back into the persisted map file and cache it for later usage.&lt;/strong&gt; This is where the persisted file on the host machine gets overwritten with new results so that the next runs perform better. &lt;em&gt;(In a GitHub Actions run, this means that on pull request merge, the load balancing files from the base branch and the head branch need to be merged, then cached down to the base branch.)&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
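&lt;p&gt;The merge step at the end can be sketched roughly like this. The map shape used here, file names mapped to recorded durations and an average, is an assumption for illustration and not the plugin's exact schema:&lt;/p&gt;

```javascript
// Sketch: merge per-runner duration results into the main spec map and
// recalculate each file's average. The map shape is assumed for illustration.
function mergeSpecMaps(mainMap, runnerResults) {
  const merged = JSON.parse(JSON.stringify(mainMap)); // avoid mutating the input
  for (const results of runnerResults) {
    for (const [file, duration] of Object.entries(results)) {
      if (!merged[file]) merged[file] = { durations: [], average: 0 };
      merged[file].durations.push(duration);
      const d = merged[file].durations;
      merged[file].average = d.reduce((sum, n) => sum + n, 0) / d.length;
    }
  }
  return merged;
}
```

&lt;p&gt;The merged result then overwrites the persisted map, so the next balancing run works from the freshest averages.&lt;/p&gt;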

&lt;p&gt;So, for Docker Compose, a persistent volume needs to exist for the host's &lt;code&gt;spec-map.json&lt;/code&gt; to be saved. It can then run the load balancing script and execute a number of parallelized containers to run the separated Cypress tests. When each test job completes, the durations of its tests can be merged back into the original file to recalculate new averages.&lt;/p&gt;

&lt;p&gt;For GitHub Actions, it's a bit more complex. More on that later.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work for Cypress automated tests?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/hammzj/cypress-load-balancer?tab=readme-ov-file#setup" rel="noopener noreferrer"&gt;current installation guide&lt;/a&gt; as of February 2025 is as such:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Install the package to your project:&lt;/p&gt;


&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--save-dev&lt;/span&gt; cypress-load-balancer
yarn add &lt;span class="nt"&gt;-D&lt;/span&gt; cypress-load-balancer
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Add the following to your &lt;code&gt;.gitignore&lt;/code&gt; and other ignore files:&lt;/p&gt;


&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.cypress_load_balancer
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;In your Cypress configuration file, add the plugin separately to your &lt;code&gt;e2e&lt;/code&gt; configuration and also &lt;code&gt;component&lt;/code&gt;&lt;br&gt;
configuration, if you have one.&lt;br&gt;
This will register load balancing for the separate testing types.&lt;/p&gt;


&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;addCypressLoadBalancerPlugin&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cypress-load-balancer&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nf"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;e2e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setupNodeEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;addCypressLoadBalancerPlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;component&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setupNodeEvents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;addCypressLoadBalancerPlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cypress tests are run for e2e or component testing types.&lt;/li&gt;
&lt;li&gt;When the run completes, the durations and averages of all executed tests are added to &lt;code&gt;spec-map.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;spec-map.json&lt;/code&gt; can now be used by the included executable, &lt;code&gt;cypress-load-balancer&lt;/code&gt;, to perform load balancing against the current Cypress configuration and the tests that were executed. The tests are sorted from slowest to fastest and then assigned out per runner so that each runner's total execution time is as close as possible to the others. For example, with 3 runners and e2e tests:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;npx cypress-load-balancer --runners 3 --testing-type e2e&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The script will output an array of arrays of spec files balanced across 3 runners.&lt;/li&gt;

&lt;/ul&gt;
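&lt;p&gt;The sort-then-assign strategy can be sketched as a greedy algorithm: order the specs from slowest to fastest, then repeatedly hand the next spec to the runner with the smallest accumulated total. This is a simplified sketch; the actual CLI reads its averages from &lt;code&gt;spec-map.json&lt;/code&gt;:&lt;/p&gt;

```javascript
// Greedy load balancing sketch: assign each spec (slowest first) to the
// runner with the smallest accumulated duration so far. Input is a map of
// spec file names to average durations; output is an array of arrays.
function balance(specAverages, runnerCount) {
  const runners = Array.from({ length: runnerCount }, () => ({ total: 0, files: [] }));
  // Sort specs from slowest to fastest by their average duration
  const sorted = Object.entries(specAverages).sort(([, a], [, b]) => b - a);
  for (const [file, avg] of sorted) {
    // Pick whichever runner currently has the smallest total
    const lightest = runners.reduce((min, r) => (min.total > r.total ? r : min));
    lightest.files.push(file);
    lightest.total += avg;
  }
  return runners.map((r) => r.files);
}
```

&lt;p&gt;The greedy choice keeps runner totals close to one another without needing to compute an exact optimal partition.&lt;/p&gt;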

&lt;h3&gt;
  
  
  Scripts
&lt;/h3&gt;

&lt;p&gt;The package includes scripts, available via &lt;code&gt;npx cypress-load-balancer&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
shell
$: npx cypress-load-balancer --help
cypress-load-balancer

Performs load balancing against a set of runners and Cypress specs

Commands:
  cypress-load-balancer             Performs load balancing against a set of
                                    runners and Cypress specs          [default]
  cypress-load-balancer initialize  Initializes the load balancing map file and
                                    directory.
  cypress-load-balancer merge       Merges load balancing map files together
                                    back to an original map.

Options:
      --version                Show version number                     [boolean]
  -r, --runners                The count of executable runners to use
                                                             [number] [required]
  -t, --testing-type           The testing type to use for load balancing
                               [string] [required] [choices: "e2e", "component"]
  -F, --files                  An array of file paths relative to the current
                               working directory to use for load balancing.
                               Overrides finding Cypress specs by configuration
                               file.
                               If left empty, it will utilize a Cypress
                               configuration file to find test files to use for
                               load balancing.
                               The Cypress configuration file is implied to
                               exist at the base of the directory unless set by
                               "process.env.CYPRESS_CONFIG_FILE"
                                                           [array] [default: []]
      --format, --fm           Transforms the output of the runner jobs into
                               various formats.
                               "--transform spec": Converts the output of the
                               load balancer to be as an array of "--spec
                               {file}" formats
                               "--transform string": Spec files per runner are
                               joined with a comma; example:
                               "tests/spec.a.ts,tests/spec.b.ts"
                               "--transform newline": Spec files per runner are
                               joined with a newline; example:
                                "tests/spec.a.ts
                               tests/spec.b.ts"
                                          [choices: "spec", "string", "newline"]
      --set-gha-output, --gha  Sets the output to the GitHub Actions step output
                               as "cypressLoadBalancerSpecs"           [boolean]
  -h, --help                   Show help                               [boolean]

Examples:
  Load balancing for 6 runners against      cypressLoadBalancer -r 6 -t
  "component" testing with implied Cypress  component
  configuration of `./cypress.config.js`
  Load balancing for 3 runners against      cypressLoadBalancer -r 3 -t e2e -F
  "e2e" testing with specified file paths   cypress/e2e/foo.cy.js
                                            cypress/e2e/bar.cy.js
                                            cypress/e2e/wee.cy.js


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Example on GitHub Actions
&lt;/h2&gt;

&lt;p&gt;I included two workflows in the package that show how this can work for tests executed on pull requests. &lt;/p&gt;

&lt;p&gt;Generally, here is what occurs:&lt;/p&gt;
&lt;h3&gt;
  
  
  Running tests on pull requests
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;get_specs:

&lt;ul&gt;
&lt;li&gt;The workflow attempts to restore a cached load balancing map. It tries the source branch first, then the target branch, and if neither can be found, it initializes a basic map of the files to be run.&lt;/li&gt;
&lt;li&gt;Load balancing is performed based on the user's input of the number of jobs to use. It outputs an array of specs for each runner.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;cypress_run_e2e:

&lt;ul&gt;
&lt;li&gt;These are the parallelized jobs that run a subset of the files obtained from the load balancer output.&lt;/li&gt;
&lt;li&gt;When this job completes, it produces a temporary &lt;code&gt;spec-map.json&lt;/code&gt; of just those files, and uploads the artifact.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;merge_cypress_load_balancing_maps:

&lt;ul&gt;
&lt;li&gt;After all parallel jobs complete, this job downloads each one's temporary &lt;code&gt;spec-map.json&lt;/code&gt; artifact, merges them into the branch's map file, and then caches &lt;strong&gt;and&lt;/strong&gt; uploads it. This is how the map can be saved per branch.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
name: Testing load balancing Cypress E2E tests

on:
  pull_request:
  workflow_dispatch:
    inputs:
      runners:
        type: number
        description: Number of runners to use for parallelization
        required: false
        default: 3
      debug:
        type: boolean
        description: Enables debugging on the job and on the cypress-load-balancer script.

env:
  runners: ${{ inputs.runners || 3}}
  CYPRESS_LOAD_BALANCER_DEBUG: ${{ inputs.debug || false }}

jobs:
  get_specs:
    runs-on: ubuntu-22.04
    outputs:
      e2e_specs: ${{ steps.e2e-cypress-load-balancer.outputs.cypressLoadBalancerSpecs }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}

      - run: |
          yarn install
          yarn build

      - name: Get cached load-balancing map
        id: cache-restore-load-balancing-map
        uses: actions/cache/restore@v4
        with:
          fail-on-cache-miss: false
          path: .cypress_load_balancer/spec-map.json
          key: cypress-load-balancer-map-${{ github.head_ref || github.ref_name }}-${{ github.run_id }}-${{ github.run_attempt }}
          # Restore keys:
          ## 1. Same key from previous workflow run
          ## 2. Key from pull request base branch most recent workflow. Used for the "base" map, if one exists
          restore-keys: |
            cypress-load-balancer-map-${{github.head_ref || github.ref_name }}-${{ github.run_id }}-${{ github.run_attempt }}
            cypress-load-balancer-map-${{github.head_ref || github.ref_name }}-${{ github.run_id }}-
            cypress-load-balancer-map-${{github.head_ref || github.ref_name }}-
            cypress-load-balancer-map-${{ github.base_ref }}-

      - name: Perform load balancing for E2E tests
        id: e2e-cypress-load-balancer
        #TODO: this can eventually be replaced with a GitHub action. The executable should be used for Docker and other CI/CD tools
        run: npx cypress-load-balancer -r ${{ env.runners }} -t e2e --fm string --gha
        #run: echo "specs=$(echo $(npx cypress-load-balancer -r ${{ env.runners }} -t e2e --fm string | tail -1))" &amp;gt;&amp;gt; $GITHUB_OUTPUT

      - name: "DEBUG: read restored cached spec-map.json file"
        if: ${{ env.CYPRESS_LOAD_BALANCER_DEBUG == 'true' }}
        run: cat .cypress_load_balancer/spec-map.json

  cypress_run_e2e:
    runs-on: ubuntu-22.04
    needs: get_specs
    strategy:
      fail-fast: false
      matrix:
        spec: ${{ fromJson(needs.get_specs.outputs.e2e_specs) }}
    steps:
      - name: Generate uuid to use uploading a unique load balancer map artifact
        id: generate-uuid
        run: echo uuid="$(uuidgen)" &amp;gt;&amp;gt; $GITHUB_OUTPUT

      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}

      - name: Cypress run e2e tests
        uses: cypress-io/github-action@v6
        with:
          browser: electron
          build: yarn build
          spec: ${{ matrix.spec }}
          # Fix for https://github.com/cypress-io/github-action/issues/480
          config: videosFolder=/tmp/cypress-videos

      - name: Upload temp load balancer map
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: ${{steps.generate-uuid.outputs.uuid }}-cypress-load-balancer-map-temp-from-parallel-job
          path: .cypress_load_balancer/spec-map.json

  merge_cypress_load_balancing_maps:
    runs-on: ubuntu-22.04
    needs: [get_specs, cypress_run_e2e]
    if: ${{ needs.get_specs.result == 'success' }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}

      - run: |
          yarn install
          yarn build

      - name: Get cached load-balancing map
        id: cache-restore-load-balancing-map
        uses: actions/cache/restore@v4
        with:
          fail-on-cache-miss: false
          path: .cypress_load_balancer/spec-map.json
          key: cypress-load-balancer-map-${{ github.head_ref || github.ref_name }}-${{ github.run_id }}-${{ github.run_attempt }}
          # Restore keys:
          ## 1. Same key from previous workflow run
          ## 2. Key from pull request base branch most recent workflow
          restore-keys: |
            cypress-load-balancer-map-${{github.head_ref || github.ref_name }}-${{ github.run_id }}-${{ github.run_attempt }}
            cypress-load-balancer-map-${{github.head_ref || github.ref_name }}-${{ github.run_id }}-
            cypress-load-balancer-map-${{github.head_ref || github.ref_name }}-
            cypress-load-balancer-map-${{ github.base_ref }}-

      - name: If no map exists for either the base branch or the current branch, then initialize one
        id: initialize-map
        run: npx cypress-load-balancer initialize
        if: ${{ hashFiles('.cypress_load_balancer/spec-map.json') == '' }}

      - name: Download temp maps
        uses: actions/download-artifact@v4
        with:
          pattern: "*-cypress-load-balancer-map-temp-from-parallel-job"
          path: ./cypress_load_balancer/temp
          merge-multiple: false

      - name: Merge files
        run: npx cypress-load-balancer merge -G "./cypress_load_balancer/temp/**/spec-map.json"

      - name: Save overwritten cached load-balancing map
        id: cache-save-load-balancing-map
        uses: actions/cache/save@v4
        with:
          #This saves to the workflow run. To save to the base branch during pull requests, this needs to be uploaded on merge using a separate action
          # @see `./save-map-on-to-base-branch-on-pr-merge.yml`
          key: cypress-load-balancer-map-${{ github.head_ref || github.ref_name }}-${{ github.run_id }}-${{ github.run_attempt }}

          path: .cypress_load_balancer/spec-map.json
      # This is to get around the issue of not being able to access cache on the base_ref for a PR.
      # We can use this to download it in another workflow run: https://github.com/dawidd6/action-download-artifact
      # That way, we can merge the source (head) branch's load balancer map to the target (base) branch.
      - name: Upload main load balancer map
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: cypress-load-balancer-map
          path: .cypress_load_balancer/spec-map.json

      - name: "DEBUG: read merged spec-map.json file"
        if: ${{ env.CYPRESS_LOAD_BALANCER_DEBUG == 'true' }}
        run: cat .cypress_load_balancer/spec-map.json


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Merging back on pull requests
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;When the pull request is merged, the newest map uploaded from the source branch's testing workflow is downloaded, merged with the base branch's map, and then cached to the base branch. This allows it to be reused on new pull requests to that branch.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# See https://github.com/brennerm/github-actions-pr-close-showcase/
name: Save load balancing map from head branch to base branch on pull request merge
on:
  pull_request:
    types: [closed]

jobs:
  save:
    # this job will only run if the PR has been merged
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo PR #${{ github.event.number }} has been merged

      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}

      - run: |
          yarn install
          yarn build

      - name: Download load-balancing map from head branch using "cross-workflow" tooling
        id: download-load-balancing-map-head-branch
        uses: dawidd6/action-download-artifact@v8
        with:
          workflow: cypress-parallel.yml
          # Optional, will get head commit SHA
          pr: ${{ github.event.pull_request.number }}
          name: cypress-load-balancer-map
          path: .cypress_load_balancer

      - name: Restore cached load-balancing map on base branch
        id: cache-restore-load-balancing-map-base-branch
        uses: actions/cache/restore@v4
        with:
          fail-on-cache-miss: false
          path: ./temp/.cypress_load_balancer/spec-map.json
          key: cypress-load-balancer-map-${{ github.base_ref }}-${{ github.run_id }}-${{ github.run_attempt }}
          restore-keys: |
            cypress-load-balancer-map-${{ github.base_ref }}-

      - name: Merge files
        run: npx cypress-load-balancer merge -G "./temp/.cypress_load_balancer/spec-map.json"

      - name: Save merged load-balancing map
        uses: actions/cache/save@v4
        with:
          path: .cypress_load_balancer/spec-map.json
          key: cypress-load-balancer-map-${{ github.base_ref }}-${{ github.run_id }}-${{ github.run_attempt }}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;And that's it! This is probably a very niche example, but the general approach should be the same:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save a spec map on the host machine&lt;/li&gt;
&lt;li&gt;Perform load balancing against the spec map&lt;/li&gt;
&lt;li&gt;Run parallel test jobs, each executing the subset of files assigned to it by the load balancer&lt;/li&gt;
&lt;li&gt;Collect their results &lt;/li&gt;
&lt;li&gt;Merge those results back to the host map and recalculate the average&lt;/li&gt;
&lt;li&gt;Repeat!&lt;/li&gt;
&lt;/ul&gt;
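The loop above can be sketched in a handful of lines. This is a hypothetical greedy balancer, not the package's actual implementation, and the spec map shape (file path mapped to an average duration) is assumed purely for illustration:

```javascript
// Greedy "longest processing time first" balancing: sort specs by average
// duration, then always hand the next spec to the least-loaded runner.
function balance(specMap, runnerCount) {
  const runners = Array.from({ length: runnerCount }, () => ({ total: 0, specs: [] }));
  const files = Object.entries(specMap).sort(([, a], [, b]) => b.average - a.average);
  for (const [file, { average }] of files) {
    // Pick the runner with the smallest accumulated time so far.
    const lightest = runners.reduce((min, r) => (r.total < min.total ? r : min));
    lightest.specs.push(file);
    lightest.total += average;
  }
  return runners.map((r) => r.specs);
}

// Assumed map shape: file path -> average runtime in seconds.
const specMap = {
  "cypress/e2e/a.cy.js": { average: 90 },
  "cypress/e2e/b.cy.js": { average: 60 },
  "cypress/e2e/c.cy.js": { average: 50 },
  "cypress/e2e/d.cy.js": { average: 40 },
};
console.log(balance(specMap, 2));
// [ [ 'cypress/e2e/a.cy.js', 'cypress/e2e/d.cy.js' ],
//   [ 'cypress/e2e/b.cy.js', 'cypress/e2e/c.cy.js' ] ]
```

Sorting longest-first keeps one slow spec from landing on an already-heavy runner, which is what narrows the gap between the fastest and slowest jobs.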

</description>
      <category>cypress</category>
      <category>automation</category>
      <category>testing</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Misconceptions with testing paradigms and how to address them</title>
      <dc:creator>Zachary Hamm</dc:creator>
      <pubDate>Thu, 05 Sep 2024 15:48:19 +0000</pubDate>
      <link>https://dev.to/hammzj/misconceptions-with-testing-paradigms-and-how-to-address-them-2kok</link>
      <guid>https://dev.to/hammzj/misconceptions-with-testing-paradigms-and-how-to-address-them-2kok</guid>
      <description>&lt;p&gt;The realm of quality is a much misunderstood one and conclusions are drawn without understanding its ability to protect a product. It should not be sacrificed in order to meet deadlines or bring that product to market -- what if it work as intended? Many of us have seen bugs appear after a release, and probably admit that there wasn't enough time to test for it. Testing is a major part of a quality plan and the workloads of many, so being able to explain why it is necessary could very well alter the opinion in favor of it. &lt;/p&gt;

&lt;p&gt;That is why I want to present some possibly controversial statements on ideas around testing. Some of these points are not the first thing that comes to mind when thinking of how testing is performed, but they are incredibly important to understand. I hope this article can bring you insight when the hard questions are being asked, and put the power back in the quality process.&lt;/p&gt;

&lt;p&gt;It is separated into three parts.&lt;/p&gt;




&lt;p&gt;The first part deals with &lt;em&gt;how&lt;/em&gt; to run tests and &lt;em&gt;what&lt;/em&gt; they are. &lt;/p&gt;

&lt;h2&gt;
  
  
  Testing is mainly used as simply checking for a pass/fail state, but it is just one part of a larger method.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Application testing is an alert mechanism for the rate of change between the system and a set of defined actions performed upon it. It is but one part of the &lt;a href="https://www.amnh.org/explore/videos/the-scientific-process" rel="noopener noreferrer"&gt;scientific method&lt;/a&gt;: most kinds of software testing, however, begin with expectations rather than predictions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Testing, in itself, is just an experiment: it executes a trial to determine if a prediction will occur based on earlier observations. It is but one part of the &lt;a href="https://www.amnh.org/explore/videos/the-scientific-process" rel="noopener noreferrer"&gt;scientific method&lt;/a&gt; (&lt;a href="https://en.wikipedia.org/wiki/Scientific_method" rel="noopener noreferrer"&gt;additional Wikipedia article&lt;/a&gt;). In software engineering, a &lt;strong&gt;check&lt;/strong&gt;, or assertion, is code that verifies whether certain effects from an action meet given criteria. It usually completes with a pass or fail state. Questions are answered in the format of,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Under&lt;/em&gt; &lt;em&gt;X&lt;/em&gt; &lt;em&gt;circumstances, does the set of&lt;/em&gt; &lt;em&gt;Y&lt;/em&gt; &lt;em&gt;actions cause&lt;/em&gt; &lt;em&gt;Z&lt;/em&gt; &lt;em&gt;outcome to occur?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
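To make that question format concrete, here is a minimal sketch of a check; the &lt;code&gt;Cart&lt;/code&gt; class and its behavior are hypothetical stand-ins for application code under test:

```javascript
// A minimal "check": under X circumstances, does the set of Y actions
// cause Z outcome to occur? `Cart` is a hypothetical stand-in for the
// application code being tested.
class Cart {
  constructor() {
    this.items = [];
  }
  add(item) {
    this.items.push(item);
  }
}

// X: the circumstances -- an empty cart.
const cart = new Cart();
// Y: the action -- adding a single item.
cart.add("apple");
// Z: the check -- it completes with a pass or fail state.
console.assert(cart.items.length === 1, "expected exactly one item in the cart");
```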

&lt;p&gt;The benefit of these answers is that they define a time when our predictions are met successfully. If, at a later point, a change causes those checks to fail expectations, we are alerted that the change caused an unhandled event. In other words, &lt;strong&gt;a test demonstrates the rate of change between when a system's observed behavior did or did not meet expectations under certain conditions,&lt;/strong&gt; as compared to measuring any possible outcomes that can occur.&lt;/p&gt;

&lt;p&gt;Making the conclusion that a failing test found an issue in the application is inaccurate. There could very well be a bug, but there could also be an issue with the test not accounting for new change. A failing test means that the path it expected was not followed. That is why viewing them as an alert mechanism to an unhandled change is more accurate.&lt;/p&gt;

&lt;p&gt;Take, for example, comparing a trial involving a chemical reaction with software &lt;strong&gt;functional&lt;/strong&gt; testing. The early phases of a chemical trial may involve recording outcomes without requiring explicit behaviors. What is desired would be defined later on, &lt;em&gt;based on those observations.&lt;/em&gt; Building an app, however, may have requirements defined at the earliest phases of development. Many types of software testing need those checks to be in place early since they begin with a relation to a known behavior, like a product requirement.&lt;/p&gt;

&lt;p&gt;This is all for asserting upon &lt;em&gt;knowable&lt;/em&gt; items. There are other categories that are named as "testing," but their lifecycle begins much earlier. They initially pose questions that are open-ended and outcome-oriented. Performance testing measures the ability of a system to perform under duress so it can ask questions like, &lt;em&gt;“can it process data in&lt;/em&gt; &lt;em&gt;X&lt;/em&gt; &lt;em&gt;time when it is stressed?”&lt;/em&gt; Chaos testing helps examine the stability of a system to ask, &lt;em&gt;“Will the system remain available if&lt;/em&gt; &lt;em&gt;X&lt;/em&gt; &lt;em&gt;happens to it?”&lt;/em&gt; Frontend accessibility testing observes behaviors to predict if the application can be fully utilized in a non-standard format.&lt;/p&gt;

&lt;p&gt;Each of these categories demonstrates that testing is not made for just the present. When a test is added, it is made to be repeatable. If the system is behaving today, how can we know that it will behave tomorrow?&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing a feature is an approximation of a situation.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; There are many ways to reach an outcome, with some being closer than others to how an actual situation could be observed. Therefore, one must estimate efficiently how to cause and detect it. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For example, imagine checking a theme switcher button in a web app. Our test asks, &lt;em&gt;"Did clicking the button once change the theme?"&lt;/em&gt; There are many ways to answer this depending on which actions we take and what we assert upon. Even at a functional level, we could try:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clicking the button&lt;/li&gt;
&lt;li&gt;performing a &lt;code&gt;touch&lt;/code&gt; action for a mobile device&lt;/li&gt;
&lt;li&gt;invoking the &lt;code&gt;click&lt;/code&gt; listener directly&lt;/li&gt;
&lt;li&gt;forcing the emit of the &lt;code&gt;click&lt;/code&gt; event&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since the underlying &lt;strong&gt;click&lt;/strong&gt; actions were originally tested within their source code, we will assume that a &lt;code&gt;click&lt;/code&gt; is the easiest path. The other ways are also correct, however. What changes is how much we "wrap" around the thing we want to test -- the more “wrapping code” we remove, the less we rely on other functionality that also needs to be correct in order to observe the effects from the true "source" of our test.&lt;/p&gt;

&lt;p&gt;It can become even more complicated when writing assertions, because they map our predictions as an answer to our question. We could check for&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a data ID on an element that denotes "light" or "dark" theme&lt;/li&gt;
&lt;li&gt;the color of an element that is affected by theme&lt;/li&gt;
&lt;li&gt;a stubbed click listener, to test whether the method is called&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In feature testing, these assertions are much less direct because we attempt to map them as answers to a business requirement. A requirement might state that an element's color shall be light gray or dark gray based on the mode, and another might state that a &lt;code&gt;Theme&lt;/code&gt; element is updated when a button is clicked. We would need to answer the question with a test that best represents the user interaction.&lt;/p&gt;

&lt;p&gt;As scenarios become more complex and integrated in highly-connected environments, the paths towards asserting an outcome become much more variable. Therefore, an engineer should choose the most direct way to evaluate the scenario in the context of its environment. In this case, clicking the button and checking if the color changed is probably the easiest path -- but is it the most accurate and stable? Or does another path provide better insight? &lt;/p&gt;
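A framework-free sketch can show the least-wrapped of the paths listed above; &lt;code&gt;ThemeToggle&lt;/code&gt; and its &lt;code&gt;theme&lt;/code&gt; property are hypothetical stand-ins, not a real app's component:

```javascript
// Hypothetical stand-in for the component under test. Invoking the click
// listener directly (one of the paths listed above) removes the "wrapping
// code" of a real browser click or touch event.
class ThemeToggle {
  constructor() {
    this.theme = "light"; // plays the role of a data ID denoting the theme
  }
  onClick() {
    this.theme = this.theme === "light" ? "dark" : "light";
  }
}

const toggle = new ThemeToggle();
toggle.onClick(); // the "click" action, with all wrapping layers removed
console.log(toggle.theme); // "dark" -- asserting on the data ID, not a color
```

Asserting on the data ID rather than a rendered color keeps the check independent of styling details, at the cost of assuming the styling layer itself is correct.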

&lt;h2&gt;
  
  
  A test is arbitrary unless sufficiently described.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; The title of a test needs to succinctly describe its actions or else it loses value.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Test code, like application code, is essentially grouped functions that execute a static flow. It runs our entire trial from start to finish. Yet, unlike application code, these flows are situational. Remember from earlier that a test function can be used as an alert mechanism. It is imperative, then, that test code is described succinctly in terms of what is being &lt;strong&gt;checked&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unlike application code, test code needs to define its &lt;strong&gt;situation&lt;/strong&gt;, or scenario. Application code has established paradigms by which it can be separated into understandable and maintainable parts, meaning smaller functions, which help to explain each individual action with high specificity. However, because a test function is the culmination of a chain of actions, it can be difficult to ascertain what it &lt;em&gt;does&lt;/em&gt;. Its name explains its value. &lt;/p&gt;

&lt;p&gt;Naming a test function &lt;code&gt;clickingTheSubmitButtonLogsInAnExistingUser&lt;/code&gt; rather than &lt;code&gt;clickSubmitTest&lt;/code&gt; adds succinct definition. From &lt;code&gt;clickSubmitTest&lt;/code&gt;, almost nothing can be extracted: not its actions, not its outcome, nothing past the notion that it is a test. It communicates little compared to the first option. &lt;/p&gt;

&lt;p&gt;Luckily, there are some suggestions that help define test code. The Arrange-Act-Assert pattern explains how to arrange a test, and frameworks like JavaScript’s Jest or Mocha, Ruby’s RSpec, and Python’s Pytest allow for organizing tests into contexts. These contexts build a hierarchy that explains each function at a human-readable level (assuming the tests are written to be, well, human-readable). BDD frameworks using Gherkin can describe both a scenario and its actions in a human-readable format, and demand a higher degree of effective language.&lt;/p&gt;
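The naming advice and the Arrange-Act-Assert pattern can be sketched together; &lt;code&gt;login&lt;/code&gt; and its user store are hypothetical stand-ins for application code:

```javascript
// Hypothetical application code under test.
const users = { "jo@example.com": "hunter2" };
function login(email, password) {
  return users[email] === password;
}

// The name states the action and the expected outcome, so a failure report
// is readable without opening the test body.
function clickingTheSubmitButtonLogsInAnExistingUser() {
  // Arrange: an existing user with known credentials.
  const email = "jo@example.com";
  const password = "hunter2";
  // Act: attempt the login, as the submit button's handler would.
  const loggedIn = login(email, password);
  // Assert: the existing user is logged in.
  console.assert(loggedIn, "expected the existing user to be logged in");
  return loggedIn;
}

clickingTheSubmitButtonLogsInAnExistingUser();
```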




&lt;p&gt;The second part here deals with &lt;em&gt;where&lt;/em&gt; tests are run. Everything exists within an environment; for a test, it is the place that provides the conditions of its initial state, our “control” situation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Higher degrees of manageability are directly related to greater assumptions of reality.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Testing closer to the source code involves assumptions of a situation that may not match what actually happens.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Let’s start by stating that with more externalities come more possibilities to assume. That means testing closer to the source leads to less variability and more reliable tests. The highest degree of control is over what you can change &lt;strong&gt;directly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Moving up in environments, a test’s initial state becomes more variable. Each external item introduced is incorporated into the &lt;strong&gt;initial state&lt;/strong&gt; for our test. These include other systems and applications, but also the number of users allowed to access them. Each user also has their own unique circumstances that form the conditions surrounding their own environment. When testing, we assume parts of the initial state to be accurate without actually knowing that they are. Even unit tests run on one system may not match a second system, since the two are not the same environment.&lt;/p&gt;

&lt;p&gt;Now, as external items are integrated, that degree of control is reduced and variability is heightened. Because we normally cannot directly control any external items outside of the application, we must make assumptions about their behaviors. User permissions, connectivity issues, bugs, and system availability limit our control over them. We may be able to indirectly influence them through injecting data and other flows that set them in a wanted state, but we cannot force an outcome. If we can influence an item into a wanted state, then we assume it works as expected, and thus, we regain some control over &lt;em&gt;our&lt;/em&gt; test. This is not always possible, though.&lt;/p&gt;

&lt;p&gt;Finally, production is just another environment where our application exists. Because what matters is where a user interacts with it, a user’s environment is an extension beyond our understanding of it. That leads to a very crucial sub-point...&lt;/p&gt;

&lt;h3&gt;
  
  
  Sub-point: Production is &lt;em&gt;not&lt;/em&gt; reality.
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Reality is the situation in which an end user observed the behavior of the application. The only way to measure that situation is by approximating it. Production is just a name for a place that allows the application to be available to the general public, but it is only one layer within the user's own environment.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;“Production” is just the name of an environment where we’ve agreed to release the application to the greatest set of users: the public. They could be either human users or other systems whose actions alter reality. However, production is only an abstraction of reality, because only the environment in which a user interacts with the application will produce outcomes that affect others. Production makes assumptions that still allow for expected system behavior, like any other environment below it, yet it does not &lt;em&gt;define&lt;/em&gt; the user’s environment. Our production application is influenced by everything within the user's environment.&lt;/p&gt;

&lt;p&gt;A medical drug could be effective for 99 percent of people, but cause side effects in the last percent. A drug, once produced, does not cause effects until it is taken by a person. Compared to a software production environment, having a full test set may catch those "side effects" that happen for our one percent of users. We make assumptions that the most reliable parts of the system (meaning, the parts with the most trusted tests) should work for any user.&lt;/p&gt;

&lt;p&gt;When a software bug is raised by a customer, the only way to determine how it was caused is to approximate &lt;em&gt;how&lt;/em&gt; that person observed its occurrence. We can only ever approximate a scenario within an environment because we did not observe it originally. Thus, we &lt;em&gt;estimate&lt;/em&gt; as close as possible to what actually happened.&lt;/p&gt;

&lt;p&gt;For a web app, this means we attempt to account for the conditions that created the initial state of a user's observation: their browser type, network connectivity, timings, physical location, device hardware, etc. But again, production is not necessarily the &lt;em&gt;user’s environment&lt;/em&gt;. Production is an abstraction of one because it exists as only a single layer there. Thus, if a user encounters an issue, then that user experienced it in their own version of reality constructed with their own conditions, actions, timings, and assumptions. &lt;/p&gt;

&lt;p&gt;We may not be able to encounter the bug even when executing those same actions, unless their initial state can be mimicked. If not, then we did not make accurate assumptions about their environment. Being unable to recreate an issue does not make it less real, but instead means we cannot define its situation. &lt;/p&gt;

&lt;p&gt;Production is not the end result. It is where “public” users can experience the application within the conditions of their environment. &lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating external systems assumes a level of trust has been met.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Integrating a system involves trust that it acts as described by its maintainers. Testing is a commonly agreed method for which trust can be derived. Thus, utilizing a system has an implicit agreement where the engineer believes that a certain level of trust has been fulfilled.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If we involve an external system that we do not directly test, then we agree that it works for our use cases. Each integration of our application into a higher environment means we need to account for a greater number of systems that we cannot directly control. That also means we trust in the functionality of those systems because their behavior meets their descriptions.&lt;/p&gt;

&lt;p&gt;There are many ways trust can be derived, with some being more transparent than others: the developers of the application could say it works without making their code public, a third party can use it and give it a stamp of approval, or I could use the application directly and check that it works the way I want. However, each of these ways includes executing trials, recorded or not. If no tests or trials are performed, then how can one predict what will occur in reality? Since test code must use application code to make a prediction, wouldn’t it be an independent entity from the application? &lt;/p&gt;

&lt;p&gt;If we agree that test code forms trust surrounding the functionality of the system, we also agree that the test code can form a basis of trust in itself. Its inherent purpose is to conduct trials and alert the results, so passing tests means that we expect its trials to be conducted properly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sub-point: Test code may be considered an independent entity from the application.
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Test execution code is considered trustworthy and accurate when it performs its duties to ensure the stability of a system — thus, when effective, it could be considered an external system that derives trust in itself.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Being able to see the results of other engineers' tests builds trust since we agree there is no need for additional testing of that system. But we also did not write the test code, and running the test may not explain how it was conducted. What if they are inaccurate?&lt;/p&gt;

&lt;p&gt;Test code is implied to be correct and carries a degree of honesty with it. Application code itself does not validate its own functionality alone. To write tests that make correct predictions about that functionality means that the writers are trusted to be honest, since we may not see the actions taken in the test. The test code is a third party that has no bond with the application, and also exists as an external system itself because it both requires and affects the application. &lt;/p&gt;

&lt;p&gt;Unreliable tests, unaccounted-for situations, and misleading test titles lessen that bond. Undefined functionality and scenarios are understood to be hard to conduct, but core functionality as described by its maintainers &lt;strong&gt;must be thoroughly tested for well-defined situations.&lt;/strong&gt; Anything less can harm the integrity of the owners of the software. Writing tests to always pass, regardless of their trials, is deceptive behavior. At that point, the external system is untrustworthy because it fails to explain the behavior of the application it supports.&lt;/p&gt;

&lt;p&gt;I always like to ask the question, “who tests the tests?” It's hard to do -- the chain of tests would grow forever. The test code must itself be checked for correctness. Unreliable functionality may not mean the test writer is dishonest, but it does mean that the application is difficult to trust. There may be real reasons why certain scenarios are missed — they cannot be recreated, external systems misbehave, or the application’s behavior is not well-defined enough to design a test trial. A test’s value is determined as a cost relationship — how much value do its checks provide versus the complexity of maintaining them? It is subjective and hard to calculate.&lt;/p&gt;




&lt;p&gt;This third and final part deals with the &lt;em&gt;who&lt;/em&gt; and &lt;em&gt;when&lt;/em&gt; of running tests. Testing is just one part of quality, and knowing where it fits is of utmost importance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Everyone owns quality, but those who provide the product bear the ultimate responsibility.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; While testers perform the fullest duties of quality, it is something everyone should advocate and own. The ones who provide the product to others bear the fullest responsibility for communicating that the product works as intended.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is the responsibility of everyone involved in making a product to ensure it works as described. Quality is owned by everyone. If quality work must be restricted in order to release a product, the fault for faults discovered later does not lie with the quality department. Those who chose to provide the product to others must answer for any problems that arise.&lt;/p&gt;

&lt;p&gt;This goes for letting improper tests exist as well. If a tester does not produce accurate or trustworthy tests, that person is responsible for their own actions and harms both the product and the organization. However, letting those tests remain in place to sign off on behaviors is the responsibility of those who allow them to stay. Tests that are skipped, or left incomplete for the sake of meeting deadlines, likewise fall to those who allowed them to be cut.&lt;/p&gt;

&lt;h2&gt;
  
  
  Measuring the usefulness of quality in the present is unrealized, but constant discovery can bring future stability.
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt; Tests need to have a specific focus, large test sets need to cover expected situations, and discoverability allows for constant attention to observing new behaviors. Alone, they are not enough, but together, they forge the path to ensuring high levels of application stability and sustainability. Testing is never complete -- it is an ongoing effort.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Writing a test to pass for the present is great! We need to know it works. However, writing tests to make sure those checks are maintained upon future changes to an application is &lt;em&gt;necessary&lt;/em&gt;. Each change to the base application brings in uncertainty, much like adding external sources. If test code is meant to be trustworthy, a test producing a consistent and reliable check creates that trust. A failing test means an unexpected change was detected -- and that is all it tells us.&lt;/p&gt;

&lt;p&gt;Adding more tests means making more predictions (or business scenarios), but does not inherently create reliable coverage. A combination of multiple test disciplines carries more trust than many permutations of a single type of test. There needs to be a focus placed — an intention defined — or we cannot understand &lt;strong&gt;what&lt;/strong&gt; we evaluate.&lt;/p&gt;

&lt;p&gt;However, a test’s cost is determined by how accurately it answers a prediction, weighed against its complexity to maintain.&lt;/p&gt;

&lt;p&gt;Designing scenarios in the present to be executed in the future is great, but those scenarios become static — they only check for what we already know. They can never account for new behaviors unless new tests are created. Thus, testing is not static — it is an ongoing source for discovering new scenarios, new behaviors, new outcomes. Just because it works as intended for &lt;em&gt;us&lt;/em&gt; does not mean it works as intended for &lt;em&gt;others&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Choosing to skip testing in the present may mean deadlines are hit, but the effects will cost more over time. &lt;strong&gt;A bug found tomorrow is an issue not fixed today.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Quality is a combination of many facets, and testing is a major portion. Quality is always important because it does not end at the application’s release to “production.” It does not end with the user. &lt;strong&gt;It continues for as long as the application exists.&lt;/strong&gt; Therefore, &lt;strong&gt;always continue to test as if its users require greater trust.&lt;/strong&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Testing is mainly used as simply checking for a pass/fail state, but it is just one part of a larger method.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Summary&lt;/strong&gt;: Application testing is an alert mechanism for the rate of change between the system and a set of defined actions performed upon it. It is just one part of the &lt;a href="https://www.amnh.org/explore/videos/the-scientific-process" rel="noopener noreferrer"&gt;scientific method&lt;/a&gt;: most kinds of software testing, however, begin with expectations rather than predictions.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;A test is arbitrary unless sufficiently described.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; The title of a test needs to succinctly describe its actions or else it loses value.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Higher degrees of manageability are directly related to greater assumptions of reality.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; Testing closer to the source code involves assumptions of a situation that may not match what actually happens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sub-point:&lt;/strong&gt; Production is not reality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; Reality is the situation in which an end user observes the behavior of the application. The only way to measure that situation is by approximating it. Production is just a name for a place that allows the application to be available to the general public, but it is just a layer within the user's own environment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Integrating external systems assumes a level of trust has been met.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; Integrating a system involves trust that it acts as described by its maintainers. Testing is a commonly agreed method for which trust can be derived. Thus, utilizing a system has an implicit agreement where the engineer believes that a certain level of trust has been fulfilled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sub-point:&lt;/strong&gt; Test code may be considered an independent entity from the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; Test execution code is considered trustworthy and accurate when it performs its duties to ensure the stability of a system — thus, it can be considered an external system whose trust is derived from itself.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Everyone owns quality, but those who provide the product bear the ultimate responsibility.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; While testers perform the fullest duties of quality, it is something everyone should advocate and own. The ones who provide the product to others bear the fullest responsibility for communicating that the product works as intended.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Measuring the usefulness of quality in the present is unrealized, but constant discovery can bring future stability.&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Summary:&lt;/strong&gt; Tests need to have a specific focus, large test sets need to cover expected situations, and discoverability allows for constant attention to observing new behaviors. Alone, they are not enough, but together, they forge the path to ensuring high levels of application stability and sustainability. Testing is never complete -- it is an ongoing effort.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>testing</category>
      <category>softwareengineering</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
