<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Steven Lemon</title>
    <description>The latest articles on DEV Community by Steven Lemon (@twynsicle).</description>
    <link>https://dev.to/twynsicle</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F171999%2F7e1b27aa-c219-4b46-8ec6-36c3111120a2.jpg</url>
      <title>DEV Community: Steven Lemon</title>
      <link>https://dev.to/twynsicle</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/twynsicle"/>
    <language>en</language>
    <item>
      <title>Why Is My Jest Test Suite So Slow?</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Wed, 11 Jan 2023 18:44:46 +0000</pubDate>
      <link>https://dev.to/twynsicle/why-is-my-jest-test-suite-so-slow-1od</link>
      <guid>https://dev.to/twynsicle/why-is-my-jest-test-suite-so-slow-1od</guid>
      <description>&lt;p&gt;Our team is a couple of months into developing a new application, and our suite of unit 240 tests takes 46 seconds to run. That duration is not excessive yet, but it’s increasing in proportion to the number of tests. In a couple of months, it’ll take a couple of minutes to run our tests.&lt;/p&gt;

&lt;p&gt;We were surprised by this, as Jest is known for its fast performance. However, while Jest reported that each test only took 40ms, the overall run time for each test was closer to 6 seconds.&lt;/p&gt;

&lt;p&gt;The integration tests for one of our legacy applications fare even worse, taking around 35 seconds for a single test. This time puts it over the duration where the mind starts to wander, and it’s hard to focus on developing the tests. With each actual test only taking about a second, where is all the extra time going?&lt;/p&gt;

&lt;p&gt;Over the past couple of weeks, I’ve fallen down a bit of a rabbit hole trying to figure out why our test suite is so slow. Unfortunately, there are a lot of ideas out there to sort through, and few of them had any impact. Further, there doesn’t even seem to be much of a consensus on how fast our tests should be.&lt;/p&gt;

&lt;p&gt;The outcome of this investigation was a reduction of the duration of our unit tests from 46 to 13 seconds. Our integration tests saw a similar improvement, with their duration falling from 35 to 15 seconds. Our pipelines saw even more significant improvements, which I cover in &lt;a href="https://javascript.plainenglish.io/optimizing-jest-for-faster-ci-performance-with-github-actions-f4d7100c86c5" rel="noopener noreferrer"&gt;this separate article.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, I want to share the improvements that made the biggest differences, as well as look at some of the possible misconfigurations and misuses of Jest that undermine its performance.&lt;/p&gt;




&lt;p&gt;While the following example appears simple and looks like it should run quickly, it hides a surprising but very common configuration problem that will delay our tests significantly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// TestComponent.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@mui/material&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TestComponent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;Hello World!&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="nc"&gt;Button&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ComponentB.test.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;render&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;screen&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@testing-library/react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TestComponent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./TestComponent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;TestComponent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;TestComponent&lt;/span&gt; &lt;span class="p"&gt;/&amp;gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByText&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello World!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;toBeInTheDocument&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And when we run the test, we get the following result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;PASS src/components/testComponent/TestComponent.test.tsx
√ TestComponent - 1 &lt;span class="o"&gt;(&lt;/span&gt;34 ms&lt;span class="o"&gt;)&lt;/span&gt;
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Time: 3.497 s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Before we can start improving the runtime, we need to understand where Jest is spending its time. 34ms to run the test is reasonable, but it’s unclear where the other 3.463 seconds are going. Without understanding what Jest is doing, we risk wasting time trying to optimize the wrong thing. For example, a common suggestion is to improve TypeScript compilation time by switching out ts-jest or babel-jest for a faster compiler. However, because Jest makes heavy use of caching, the impact of TypeScript compilation after the first run is minimal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Jest startup time&lt;/strong&gt;&lt;br&gt;
When we start a test run, Jest needs to load itself and our test environment (typically jest-environment-jsdom). It builds a map of the dependencies between files, makes some decisions about test ordering, loads plugins, and spins up additional threads. All of this work takes about a second, but it’s entirely up to Jest and largely independent of our application, so there’s little we can do about it. Further, this setup happens once per thread, so it doesn’t scale up as the number of tests and test files increases.&lt;/p&gt;

&lt;p&gt;For anyone curious about what Jest is doing when it starts up, there is a &lt;a href="https://www.youtube.com/watch?v=3YDiloj8_d0" rel="noopener noreferrer"&gt;detailed video on the topic.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Populating the cache&lt;/strong&gt;&lt;br&gt;
The first time we run tests in our application, Jest takes a bit longer because it can’t take advantage of cached data. Jest spends the majority of that first run transpiling TypeScript. After the initial run, there might be a handful of TypeScript files that need retranspiling, but otherwise, Jest primarily uses the cached values. The uncached scenario occurs infrequently and is not a significant factor in optimizing performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Loading the test file&lt;/strong&gt;&lt;br&gt;
Before Jest can run a test file, it needs to load or mock all of the dependencies referenced by the test file and setupTests.ts. This step can add substantial overhead to the test runtime and is where we can make significant gains in test performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Performance of the actual test&lt;/strong&gt;&lt;br&gt;
Here, our test took only 34ms, and there are few gains to be made in optimizing this further.&lt;/p&gt;



&lt;p&gt;Fortunately, we don’t need to guess how much time Jest is spending on each of the above. We can use Chrome’s DevTools to profile our test run and discover exactly what each run is doing.&lt;/p&gt;

&lt;p&gt;First, open DevTools by navigating to chrome://inspect in our browser and clicking “Open dedicated DevTools for Node.”&lt;/p&gt;

&lt;p&gt;Then, inside the terminal, run: &lt;code&gt;node --inspect-brk ./node_modules/jest/bin/jest.js src/components/testComponent/TestComponent.test.tsx --runInBand&lt;/code&gt;. Once Chrome hits the default breakpoint in DevTools, navigate to the profiler tab and start recording. After the test completes, stop the profiler, view the recording, and select the “chart” view.&lt;/p&gt;


  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4y0sylwjqyey4bjkof0.jpg" width="800" height="432"&gt;(Blue) Loading Jest and jest-environment-dom. (Green) Compiling TypeScript. (Red) Loading SetupTests.ts and our test file. (Yellow) Running the test.
  




&lt;p&gt;A couple of words of caution when interpreting these charts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The presence of the profiler will decrease the performance of the test by about 30%. However, it still gives a good indication of where the time is going proportionally.&lt;/li&gt;
&lt;li&gt;The first file to hit a dependency will always perform the worst because Jest will cache that dependency for all other tests on the same thread in the same run (though notably, not between separate runs). If we were to include a second test file that included TestComponent, it would take about half of the time to load its dependencies. However, that’s still time that we could reduce. And, of course, first-time performance matters a lot for the common scenario where we’re only running one file during development.&lt;/li&gt;
&lt;/ul&gt;


  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjifk9bqe0s9ucbmmlzje.jpg" width="800" height="432"&gt;Here we have two separate test files using the same dependencies, comparing the difference in loading setupTests.ts (green) and the test files’ dependencies (blue). The second test file is considerably faster as it benefits from the cache. In addition, we notice that Jest’s setup time has only occurred once.
  

&lt;h3&gt;
  
  
  Barrel files
&lt;/h3&gt;

&lt;p&gt;Now that we have the inspector hooked up, we can immediately see the problem — almost all of the time spent loading the test file goes to loading the &lt;code&gt;@mui/material&lt;/code&gt; library. Instead of loading only the button component we need, Jest is processing the entire library.&lt;/p&gt;


  &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xb5fzkx3vomg66j5clu.jpg" width="800" height="432"&gt;Considering that all of the green happens once per thread, we’re spending a lot of our total run time just loading @mui/material (red).
  


&lt;p&gt;To understand why this is a problem, we need to understand a bit more about Barrel Files — an approach where a bunch of exports are rolled up into a single file, usually called &lt;code&gt;index.ts&lt;/code&gt;. We use barrel files to control the external interface to a component and save the consumer from worrying about a module’s internal structure and implementation. Most libraries typically have a barrel file at their root directory containing everything they export.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// @mui-material/index.ts&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Accordian&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./Alert&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./AppBar&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem is that Jest has no idea where the component we’re importing is located. The barrel file has intentionally obfuscated that fact. So when Jest hits a barrel file, it must load every export referenced inside it. This behavior quickly gets out of hand for large libraries like &lt;code&gt;@mui/material&lt;/code&gt;. We’re looking for a single button and end up loading hundreds of additional files.&lt;/p&gt;

&lt;p&gt;Fortunately, we can easily fix this problem by updating the structure of our imports to tell Jest exactly where to find the &lt;code&gt;Button&lt;/code&gt; component.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight tsx"&gt;&lt;code&gt;&lt;span class="c1"&gt;// before&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Button&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@mui/material&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// after&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Button&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@mui/material/Button&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnoesx05eb6bmqgamjke.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnoesx05eb6bmqgamjke.jpg" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;After restructuring our imports, the impact of loading the button component is greatly reduced.
  &lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;eslint&lt;/code&gt;, we can add the following rule to our config to stop more of these imports from being added in the future.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;rules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;no-restricted-imports&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;name&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@mui/material&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;message&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Please use &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;import foo from '@mui/material/foo'&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; instead.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’m picking on &lt;code&gt;@mui/material&lt;/code&gt; here, since it’s a popular and large library. Still, it was far from the only library we were importing in a suboptimal fashion. I also had to go through and fix imports from &lt;code&gt;@mui/icons-material&lt;/code&gt;, &lt;code&gt;lodash-es&lt;/code&gt;, and &lt;code&gt;@mui/x-date-pickers&lt;/code&gt;, alongside some imports from our internal libraries. Combined, updating all of these imports added up to around a 50% saving in test duration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Checking setupTests.ts&lt;/strong&gt;&lt;br&gt;
There’s a temptation for the file configured against &lt;code&gt;setupFilesAfterEnv&lt;/code&gt; in our &lt;code&gt;jest.config.js&lt;/code&gt; to become a dumping ground. It tends to inherit all sorts of one-offs and edge cases people don’t want in all their test files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;    &lt;span class="nx"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;^.+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;.(ts|tsx|js|jsx)$&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ts-jest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;tsconfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;tsconfig.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="na"&gt;isolatedModules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I suspect this comes from a misconception that this file runs once before all the tests. In reality, so that Jest can properly isolate each test file, its contents run before every single test file.&lt;/p&gt;

&lt;p&gt;We can see the impact of the &lt;code&gt;setupTests.ts&lt;/code&gt; file by looking at the flame charts from the previous step. It might reveal some expensive behavior in &lt;code&gt;setupTests.ts&lt;/code&gt; that can be moved back into the relevant test files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs326getgoxx08yh5aal7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs326getgoxx08yh5aal7.jpg" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;For example, testing-library/jest-dom adds about 300ms (100ms cached) to the start of each file extending Jest’s expect behavior. This library belongs in this file, and the impact is slight, but it demonstrates how quickly things can add up.
  &lt;/p&gt;
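&lt;p&gt;As a rule of thumb after this kind of cleanup, a healthy setupTests.ts stays tiny. A purely illustrative sketch of what that might look like:&lt;/p&gt;

```typescript
// setupTests.ts — runs before *every* test file, so keep it minimal.
// Extends Jest's expect() with DOM matchers such as toBeInTheDocument().
import '@testing-library/jest-dom';

// Anything expensive, or used by only a few suites, belongs in the
// individual test files that need it — not here.
```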

&lt;p&gt;&lt;strong&gt;Remove type-checking from the test runs&lt;/strong&gt;&lt;br&gt;
If we’re using &lt;code&gt;ts-jest&lt;/code&gt; to compile TypeScript for testing, its default behavior is to also run the TypeScript compiler’s type-checks during the test run. This check is redundant, as the TypeScript compiler already performs it as part of the build. Including it adds considerable time to the test run, particularly when Jest wouldn’t otherwise need to fire up the TypeScript compiler.&lt;/p&gt;

&lt;p&gt;To disable this behavior, we can set the following property in our &lt;code&gt;jest.config.js&lt;/code&gt; file. The &lt;code&gt;isolatedModules&lt;/code&gt; property is described in &lt;code&gt;ts-jest&lt;/code&gt;’s &lt;a href="https://kulshekhar.github.io/ts-jest/docs/getting-started/options/isolatedModules/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;exports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;^.+&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;.(ts|tsx|js|jsx)$&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ts-jest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="na"&gt;tsconfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;tsconfig.json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="na"&gt;isolatedModules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My experience with &lt;code&gt;isolatedModules&lt;/code&gt; has been mixed. Updating this setting has doubled performance in some legacy applications, while in some smaller &lt;code&gt;create-react-app&lt;/code&gt; applications, it hasn’t made a difference. Again, the flame charts let us see the impact of this additional work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg46nlrpfge7h5d7rdtme.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg46nlrpfge7h5d7rdtme.jpg" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;The impact of type-checking (red) dwarfs even that of loading @mui-material (green).
  &lt;/p&gt;

&lt;h3&gt;
  
  
  Checking for misconfigurations
&lt;/h3&gt;

&lt;p&gt;Performance improvements don’t have to come only from the codebase; some of the responsibility lies in how developers use the tooling. Scripts in &lt;code&gt;package.json&lt;/code&gt; help save typing, hide complexity, and share the best possible CLI configurations across everyone in the project. But they come with a severe downside: over time, the team forgets how to use the CLIs of their common tools and puts too much trust in the idea that the existing scripts are already optimally configured. In most projects I have joined, the scripts in &lt;code&gt;package.json&lt;/code&gt; have had a couple of significant misconfigurations, wasting a lot of time unnecessarily.&lt;/p&gt;

&lt;p&gt;People confuse scripts originally intended for their continuous integration pipelines with scripts appropriate for their local development environment. Perhaps the scripts weren’t updated with new features and changes in the tools, or maybe they’ve just always been wrong.&lt;/p&gt;

&lt;p&gt;With Jest, there are a couple of flags to avoid for tests running locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--maxWorkers=2&lt;/code&gt; — limits Jest to running in two threads, useful on a constrained CI build agent but not very useful on our powerful development machines that could be running Jest in 5 or 6 different threads.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--runInBand&lt;/code&gt; — similarly, this prevents Jest from using threading at all. While there are some situations where we don’t need threading, such as when we’re only running a single test file, Jest is smart enough to figure this out for itself.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--no-cache&lt;/code&gt;, &lt;code&gt;--cache=false&lt;/code&gt; — prevents Jest from caching data between runs. Per Jest’s docs, on average, disabling the cache makes Jest at least two times slower. Relatedly, &lt;code&gt;--clearCache&lt;/code&gt; deletes the cache directory and exits without running any tests.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--coverage&lt;/code&gt; — most local test runs don’t need to generate code coverage reports. We can save ourselves a couple of seconds by skipping this step when we don’t need it.&lt;/li&gt;
&lt;/ul&gt;
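&lt;p&gt;As an illustrative sketch (the script names and flag choices here are my own, not from any particular project), separating CI-oriented flags into their own script keeps local runs lean:&lt;/p&gt;

```json
{
  "scripts": {
    "test": "jest --watch",
    "test:ci": "jest --ci --maxWorkers=2 --coverage"
  }
}
```

&lt;p&gt;The local &lt;code&gt;test&lt;/code&gt; script defaults to watch mode, while the cost of coverage reporting and the worker cap is paid only in the pipeline.&lt;/p&gt;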

&lt;p&gt;Jest has a lot of settings, but the defaults should serve us well most of the time. It is crucial to understand the purpose behind any additional flags for the scripts in our &lt;code&gt;package.json&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Default to using watch mode
&lt;/h3&gt;

&lt;p&gt;While we’re all used to watch mode for running our application locally, it isn’t as popular for running tests. This tendency is unfortunate because, like our builds, running our tests in watch mode saves our tooling from having to recompute a lot of data. Most of Jest’s perceived slowness is in its startup time rather than the test execution, which watch mode lets us skip.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9ge5xq2dfhz9rtxx7c2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9ge5xq2dfhz9rtxx7c2.jpg" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;Similarly to how Jest caches dependencies for subsequent tests in a run, we get the same benefit for the same test rerun in watch mode.
  &lt;/p&gt;

&lt;p&gt;I suspect developers often fail to take advantage of watch mode because their IDE’s interface inadvertently encourages them not to. When we’re working on a test file, we’re used to clicking the little green “Run test” arrows next to each test case to start a test run. They’re convenient and quicker than running all the tests or trying to remember the syntax for running a subset of tests in the CLI. Further, they display the results of the tests within our IDE’s test result panel, which is more useful than logs dumped into the console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsktyz89dvo55bbeuppgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsktyz89dvo55bbeuppgy.png" alt="Image description" width="539" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With WebStorm, we can update the run configuration used by the “Run test” shortcut, letting us use them to launch the test in watch mode. We can even update Jest’s &lt;a href="https://www.jetbrains.com/help/webstorm/run-debug-configuration.html#templates" rel="noopener noreferrer"&gt;run template&lt;/a&gt; to default all “Run test” shortcuts to use watch mode.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r8nxb2nr8kb3dn5ly65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r8nxb2nr8kb3dn5ly65.png" alt="Image description" width="800" height="857"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xr3eqmfkifqsmbrxj5a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xr3eqmfkifqsmbrxj5a.png" alt="Image description" width="800" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  We don’t need to run all of the tests
&lt;/h3&gt;

&lt;p&gt;I’ve noticed that, unless they’re working on a single test file, developers tend to default to running all of the tests. This behavior is usually redundant, as Jest can figure out the subset of tests it needs to run based on the files that have changed. As our test suite grows, running the entire suite becomes unnecessarily time-consuming, though I hope the advice in this article will help limit how out of hand it gets.&lt;/p&gt;

&lt;p&gt;Rather than calling &lt;code&gt;jest&lt;/code&gt; directly, it’s a good idea to use &lt;code&gt;jest --onlyChanged&lt;/code&gt; or &lt;code&gt;jest --changedSince&lt;/code&gt;. It might not be 100% reliable, but unless we’re committing straight to master, our Continuous Integration pipelines will catch the rare situations where Jest misses a test.&lt;/p&gt;
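&lt;p&gt;For example, assuming our default branch is named master:&lt;/p&gt;

```shell
# Run only tests related to files changed in the current repository
jest --onlyChanged

# Run only tests related to files changed since the master branch
jest --changedSince=master
```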




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d0rofrxdm7kv73emguu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d0rofrxdm7kv73emguu.jpg" width="800" height="432"&gt;&lt;/a&gt;&lt;br&gt;The slowness of tests tends to scale exponentially as neglect sets in and people take less care of them.
  &lt;/p&gt;

&lt;p&gt;Test suites are rarely static; they grow in size along with our applications. Slow test suites are only going to get slower. Fortunately, with a small amount of work, we can more than halve the duration of each test. Not only does this action save us time now, but it changes the entire trajectory of our test suite’s duration and quality.&lt;/p&gt;

</description>
      <category>offers</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Merge Branches Sooner with Synchronous Code Review</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Mon, 21 Nov 2022 18:31:45 +0000</pubDate>
      <link>https://dev.to/twynsicle/merge-branches-sooner-with-synchronous-code-review-3l1p</link>
      <guid>https://dev.to/twynsicle/merge-branches-sooner-with-synchronous-code-review-3l1p</guid>
      <description>&lt;p&gt;Code review has the potential to be one of the most impactful activities we do in our day. It ensures our codebase remains readable and maintainable, catches bugs, spreads knowledge across the team, and increases our confidence in what we are about to release.&lt;/p&gt;

&lt;p&gt;However, poorly structured code review processes can actively harm the team and their work. When the turnaround time of a code review stretches into days, code review chokes the flow of work through the team. We still gain the benefits of review, but each review blocks follow-on work, and the team finds itself pulling in additional streams of work while they wait for the review to complete.&lt;/p&gt;

&lt;p&gt;Slow reviews create a negative feedback loop, where attempts to reduce the overhead of code review increase the size of each branch. Bigger, longer branches mean more difficult code reviews that take longer to complete. &lt;a href="https://medium.com/swlh/3-problems-to-stop-looking-for-in-code-reviews-981bb169ba8b" rel="noopener noreferrer"&gt;Feedback is delayed and comes after the developer has done enough work that pivoting is hard.&lt;/a&gt; The team takes on a lot of additional cognitive load, tracking many large, different features simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlp89qko8krmoxofvbzs.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlp89qko8krmoxofvbzs.jpeg" alt="Asynchronous Code Review involves a lot of wait time as discussion and feedback is staggered over several days. The branch takes 6 days to merge, blocking all subsequent work during this time." width="800" height="278"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Asynchronous Code Review involves a lot of wait time as discussion and feedback is staggered over several days. The branch takes 6 days to merge, blocking all subsequent work during this time.&lt;/em&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Our typical code review process is dictated by the defaults of the platform we use. We open a pull request, add reviewers, the code review platform emails them, and when they have time, they will look at the review and leave comments. These comments are sent back to the original developer via email notification, who will answer questions and make code fixes. Work continues back and forth asynchronously until the reviewers are happy for the code to be merged.&lt;/p&gt;

&lt;p&gt;Not all teams are slow at code review, but many are, with the above process typically taking 3–4 days to resolve. A lot of effort can be spent tweaking the above code review process and attempting to reduce its impact with little success: increasing the reviewer pool, increasing reminders, and adding even more processes trying to remove ambiguities.&lt;/p&gt;

&lt;p&gt;Or, we could dramatically shift how we structure code reviews — by making them synchronous.&lt;/p&gt;

&lt;p&gt;Rather than the code reviewer participating at some undefined time after the review is opened, they join the reviewee at their desk, or via video chat, and go through the review together. The review becomes a conversation, simple issues are fixed in the review, and once the reviewer is happy, the code is merged immediately. Rather than being a separate status with its own process, Code Review is a short window of peer programming that occurs as soon as the development of a task is finished.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foihz5af7gchpmgrdtnut.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foihz5af7gchpmgrdtnut.jpeg" alt="Synchronous Code Review removes the wait time, replacing it with pair programming." width="800" height="215"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Synchronous Code Review removes the wait time, replacing it with pair programming.&lt;/em&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;The most significant impact of Synchronous Code Review is that it strips almost all of the wait time from your code review process. While it might take a few hours for a reviewer to be available, once begun, most reviews complete within a quarter of an hour. Eliminating wait time decreases a branch’s total time in code review from days to minutes. Synchronous Code Reviews let us remove all the problems caused by slow code reviews.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Synchronous Code Review
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Reduced context shifting (for the reviewer)
&lt;/h4&gt;

&lt;p&gt;The cost of switching tasks is very high. After a disruption, it takes more time than we think to get back to where we were. We only get so many shifts per day before exhaustion adds up and starts decreasing our capabilities. Any team’s process needs to carefully account for the disruption and context shifting it causes the team.&lt;/p&gt;

&lt;p&gt;Traditional Asynchronous Code Reviews involve a lot of context-switching for the developer seeking the review. Every time they make fixes or reply to comments, they need to switch not only contexts but also branches, local setup, etc. The original developer has likely started a different task while they wait for the review, and wants to wait until they have reached a stopping point before resuming replying and fixing. Worse, this task is typically unrelated because the feature they were working on is blocked on the pending review. Reluctance to context-switch is often why async code reviews run for such a long time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuj5lrkinwdy4bzrb9rl.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmuj5lrkinwdy4bzrb9rl.jpeg" alt="Code reviews are often delayed because we wait for other work to reach a stopping point before returning to make fixes." width="800" height="215"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Code reviews are often delayed because we wait for other work to reach a stopping point before returning to make fixes.&lt;/em&gt;&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;Synchronous Code Reviews remove this context-switching because the developer never leaves their original context; code review is done in line with the task itself. (Except for a small wait for a reviewer to be available.)&lt;/p&gt;

&lt;h4&gt;
  
  
  Mentoring and learning
&lt;/h4&gt;

&lt;p&gt;One of the often discussed benefits of code review is the potential for knowledge sharing. Ideally, this works in both directions, with the reviewers exposed to new changes to the application and the reviewee learning about best practices, language features, design patterns, etc. However, in Asynchronous Code Reviews, we tend to lose the latter. The text-based format does not encourage additional exposition. Comments will suggest a change, but few will elaborate on how to make the change or, more importantly, why it is required.&lt;/p&gt;

&lt;p&gt;It’s often easier and quicker to explain ourselves vocally than by typing: offering more explanation for a proposed change, or starting tangential discussions based on what we’re reading. This communication works both ways, and a reviewer unfamiliar with a particular technique or pattern can ask for elaboration, learning not just how it works, which they could garner from googling during an async review, but why the original developer chose that solution and what alternatives they considered.&lt;/p&gt;

&lt;h4&gt;
  
  
  Testing as part of the review
&lt;/h4&gt;

&lt;p&gt;In my experience, code reviewers don’t tend to run the code they are reviewing. It’s time-consuming to stash what you’re currently working on, switch branches, pull down dependencies, then figure out how to reproduce the scenario covered in the review. In asynchronous reviews, reviewers tend to minimise their own context switching, even at the cost of the quality of the review.&lt;/p&gt;

&lt;p&gt;Synchronous Code Reviews remove that barrier, as the exact setup used to create what you’re seeing in the review is sitting right in front of you. Access to a demo can add a surprising amount of helpful context to the review — particularly as we think about the things that weren’t done. Seeing the code alongside the demo can help identify bugs and problems that you wouldn’t have spotted from the code alone.&lt;/p&gt;

&lt;p&gt;Even if your team has dedicated testers, a quick test as part of code review will often identify issues that save a few cycles back and forth during the manual test phase.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reduced ambiguity
&lt;/h4&gt;

&lt;p&gt;Asynchronous Code Reviews end up with many ambiguities that slow down the process and create frustration, uncertainty and doubt for the individuals involved. It can often be hard to tell:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Was the reviewer happy with the response to a comment?&lt;/li&gt;
&lt;li&gt;Can I disagree with that reviewer’s feedback?&lt;/li&gt;
&lt;li&gt;Is that comment a must-fix or just a suggestion?&lt;/li&gt;
&lt;li&gt;Is the review still open because the original developer is finishing something else up before merging, or were they waiting on more feedback?&lt;/li&gt;
&lt;li&gt;Was a reviewer wanting to look at the review again? Are they happy with it, or have they forgotten about it?&lt;/li&gt;
&lt;li&gt;Does someone not want to do the review? Did they miss the email, or has it slipped their memory?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And I could go on. Work frequently gets stuck in review because each participant thinks they are waiting on the other. Asynchronous Code Reviews require a lot of discussion about their status in other channels.&lt;/p&gt;

&lt;p&gt;Synchronous Code Reviews avoid all doubt about the status of a review and can only end in one of two ways — the reviewer is happy, and the branch merges, or the reviewer pauses the review after providing a list of fixes they would like made before they return.&lt;/p&gt;

&lt;h4&gt;
  
  
  Less to keep track of
&lt;/h4&gt;

&lt;p&gt;Long-lived branches add to the team’s cognitive load. As reviewers, some of our attention is taken up keeping track of the discussion and changes in the reviews we are active in. Even if you’re not a reviewer in a pull request, you’ll often still need to be aware of the long-lived incoming branches as they affect the application you are working on.&lt;/p&gt;

&lt;p&gt;Software applications are complex enough without also having to keep track of multiple variations over time. Having to stay aware of how your work will mesh with other in-progress branches can be as draining as context switching.&lt;/p&gt;

&lt;p&gt;Synchronous Code Review helps here because it lets branches merge as soon as development is complete. It brings us closer to the codebase having a single version, and a single source of truth. The development team no longer needs to consider code in a transient state.&lt;/p&gt;

&lt;h4&gt;
  
  
  Easier to highlight positives
&lt;/h4&gt;

&lt;p&gt;Code Reviews shouldn’t be solely about finding problems; they should also include identifying positives and expressing gratitude. Not only does this make the review experience more pleasant, but it also helps reinforce positive actions. This often takes the form of little comments such as “that’s a clean way of doing that,” “I like what you’ve done there,” and “thanks for fixing that; it’s been bugging me for a while.”&lt;/p&gt;

&lt;p&gt;In my experience, this doesn’t happen as much in Asynchronous Code Reviews. In comparison, when conducting an in-person or video call code review, positive feedback tends to happen organically.&lt;/p&gt;

&lt;h4&gt;
  
  
  Rubber-ducking
&lt;/h4&gt;

&lt;p&gt;Explaining a problem will often yield additional insights for the explainer. This effect can also occur during Synchronous Code Review; as you’re explaining your work to the reviewer, it’s not uncommon to realize something you missed. The code review stage is a little late for such realizations, but these insights are better late than never.&lt;/p&gt;

&lt;h4&gt;
  
  
  Balancing out the downsides of Synchronous Review
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Only one reviewer
&lt;/h4&gt;

&lt;p&gt;The biggest downside of Synchronous Code Review is that it puts a hard limit on the number of reviewers. Even teams practicing Asynchronous Code Review and requiring a single approval will still tend to end up with two to three people looking at the review.&lt;/p&gt;

&lt;p&gt;The more people you have looking at the review, the more chances you have to spot issues. You get people with different perspectives, experiences, and levels of thoroughness. While a single person performing a Synchronous Code Review tends to be more effective than one person performing an Asynchronous Code Review, they will spot fewer issues than two people performing an Asynchronous Code Review.&lt;/p&gt;

&lt;p&gt;In addition, fewer reviewers means fewer opportunities to spread knowledge. Only two people on the team are now familiar with the change in the review. This limitation is less important if the rest of the team is clustered on the same feature, since they will get to see it organically as they continue to iterate on the feature.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/swlh/improving-your-teams-code-review-culture-a76cc82621e6" rel="noopener noreferrer"&gt;It’s worth considering how much pressure we’re putting on expecting perfect code reviews. Some organizations treat code reviewers as the “guardians of quality,” which tends up be an unhelpful attitude.&lt;/a&gt; Further, decreasing the quality of code reviews to increase flow will see that quality picked up elsewhere. When the team can cluster on a feature, you benefit from the aggregate of everyone’s contributions. In comparison, a blocking-heavy process ensures only one person works on a feature at a time, limiting everyone else’s involvement to solely the code review. Increasing flow lets us decrease the size of each branch and the subsequent pull request, decreasing the size and risk of each code review.&lt;/p&gt;

&lt;p&gt;Finally, it’s important to refrain from restricting who can perform code reviews. The team might be nervous about the only reviewer on a task being a junior developer. However, limiting the pool of code reviewers increases the time spent waiting for review, pushing it over the threshold where the original developer maintains their context. It limits knowledge sharing further, prevents others from learning how to code review, and breaks code review’s social contract of reciprocation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Increased context switching for the reviewer
&lt;/h4&gt;

&lt;p&gt;A Synchronous Code Review is much more disruptive for the reviewer than the async approach. Whereas async reviews can be filler activities, fitting into gaps and wait times, Synchronous Code Reviews require more time commitment, particularly if you end up pair programming fixes. Additionally, Synchronous Code Reviews operate on a much shorter time frame: as a reviewer, you need to get to the review within a couple of hours, whereas async gives you a couple of days — even though it really shouldn’t.&lt;/p&gt;

&lt;p&gt;Reduced code review turnaround decreases the duration of our branches and lets us create smaller, more frequent code reviews, but we also need to balance this against the additional disruption it causes. Working in small batches is generally good, but going too small can be disruptive. Finding the right balance is something the team will need to discover as they go.&lt;/p&gt;

&lt;p&gt;There are many ways to reduce the impact of context-switching — as a reviewer, give estimates of when you will be ready, “I’ll come over at one” will give a lot more information and time to prepare than “I’ll be free sometime later this afternoon.” Reviewees should be as prepared as possible, read through everything in your code review at least once, make sure the branch is up to date from the main branch, check that continuous integration is passing, and have a demo ready to go.&lt;/p&gt;

&lt;h4&gt;
  
  
  Letting explanations replace self-evident code
&lt;/h4&gt;

&lt;p&gt;In a Synchronous Code Review, reviewees will tend to give the reviewer a guided tour of the code, walking them through what they have done and highlighting critical areas before letting the reviewer scroll through at their own pace. Taking this additional time makes the review quicker and more accessible for the reviewer.&lt;/p&gt;

&lt;p&gt;However, we need to be careful: code needs to be self-explanatory. Someone should be able to read the code a year later and understand it without needing to ask the author for an explanation. Asynchronous Code Review doubles as a test of how self-documenting and clear the code is. Your work needs to make sense with no additional context to pass the review.&lt;/p&gt;

&lt;p&gt;When performing a Synchronous Code Review, we must be careful that the code is still clear without explanation. It’s helpful to hear the walkthrough, but at the same time, we need to be careful that the walkthrough isn’t replacing good code. As a general rule of thumb, if the reviewer needs to ask why something was done, or how something works, it should be a hint that additional clarification is needed. Done carefully, Synchronous Code Reviews can be beneficial for code quality because they let developers hear in real time what code causes confusion, is harder to parse, or is not immediately apparent.&lt;/p&gt;

&lt;h4&gt;
  
  
  No written record of the discussion
&lt;/h4&gt;

&lt;p&gt;In theory, Asynchronous Code Reviews mean that the history of the review’s discussion is retained in the repository. Switching to Synchronous Code Reviews will mean you lose that history.&lt;/p&gt;

&lt;p&gt;In reality, the conversation tends to leak out, with developers discussing particularly tricky things in person or via video. Additionally, it’s rare that we go back and look at old review discussions, so I’m unsure of the value we are losing here.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other advice
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Retain the ability to do Asynchronous Code Reviews
&lt;/h4&gt;

&lt;p&gt;Sync code reviews could allow you to remove the Code Review status from Jira entirely. Code Review is no longer a separate step that needs tracking and instead folds into the end of the “in dev” status. However, it’s still worth retaining the ability to do Asynchronous Code Reviews when required.&lt;/p&gt;

&lt;p&gt;There will be special cases that warrant async reviews: you may need feedback from other teams or from multiple people, or key people may be away.&lt;/p&gt;

&lt;p&gt;Small reviews that are non-blocking might not be worth the disruption of a Synchronous Code Review. Similarly, work that is simple, non-controversial, and likely to elicit little feedback doesn’t have much to gain from Synchronous Code Reviews.&lt;/p&gt;

&lt;p&gt;And, of course, nothing says you need to commit fully to either approach. You could have a mix and let the developer creating the review decide whether a Synchronous or Asynchronous review is more appropriate.&lt;/p&gt;

&lt;h4&gt;
  
  
  Don’t try to fix too much within the review
&lt;/h4&gt;

&lt;p&gt;There’s a careful balance to deciding what changes can be fixed in the review and what changes the reviewer should return for. Returning to a review later, after fixes are made, is another context switch for the reviewer. Trying to do too much within the review consumes more of the reviewer’s time, and can lead to rushed changes.&lt;/p&gt;

&lt;p&gt;However, this is another problem that should reduce as the team increases its flow. Smaller, more frequent code reviews that happen sooner in the development of a feature will tend to need fewer fixes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Spread the code review load equally
&lt;/h4&gt;

&lt;p&gt;Avoid ending up with a single person doing most of the reviews. Not only does it consume a lot of their time, but it also limits the spread of knowledge across the team. Everyone in your team should be able to review everyone else, regardless of experience or specialization. In most situations, people should request reviews from the entire team rather than directly asking a single reviewer.&lt;/p&gt;

&lt;h4&gt;
  
  
  Switch up who is driving
&lt;/h4&gt;

&lt;p&gt;During the review, both the reviewee and reviewer should have access to the keyboard and mouse. A review might start with the reviewee describing the problem they were tackling and giving a brief walkthrough of how they solved it. Control should then switch to the reviewer so they can read through the code at their own pace. If fixes are required, control might switch back and forth as changes are made. The demo, which could make sense at any point in the review, should be driven by both participants — the reviewee might want to demonstrate specific scenarios, while the reviewer will want to check for scenarios they think the reviewee might have missed.&lt;/p&gt;

&lt;p&gt;When performing an in-person review, switching control back and forth happens organically. However, during a video call, you will need to be more mindful of letting control switch back and forth.&lt;/p&gt;

&lt;h4&gt;
  
  
  Where to use Synchronous Code Review
&lt;/h4&gt;

&lt;p&gt;Neither synchronous nor asynchronous is clearly better than the other. For everything you gain from Synchronous Code Review, there are a lot of downsides to consider. These downsides eventually diminish as an increase in flow lets you shrink the size of your branches and reviews. They can also be mitigated by choosing the right approach for the team and situation.&lt;/p&gt;

&lt;p&gt;For example, Synchronous Code Review works best in the following situations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The team tends to overlap and cluster on features. Having fewer reviewers matters less because the entire team is working in that area. For this team, the cost of blocking branches is much higher than for a spread-out team.&lt;/li&gt;
&lt;li&gt;The team already has a high degree of communication and may lean towards extraversion. The team has high trust and is comfortable giving honest feedback to each other.&lt;/li&gt;
&lt;li&gt;Everyone brings the right attitude to code reviews—everyone is willing to set their ego aside and spend the time required to ensure the long-term health of the codebase.&lt;/li&gt;
&lt;li&gt;The team has a lot of juniors and intermediate developers who would benefit from the extra mentoring opportunities afforded by Synchronous Code Review.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;To summarize:&lt;/p&gt;

&lt;p&gt;For reviewers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set reasonable estimates for when you will be ready to join a review; don’t drop what you’re doing to do a review.&lt;/li&gt;
&lt;li&gt;Know when to walk away from the review and let the developer make changes by themselves.&lt;/li&gt;
&lt;li&gt;Talk a lot, give positive feedback, and call out when something is difficult to parse. Explain why something is confusing. Explain why you want a change made. Go on educational tangents.&lt;/li&gt;
&lt;li&gt;Ensure you have access, at the appropriate moments, to the keyboard and mouse so that you can go through the code at your own pace and test scenarios yourself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For reviewees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pay attention to when the reviewer needs help understanding what they are reading; don’t just explain in person—add a comment, or refactor the code to clear the confusion.&lt;/li&gt;
&lt;li&gt;Be comfortable giving the reviewer time to read quietly without interruption.&lt;/li&gt;
&lt;li&gt;Be prepared to wait several hours for the reviewer to get to you; have another task with low context-switching impact to move onto in the meantime.&lt;/li&gt;
&lt;li&gt;As you explain your review, you will occasionally realize something you missed. It happens more often than you think; don’t be embarrassed or try to hide it. Fix it in the review, or ask the reviewer to return later.&lt;/li&gt;
&lt;li&gt;In most cases, request reviews from your team channel rather than from a specific reviewer.&lt;/li&gt;
&lt;li&gt;Be kind to the reviewer, be as prepared as possible, and have a demo ready.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>fullstack</category>
      <category>community</category>
    </item>
    <item>
      <title>Does Your Team Manual Test Before or After Merge?</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Fri, 05 Feb 2021 18:00:52 +0000</pubDate>
      <link>https://dev.to/twynsicle/does-your-team-manual-test-before-or-after-merge-b4n</link>
      <guid>https://dev.to/twynsicle/does-your-team-manual-test-before-or-after-merge-b4n</guid>
      <description>&lt;p&gt;Testing before merging helps ensure that the main branch is always in a good releasable state. However, it also comes at the high cost of keeping each branch alive for an extra couple of days.&lt;/p&gt;

&lt;p&gt;In my experience, the longer a branch stays stuck, the greater the hassle for the developers: more difficult merges, more simultaneous mental models of the application to switch between, increased difficulty sharing code. Inevitably, the developers start creating branches off of branches to simplify their situation.&lt;/p&gt;

&lt;p&gt;This problem recently came to a head when every bug the testers found had already been identified and fixed in a subsequent branch. While each branch was related to an independent story, since we were creating a new application that was still in its early stages, each change ended up overlapping the same regions anyway.&lt;/p&gt;

&lt;p&gt;So the question the team is wondering: is the hassle of testing in-branch really worth it?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>testing</category>
    </item>
    <item>
      <title>The bugs that shouldn't be in your bug backlog</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Thu, 25 Jun 2020 19:20:29 +0000</pubDate>
      <link>https://dev.to/twynsicle/the-bugs-that-shouldn-t-be-in-your-bug-backlog-3e48</link>
      <guid>https://dev.to/twynsicle/the-bugs-that-shouldn-t-be-in-your-bug-backlog-3e48</guid>
      <description>&lt;p&gt;There is never enough time to fix all of our application's bugs. So everything we don't fix gets moved to the bug backlog, a list somewhere within our ticket tracking software. If a bug isn't critical enough to be fixed straight away, it ends up in the backlog. If a bug report is ambiguous, missing vital information, or making odd requests, it ends up in the backlog. All of the bugs that no one wants to fix, but no one wants to be the one to close either, end up in the backlog.&lt;/p&gt;

&lt;p&gt;Bug backlogs are never easy to work with, and their nature tends towards helping accumulate bug tickets rather than helping anyone resolve their contents. Many of the tickets that end up in the bug backlog are not worth fixing. They are not ready to start or are of dubious value. They need to be furnished with more information, triaged, closed, or moved to a more appropriate location. Keeping them as-is will ensure that they don't get fixed, and will make it harder to address the real bugs in the backlog. If you're a developer scrolling through the list, you will avoid these bugs; if you're someone who has decided to take charge of the backlog, you should be dealing with these first. What follows is a list of the types of bugs that shouldn't be "sitting" in your bug backlog.&lt;/p&gt;

&lt;h4&gt;
  
  
  The "just in case it happens again" bug
&lt;/h4&gt;

&lt;p&gt;No one is sure how to reproduce this issue, and there isn't enough information to even start looking. This type of bug ticket is kept alive in the hope that it will happen again, and the second time around, someone will uncover a bit more information. These bugs stay open for years without addition, taking up space and adding noise. However, closed bugs are still searchable, and closed bugs can be re-opened, so there is no real reason to leave them open.&lt;/p&gt;

&lt;p&gt;Diagnosis: Remove.&lt;/p&gt;

&lt;h4&gt;
  
  
  The old bug
&lt;/h4&gt;

&lt;p&gt;If you have never taken steps to organize, triage, or clean your bug backlog, it will contain bugs as old as your application itself. If a bug has gone this long without being fixed, its ticket is probably low quality and falls into some of the other categories on this list. Regardless, it should be closed. If no one has reraised the bug in the last year, it can’t have been that important. The bug might have been resolved under another ticket, or a refactor or redesign has rendered it irrelevant. Resources and attachments, such as log files, test files, and videos, may have gone missing. The report is no longer reliable, and the expected outcome may be incorrect as the application has changed.&lt;/p&gt;

&lt;p&gt;Diagnosis: Remove.&lt;/p&gt;

&lt;h4&gt;
  
  
  The duplicate
&lt;/h4&gt;

&lt;p&gt;The longer the bug list, the harder it is to check for duplicates, and the more duplicates make their way into the bug list. They add noise and split information between tickets. A ticket will often be picked up and resolved, but all of its duplicates will remain, creating confusion when later attempts to reproduce them fail.&lt;/p&gt;

&lt;p&gt;Diagnosis: Track down all of the duplicates and combine them into a single ticket.&lt;/p&gt;

&lt;h4&gt;
  
  
  The half-discussed bug
&lt;/h4&gt;

&lt;p&gt;The comments on this bug show that an extended discussion took place six months ago without a resolution. Perhaps the expected outcome was uncertain, or there was controversy around the best way to solve it. It is unclear not only what is expected, but also whether it still belongs to any of the people discussing it. The discussion may have been so detailed and fraught as to suggest that this bug will require a lot of specialist knowledge in the affected area of the application.&lt;/p&gt;

&lt;p&gt;Diagnosis: Assign this ticket to someone involved in the discussion to make a final decision.&lt;/p&gt;

&lt;h4&gt;The tacked-on bug&lt;/h4&gt;

&lt;p&gt;The comments on the bug ticket detail a new bug, unrelated to the original bug described in the details. Perhaps someone fixed the original bug, but subsequent testing has revealed a new error. It can be confusing to determine what problem you need to address—has the bug ticket been left open because of the new bug, or do both still need fixing? Is whoever fixed the original bug working on the second? Further, the "tacked-on" bug makes it complicated to determine the severity and priority of this issue, as it might not match the original's severity and priority.&lt;/p&gt;

&lt;p&gt;Diagnosis: Split the ticket into individual tickets.&lt;/p&gt;

&lt;h4&gt;The bug that should be a feature&lt;/h4&gt;

&lt;p&gt;This issue meets the definition of a bug until it comes time to fix it. The solution required is more akin to developing a new feature than fixing a bug. There might be complicated requirements to decide on and enumerate, new UI to design, or new architecture to create to solve the problem. Whereas an ordinary bug might need input from developers, testers, and product owners, this bug might need input from UI/UX, technical leads, DevOps, and other specialists. Rather than addressing it as a bug, it might make more sense to put the problem through your team's more powerful and robust feature pipeline.&lt;/p&gt;

&lt;p&gt;Diagnosis: Move to your feature backlog.&lt;/p&gt;

&lt;h4&gt;The feature request&lt;/h4&gt;

&lt;p&gt;While some bugs need to be treated more like features, some features manage to make their way into the bug list. The reporter is either misunderstanding the existing behavior or accidentally requesting new behavior. Either way, there is nothing incorrect that needs fixing. These bugs are problematic because they skip the proper channels for feature work. Features requested via the bug backlog can bypass the product owner, UI/UX, and everyone else who should be involved in feature creation and prioritization.&lt;/p&gt;

&lt;p&gt;Diagnosis: Move to your feature backlog.&lt;/p&gt;

&lt;h4&gt;The bug with missing fields&lt;/h4&gt;

&lt;p&gt;Without all of the information at hand, it can be challenging to determine whether this bug is appropriate to pick up. It might be missing replication steps, priority and severity, an expected outcome, or labels that indicate the part of the application affected. You might go ahead and start it anyway, wasting hours or days gathering the details yourself, only to find that the bug has no value or never occurs in the wild. Maybe no one wants it completed, or it would take longer to complete than you have time available. You might find the bug's source is not where you first expected, and that you just aren't familiar enough with the affected region of the application to create a good fix.&lt;/p&gt;

&lt;p&gt;Diagnosis: Either you or the original reporter need to populate the missing fields.&lt;/p&gt;

&lt;h4&gt;The high-priority bug&lt;/h4&gt;

&lt;p&gt;When your bug backlog gets large and disorganized enough, bugs that are of critical importance can end up getting lost. Perhaps the tickets don't have enough information, no one realizes just how important they are, the noise of other bugs drowned them out, or they were waiting for someone to furnish them with extra explanation or details that never came. Whatever the cause, these sorts of bugs need resolving, and given their urgency, the bug backlog is the wrong place for them.&lt;/p&gt;

&lt;p&gt;Diagnosis: Move this bug to your high-priority bug list or assign it directly to a team.&lt;/p&gt;

&lt;h4&gt;The low-value bug&lt;/h4&gt;

&lt;p&gt;Some bugs in the backlog are truly worthless. They might never affect anyone, or their impact is too small to be noticeable. The path to replicate them might be convoluted, or there are already other measures to prevent them from causing harm. The cost of fixing these bugs will outweigh any value that you could obtain by resolving them. Perhaps these bugs were originally logged when a feature was still in development, when the feature was fresh in everyone's minds, and bugs were comparatively easy to fix. However, time has moved on, and the scales have shifted; the bugs have remained low value, but have become disproportionately time-consuming and difficult to resolve.&lt;/p&gt;

&lt;p&gt;Diagnosis: Remove.&lt;/p&gt;

&lt;h4&gt;The blocked bug&lt;/h4&gt;

&lt;p&gt;Many bugs end up with a blocked status without any indication of why. Once blocked, a bug is unlikely to become unblocked, as we eventually lose track of what initially stopped it. Further, a bug buried in the bug backlog isn't helping to resolve the blocker. In my experience, resolving blockers requires active attention, which doesn't happen when the blocked items languish deep in a list no one is reading.&lt;/p&gt;

&lt;p&gt;Diagnosis: This one can be tricky; you may need to create a process for surfacing blocked bugs.&lt;/p&gt;

&lt;h4&gt;The bug that everyone has picked up and put back down&lt;/h4&gt;

&lt;p&gt;Some bugs are particularly intractable. They might be challenging to reproduce or fix. They may take more time to resolve than anyone has to spare. Perhaps no one could find an owner to decide what the expected outcome should be, or the bug is also guilty of some of the other criteria in this list. Over time, half of the development team has had an attempt at it before admitting defeat and returning the bug to the backlog. Determining whether a bug has gotten into this state can be difficult, as each individual's investigation and subsequent retreat will often occur without leaving any fingerprints on the bug ticket. Regardless of the cause, this bug has chewed up the time of many separate investigations, but it is unlikely to be solvable in its current state.&lt;/p&gt;

&lt;p&gt;Diagnosis: Someone needs to decide whether this bug is still valuable. If so, you will need to make space in a team's capacity to accommodate it.&lt;/p&gt;

&lt;h4&gt;The "clean up this code" ticket&lt;/h4&gt;

&lt;p&gt;Few teams have effective methods for tracking technical debt, and when it does get written down, you tend to find it recorded in strange and inappropriate places, including as bugs within the bug backlog.&lt;/p&gt;

&lt;p&gt;Diagnosis: Find a better method of tracking debt and move the ticket there.&lt;/p&gt;

&lt;h4&gt;The bug that is not worth the damage to fix&lt;/h4&gt;

&lt;p&gt;Thankfully a rare occurrence, this bug is seldom worth fixing. Its replication steps may be convoluted or unlikely to occur in practice, and without disproportionate restructuring, any solution will cause significant damage to your codebase. The hack or workaround required to fix this bug will place overhead on all future development and testing. You need to make a judgment call on whether the impact is worth it.&lt;/p&gt;

&lt;p&gt;Diagnosis: Put some logging in place to gauge how often it is happening in the wild before fixing it.&lt;/p&gt;

&lt;h4&gt;The bug that belongs to a different team&lt;/h4&gt;

&lt;p&gt;When you have multiple teams working on the same product, each team will end up with areas of the application that they have specific or recent experience working with. In this situation, the teams might also end up sharing the same bug backlog. However, in this configuration, only a fraction of the backlog is suitable for any one team's skill set. The more teams sharing a backlog, the harder it is to find appropriate work in it, and the higher the chance of accidentally starting work on a bug that is outside of your individual experience and knowledge.&lt;/p&gt;

&lt;p&gt;Diagnosis: This is a systemic issue; either create multiple separate bug backlogs or figure out a labeling system.&lt;/p&gt;

&lt;h4&gt;The predicting-the-future bug&lt;/h4&gt;

&lt;p&gt;Rather than describing a current problem, these bugs make some predictions about the future state of the application and extrapolate the problems that might occur. Typically they linger in the bug backlog in the hope that they will one day be relevant. "We need to make x more robust since it will cause us problems once we move to y." "If we ever make this API public, then z will cause us problems." These bugs tend to be political statements about the future direction of the application rather than actual problems that need fixing. Regardless, when and if you make any significant changes in the future, you will probably rediscover these issues anyway, so there is little value in keeping them around.&lt;/p&gt;

&lt;p&gt;Diagnosis: Remove.&lt;/p&gt;

&lt;h4&gt;The copy-and-paste from a crash reporter&lt;/h4&gt;

&lt;p&gt;The descriptions in the tickets for these bugs are just a stack trace copy-pasted from one of your application's third-party crash reporters. A stack trace alone isn't enough to go on; most of it will just be a list of system calls. To fix the underlying bug, you need to investigate and address the exceptions and logs accumulating in your analytics tools. However, creating a bug ticket with insufficient information that sits in the backlog for six months won't achieve this.&lt;/p&gt;

&lt;p&gt;Diagnosis: Remove.&lt;/p&gt;

&lt;h4&gt;The bug that was for a moment in time&lt;/h4&gt;

&lt;p&gt;We don't report bugs assuming that it will take a year or more to resolve them. Instead, we create bug reports with some assumptions about the current state and priorities of the project. Perhaps a ticket was created on the assumption that its creator or their team would also pick it up, but it fell by the wayside. Perhaps the bug was addressing something topical at the time it was created but is now a low priority. Once they have sat in the queue for long enough, bugs go out of date.&lt;/p&gt;

&lt;p&gt;Diagnosis: Verify whether the ticket is still useful and relevant.&lt;/p&gt;

&lt;h3&gt;The problems of a disordered backlog&lt;/h3&gt;

&lt;p&gt;Now that we understand the sorts of bugs that cause us trouble, it is worth elaborating on why they don't belong in your bug backlog.&lt;/p&gt;

&lt;p&gt;Typically, we interact with the bug backlog in one of two ways: either we ignore it and pretend it doesn't exist, or we turn to it as a source of work when there is a lull in feature development and priority bugs. We're looking for something we can complete in a fixed timeframe to keep us busy until the next piece of work is ready to start. However, this approach does not line up with what is in the backlog. It is easy to spend all of the time we're trying to fill searching through the backlog and doing light investigations just to find something we can pick up. Even when we finally settle on something, we're frequently caught out by ambiguous bugs that are much more time-consuming than anticipated, disrupting our other work for weeks. Alternatively, bugs will require input from other roles, such as product owners clarifying outcomes, or testers helping us replicate the issues. But in doing this, we accidentally push additional work onto the same people who are already flat out preparing the next round of feature work for us.&lt;/p&gt;

&lt;p&gt;Without prioritization or order, it is easy to think that all bugs in the backlog have an equal priority. Even though there should be no high-priority bugs in it, the backlog will still contain a variety of priorities, from unimportant to moderate. Regardless, developers will often go after the lowest-hanging fruit first, working on the interesting or easy bugs of low importance over those of moderate priority. Alternatively, we spend too much time searching through the backlog looking for something that we perceive has value, forgetting that assigning value and priority is not our role.&lt;/p&gt;

&lt;p&gt;Bug backlogs can be a significant source of wasted time. It is easy to fall into the trap of starting work on a bug, putting it back because it doesn't meet some start criteria, then having another developer come along and repeat the process. Even when we keep going, we often need to choose between writing off the time already spent versus the additional time it will take to get up to speed in an area of the application that is outside our specialties and responsibilities. Bugs are not free to fix. Even when we don't have feature development or pressing deadlines to focus on, low-priority bugs compete for our time with other tasks like paying down technical debt, automation and quality of life improvements, preparing for future work, and upskilling.&lt;/p&gt;

&lt;p&gt;The final risk of an unmaintained backlog is that, over time, developers start to lose faith in the value of fixing the bugs in the backlog. When we feel we are not creating any value and no one cares about the trivial problems we are fixing, it is that much easier to pretend the bug backlog doesn't exist.&lt;/p&gt;

&lt;h3&gt;Taking back control&lt;/h3&gt;

&lt;p&gt;To get the bug backlog to a state where it isn't standing between your team and their ability to fix bugs, you have several options. First off, someone can go on a culling spree, closing the majority of the bugs in the backlog, and resetting it to a manageable size. However, depending on the amount of rationale needed to close a bug, this can take some time, and it does little to prevent the problem from reemerging in the future. Alternatively, you can shift your team's culture and empower everyone to feel they can close bugs or recommend they be closed. Too often, we worry about making a bad call or offending whoever opened the bug. However, the best approach I have been a part of is to have someone take ownership of the bug backlog. The new owner can hold regular triages, bounce any bugs that don't meet the standard back to their creator and direct each bug towards the team best equipped to resolve it. The net result will be a couple of people spending a bit more time in meetings, but overall everyone spends a lot less time dealing with the backlog.&lt;/p&gt;

&lt;p&gt;Even in companies with otherwise high levels of process and organization, I have seen the bug backlog treated as an afterthought. Either we delegated resolving its contents to a fictional future time, where we would have all of the time we needed, or we had expectations about tackling the backlog in real-time that simply failed to line up with reality. It's naive to think that a developer has the free time to work on unimportant bugs. For any bug within the backlog to be completed, it needs to be as easy and fast as possible to complete - whether that means improving the quality of the bug tickets, or cleaning out the noise, time-wasters, and general trash.&lt;/p&gt;

</description>
      <category>productivity</category>
    </item>
    <item>
      <title>3 Traps That Lead Developers to Stop Learning</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Tue, 19 May 2020 18:32:00 +0000</pubDate>
      <link>https://dev.to/twynsicle/3-traps-that-lead-developers-to-stop-learning-jg8</link>
      <guid>https://dev.to/twynsicle/3-traps-that-lead-developers-to-stop-learning-jg8</guid>
      <description>&lt;p&gt;As software developers, learning is a vital part of our role. We work in an ever-changing field where new technologies and ideas are continually being introduced. The high level of complexity that we deal with means there is always a better way of tackling a given problem. We wear so many hats and need so many different skills that it can be challenging just keeping up to date, let alone expanding our knowledge. &lt;/p&gt;

&lt;p&gt;Despite the dynamic nature of our chosen field, I often see developers who don't feel like they are learning as much as they want to be. They don't feel they have sufficient opportunities to keep their skills relevant and up to date, or they have exhausted what is available. They can grow frustrated and become passive, delaying growth and waiting for the right set of circumstances to present itself.&lt;/p&gt;

&lt;p&gt;It seems that many developers have accumulated misconceptions about learning in the working world. Perhaps we failed to transition from the methods that were effective while we were learning at university. We mistakenly think that learning has to occur in our own time, and wait for the time, energy, and motivation to start a side project. We think that our manager should be providing us with training resources and a learning plan. We believe that the only things worth learning are the newest and shiniest technologies, and so wait for our company to start adopting them before we resume learning.&lt;/p&gt;

&lt;p&gt;In saying this, I don't want to tell you how you should or shouldn't learn. Nonetheless, it is crucial to recognize when you want to learn but are feeling stuck, looking in the wrong places, or stressing and burning yourself out spinning your wheels going nowhere.&lt;/p&gt;

&lt;h3&gt;Thinking that learning has to happen outside of work&lt;/h3&gt;

&lt;p&gt;We have become surrounded by this mythos of coding in our personal time. Blog posts present aspirational stories of what the authors have been able to put together. Tales of people who've built their own company from a side-project. Open source projects that we use daily that are built and maintained in someone's own time. Our co-workers show off their side-projects and participate in programming competitions. Online threads perpetuate the idea that 'the best' developers also code at home. The recent trend of listicles titled "Five side project ideas that will turn you into a great developer!" We treat writing software as a hobby, as well as a job. Finally, add to the mix a tendency towards individualism and self-reliance, and the result is the idea that we should be coding, learning, and working all hours of the day.&lt;/p&gt;

&lt;p&gt;Now, don't get me wrong; personal projects can be very satisfying. You get to build things the way that you want to. You can tinker and hack away at interesting problems. You get to build something that makes your life easier or gives back to the community. Perhaps you're working in a terrible dead-end job, and writing code at home is a way to remind yourself that you still enjoy creating software. And when it is finally time to change jobs, we need to deal with fussy recruiters who expect us to come pre-loaded with experience in a long list of technologies that we have never used.&lt;/p&gt;

&lt;p&gt;But what we miss is that the people doing all of these things in their own time are usually doing it because they enjoy it. That's not to say the rest of us don't like developing software, but maybe eight hours a day is enough, or what we enjoy is the teamwork and collaboration. We each have hobbies and interests outside of software and code.&lt;/p&gt;

&lt;p&gt;The trap we risk falling into is attempting to adopt two opposing viewpoints: that learning and growth can only occur in our own time, and that coding is the last thing we want to do in our free time. This contradiction paralyzes us. Either we give up learning, or it leads us in the wrong direction: rather than looking for opportunities for learning and growth that can happen at work, we try to figure out how to convince ourselves to start a side project. I have fallen into this situation more times than I can count and have a directory littered with side projects that I started because I thought I should, but without having the time or motivation to make any real progress.&lt;/p&gt;

&lt;p&gt;Finally, I suspect we also overestimate the value of learning from home. At work, we are working on a decently sized project; there are many good examples available to learn from, and its size provides us with lots of challenging design and structural problems to tackle. We have co-workers with knowledge and experience that we can leverage, and who can review our code. What we are learning can be immediately applied. We can spend more time getting things correct, as there is usually a lower tolerance for bugs and technical debt. Compare this to personal projects, which can take a long time before they get big enough that we are learning something besides rote memorization of method names and project structure; where, when we need help, we can spend hours on StackOverflow, trying to get unstuck.&lt;/p&gt;

&lt;h3&gt;Expecting our managers to organize our learning&lt;/h3&gt;

&lt;p&gt;The pendulum can swing too far in the opposite direction, where we decide not only do we not want to learn at home but that we don't want to be responsible for our learning at all. We adopt the view that education should be the responsibility of our employer and be in the form of training courses, conferences, Pluralsight subscriptions, or allocated "learning time" on the job. We expect that our managers are responsible for organizing training, initiatives, and learning plans, or introducing new technologies into the workplace for the sake of our learning. When this inevitably doesn't happen, we shift into holding mode, waiting for our manager to see the value of education, or to open up the purse strings.&lt;/p&gt;

&lt;p&gt;Now, I don't want to suggest letting managers off the hook entirely here, as they do play an essential role. But rather than directly organizing learning plans and training resources, they can facilitate learning by influencing the culture and values of the development teams. They can create a 'learning culture.' An environment where finding the best solution is encouraged, even when it requires some additional time upfront to learn or research alternatives. An environment that encourages employees to take the initiative to share knowledge. Where 10% time is seen as valuable, not just for paying down technical debt, but because of the learning opportunities it provides. Where teams are trusted to acquire the knowledge they need in the ways that they see fit.&lt;/p&gt;

&lt;p&gt;Regardless of the availability of learning resources or the presence of a learning culture, it is essential to recognize whether you have put your continuing growth on hold. Have you elected to remain stationary while you wait for others to take action for you, or for the right situation to present itself?&lt;/p&gt;

&lt;h3&gt;Placing too much emphasis on learning frameworks and languages&lt;/h3&gt;

&lt;p&gt;The third mistake that I see developers make when thinking about learning is to place all of their focus on tools and technologies. By focusing on the new and shiny, we devalue the technology stack we are already using. Rather than fixing our current problems, we fixate on the idea of rewriting and upgrading to the very newest, holding off learning until this happens. With so many tools we could learn, we underestimate what a narrow part of our discipline learning new technologies covers. When we prepare technologies we want to use in the workplace ahead of time, we risk making bad decisions and introducing inconsistency that makes life at work more difficult.&lt;/p&gt;

&lt;p&gt;At least initially, when learning a new language and framework, we stay at the surface level, memorizing the expected structures and the names of common methods and APIs, copy-pasting pieces together out of tutorials and Stack Overflow comments. It takes a while before we get deep enough to start thinking about structure or design or tackling the tricky problems. As with personal projects, it takes time to reach the threshold where learning a new tool becomes worthwhile. As we flit about, moving from framework to framework, we risk staying at this surface level, feeling like we are being productive, but never really getting deep enough to take anything away from what we are doing. We get stuck, continually learning new things, but never really getting anywhere.&lt;/p&gt;

&lt;p&gt;When we fall into the trap of focusing on the new and shiny, we end up warping our understanding of software development. I have worked with many developers who become frustrated when they don't get to work on the hottest new technologies. They think that working on or mastering even slightly old technologies is a waste of time that will cause them to get left behind and hurt their careers. This attitude can also lead them down the path of thinking the grass is always greener, that the current solution is unfixable or too hard to work with, and that we need to tear it all down and start again. We think that problems are solved by switching to the newest technology rather than by polishing what we already have. When developers only want to learn new technologies, and when the only options at the workplace are slightly old, they decide to give up on learning instead.&lt;/p&gt;

&lt;p&gt;When your team does finally start a rewrite or a new greenfield project, it is worth considering just how useful it is to be "pre-prepared," having learned some frameworks and tools ahead of time. While there is a lot of value in having someone with some experience and the ability to help everyone else get up to speed, this foreknowledge can also risk anchoring decision-making. Frameworks and technologies aren't just a matter of taste. They come loaded with enough differences and considerations that making the correct choice is too crucial to be swayed by what a single person chose to learn in advance. It is vital to choose the most appropriate tool for the problem and the entire team.&lt;/p&gt;

&lt;p&gt;The reality is that no technology stays current for long, and something newer and better will always be around the corner. In any case, as you grow in experience, learning new frameworks and languages becomes trivial. There is little need to prepare in advance once you get to the point where you can become productive with a new tool or technology in a matter of hours or days. Further, you start to recognize that the most transferable pieces are the hard bits: the things you only learn once you have spent the time mastering your current tools.&lt;/p&gt;

&lt;p&gt;Once we recognize these traps, we can realize that learning doesn't require our personal time, our manager to be our teacher, or for us to keep up to date with the latest and greatest technology stacks. Instead, we can take advantage of our co-workers' knowledge, improve our skills in the technologies we are already using, work on our soft skills, or find areas of our current application that need some love and attention. Looking in the right places, we can always find something new to learn or an existing skill to practice and recognize the opportunities that already surround us.&lt;/p&gt;

</description>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>3 Problems to Stop Looking For in Code Reviews</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Fri, 03 Jan 2020 18:00:19 +0000</pubDate>
      <link>https://dev.to/twynsicle/3-problems-to-stop-looking-for-in-code-reviews-44he</link>
      <guid>https://dev.to/twynsicle/3-problems-to-stop-looking-for-in-code-reviews-44he</guid>
      <description>&lt;p&gt;Reviewing code is one of the most valuable tasks we do each day as a software developer. In that one activity, we spot problems and preempt issues before they can grow. We learn new approaches and gain familiarity with features we might have to update or borrow from in the future. We provide guidance and teaching, ensure quality and maintainability, and keep an eye on the future direction of the codebase.&lt;/p&gt;

&lt;p&gt;But with code review doing so much for us, we can overload it, giving it too much responsibility and letting it accumulate tasks and checks that belong elsewhere. By treating code reviews as the final gate work passes through, we risk them also becoming the ambulance at the bottom of the cliff. Code review becomes time-consuming, problems slip through the cracks, and time gets wasted doing rework.&lt;/p&gt;

&lt;h2&gt;The types of issues you could discover&lt;/h2&gt;

&lt;p&gt;Many of these issues only come to light during code review, but proper planning and tooling could catch them much earlier, especially the 'easy ones'.&lt;/p&gt;

&lt;h4&gt;Easy issues&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Lint&lt;/strong&gt; - Simple or common mistakes; not meeting best practice or the recommended style for the language or framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coding Standards&lt;/strong&gt; - Does your team or company have a list of rules or guidelines for writing code? Does this code follow them?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt; - If your coding standard requires comments, are they present? Are the comments correct, and do they add value? Has someone copy-pasted a block of code and forgotten to update the comments?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Style consistency&lt;/strong&gt; - Does this code look like the code around it?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Language conventions&lt;/strong&gt; - Is the code you are writing appropriate for the current language and framework?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code mistakenly left in&lt;/strong&gt; - Unused variables, TODOs that still need doing. Code that has been commented out, but not removed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Formatting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spelling&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;Medium issues&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Readability and maintainability&lt;/strong&gt; - Are you going to be able to understand this in 6 months? Can you maintain and update this code if the original author has left the company? &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Naming&lt;/strong&gt; - Are functions/methods/classes well named? Is it clear what they are doing? Can someone understand what that method does without reading the comments?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coverage&lt;/strong&gt; - Do unit tests cover the new functionality? Can you think of any significant use cases that still need covering?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Niggles&lt;/strong&gt; - Something doesn't feel right. It looks like a hack or workaround. It might be fine, but is worth understanding the thinking behind this piece of code, just in case. At the very least, it might require a comment for the next person who comes along and also thinks this looks a bit odd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approach consistency&lt;/strong&gt; - The same area of code is now using two different approaches to do the same thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt; - Is this going to start having issues once it hits production?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Insufficient logging&lt;/strong&gt; - If this does look a little risky, is there enough logging in place to tell that it has gone wrong?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Common pitfalls&lt;/strong&gt; - Lessons you've learnt the hard way. Perhaps a framework, library or shared component isn't as reliable or useful as it first appears. Perhaps there is a customer with a unique setup that requires additional care.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-inventing the wheel&lt;/strong&gt; - This task could have been completed using some pre-existing code. Perhaps it reimplemented some utility code or could have leveraged a shared library.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge cases&lt;/strong&gt; - Will this code blow up or act in an undefined manner in some circumstances?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complexity&lt;/strong&gt; - The code is too hard to follow. Perhaps everything is crammed into a couple of giant classes or spread too thinly between too many loosely connected classes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The old way of doing things&lt;/strong&gt; - Best practice changes over an application's lifetime. Sometimes, you don't even realize you've done something using an outdated approach until the code review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Public-facing problems&lt;/strong&gt; - Spelling mistakes in your API or not following the guidelines for your public API's URL structure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Inappropriate coupling&lt;/strong&gt; - This piece of code has added a link to somewhere it shouldn't. Perhaps a generic library component is now tightly coupled to one of its consumers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hard Issues
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Architectural problems&lt;/strong&gt; - Before you even get into the detail, the high-level approach taken is incorrect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smells and patterns&lt;/strong&gt; - The code is poorly structured or features a familiar code smell. It might be misapplying a pattern or could benefit from using a pattern. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Doesn't meet the requirements&lt;/strong&gt; - The code is solving the wrong problem or doesn't address what the business requested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gold plating&lt;/strong&gt; - The changeset includes too much extra code to provide future flexibility that you might not need. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introducing a new approach&lt;/strong&gt; - The pull request is introducing new patterns, libraries, or tools to the team or project. Is the approach sane and obvious? Is it documented? Do we all need to start doing things this new way? Is the benefit of the new approach worth having one more way of doing the same thing?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bugs&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Checks that can be spotted by tooling
&lt;/h2&gt;

&lt;p&gt;These low-hanging fruit seem to make up the bulk of code review comments; they include spelling, style and lint issues. While comparatively minor, they are still worth addressing; a consistent visual appearance makes the code easier to read, and lint issues can disguise minor bugs. Automated tools can also point out locations with high complexity, low test coverage and duplicated blocks of code.&lt;/p&gt;

&lt;p&gt;For most languages, frameworks and IDEs, there is a wealth of quality tools that can automate the grunt checks for you. Rather than waiting for code review, these tools highlight issues during development, or just before check-in. For example, our team uses Prettier and ESLint for our front-end work, and a combination of ReSharper and Roslynator for our C# backend. We use ReSharper and a Visual Studio Code plugin for spellchecking, and SonarQube's static analysis, complexity analysis and code duplication checks in our CI/CD pipelines. Automating these checks means that feedback arrives during development, avoiding the back and forth of highlighting these issues during code review. As a reviewer, it means you're not getting bogged down in the minor issues and can focus on the rest of the code review. &lt;/p&gt;

&lt;p&gt;This isn't to say that you should stop looking for lint and style issues altogether. Sometimes scripts aren't run, or you find a new issue that isn't covered by your automation. Occasionally, someone installs a new tool that conflicts with your established style. But rather than just fixing the current instance of the problem, take steps to stop it from happening again. Tweak your style settings, add quality-checking tools to check-in or your CI/CD pipelines, adopt new tools where appropriate, and ensure everyone's tools are running with the same configuration.&lt;/p&gt;
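&lt;p&gt;For instance, committing a shared lint configuration to the repository keeps everyone's tools aligned. A minimal &lt;code&gt;.eslintrc.json&lt;/code&gt; might look something like the following; the specific rules are illustrative, not a recommendation:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "extends": ["eslint:recommended", "prettier"],
  "rules": {
    "eqeqeq": "error",
    "no-unused-vars": "warn"
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Extending &lt;code&gt;prettier&lt;/code&gt; (via eslint-config-prettier) disables the ESLint rules that would conflict with Prettier's formatting, so the two tools don't fight each other.&lt;/p&gt;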

&lt;h2&gt;
  
  
  Problems with the overall approach
&lt;/h2&gt;

&lt;p&gt;This category includes problems with the broad strokes of a feature, rather than the details of its implementation. The feature may not fit into your application's intended architecture, be using an outdated approach, or create a new approach when it should have leveraged existing code. The developer may have misunderstood the requirements and developed the wrong solution. Perhaps you are seeing something entirely new for the first time in this code review.&lt;/p&gt;

&lt;p&gt;For the big problems, code review happens too late in the development process. The work has already been done. To rework or to start again could be very time-consuming. The reviewers have a difficult choice: risk the current release while the work is redone, or let it through and suffer the additional technical debt. &lt;/p&gt;

&lt;p&gt;Rather than waiting until code review to identify these problems, you should be trying to identify them earlier in the feature's development. Do you have juniors, intermediates or new hires who need a bit more guidance? Perhaps your team needs encouragement to ask questions when they're unsure, rather than guessing and delaying vital questions until the code review stage. If you're in an agile environment, perhaps you need more detail established upfront in sprint planning. Alternatively, you might need less detail in sprint planning where you might not have the full picture and instead spend that time on spikes, proofs of concept or design discussions. Two or three developers spending 15 minutes in front of a whiteboard can avoid days wasted going in the wrong direction. &lt;/p&gt;

&lt;p&gt;Of course, a balance needs to be struck. Most work doesn't need early checks, and it's valuable to keep moving, delaying some questions until code review. Getting bogged down, paralyzed by the feeling that you need to ask for permission, kills your momentum; but so does pausing development on your second task while you redo the first.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pitfalls
&lt;/h2&gt;

&lt;p&gt;Technical debt can cause enough problems on its own, but a dangerous variant is when it leaks out of the codebase and into the developers: when, rather than fixing a problem or trap, someone decides that it can be left to the code reviewers to prevent. These issues are often non-obvious, and developers will either forget or be unaware of the original problem. As a code reviewer, you might even start to build a checklist of frequent problems you need to ensure aren't present.&lt;/p&gt;

&lt;p&gt;Issues that are surprising, or require a good knowledge of the history of the application, are often also the hardest issues to spot while reviewing. Sometimes they can be severe enough to prompt substantial rework if they haven't been taken into account earlier. Having to search for hidden but essential issues also places undue responsibility on the code reviewer. Particularly if you don't have QA, it becomes the code reviewers' responsibility to ensure all possible use cases are covered. In one former role, no one wanted to review particular areas of the code base as the chance of missing something critical was simply too high. &lt;/p&gt;

&lt;p&gt;As with the issues in the categories above, code reviews are not the right place to address these problems. It's seldom as simple as not having technical debt in the first place, but there are steps you can take to make unexpected issues more visible. A powerful method of achieving this is writing custom lint checks, so those common pitfalls are treated as compilation or lint errors by your IDE or build pipeline. You could have problematic test cases already set up in the development environment's test data to make them more prominent, or you could thoroughly cover the situation with its own set of integration tests. &lt;/p&gt;
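&lt;p&gt;On a C# codebase, one of those custom checks might be a small Roslyn analyzer. The sketch below flags direct use of &lt;code&gt;DateTime.Now&lt;/code&gt;; the pitfall, diagnostic id and message are purely illustrative assumptions, but the analyzer shape is the standard one:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class DateTimeNowAnalyzer : DiagnosticAnalyzer
{
    // Hypothetical team rule: use an injected clock, not DateTime.Now.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        "TEAM001", "Avoid DateTime.Now", "Use the injected clock instead of DateTime.Now",
        "Pitfalls", DiagnosticSeverity.Error, isEnabledByDefault: true);

    public override ImmutableArray&amp;lt;DiagnosticDescriptor&amp;gt; SupportedDiagnostics
        =&amp;gt; ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.RegisterSyntaxNodeAction(AnalyzeNode, SyntaxKind.SimpleMemberAccessExpression);
    }

    private static void AnalyzeNode(SyntaxNodeAnalysisContext context)
    {
        var access = (MemberAccessExpressionSyntax)context.Node;
        // A production analyzer would resolve the symbol via the semantic model;
        // a plain text match keeps the sketch short.
        if (access.ToString() == "DateTime.Now")
        {
            context.ReportDiagnostic(Diagnostic.Create(Rule, access.GetLocation()));
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Shipped as a NuGet package or project reference, the diagnostic then shows up in the IDE and fails the build, so the pitfall no longer relies on a reviewer's memory.&lt;/p&gt;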

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As a reviewer, code reviews become so much easier when you can trust the big picture, aren't distracted by a horde of tiny issues, and don't have an ever-growing checklist of cases you must not let through. You can spend more time thinking about the future impact of this changeset, its maintainability and readability, and opportunities to teach and learn.&lt;/p&gt;

&lt;p&gt;As the reviewee, the right tooling lets you address most of the potential issues before you even create the pull request. Clarifying your approach earlier lets you proceed with the confidence that you are doing things correctly. Having edge cases surface sooner means you can spend less time worrying about what you don't know. With more focused reviewers, the feedback you receive is more useful and informative, and less likely to be a deluge of minor issues, surprises or requests for significant changes.&lt;/p&gt;

</description>
      <category>codereview</category>
      <category>codequality</category>
      <category>productivity</category>
      <category>coding</category>
    </item>
    <item>
      <title>Bugs Aren't Just Mistakes</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Tue, 17 Dec 2019 17:06:57 +0000</pubDate>
      <link>https://dev.to/twynsicle/bugs-aren-t-just-mistakes-223b</link>
      <guid>https://dev.to/twynsicle/bugs-aren-t-just-mistakes-223b</guid>
      <description>&lt;p&gt;In my first job as a developer, the prevailing attitude from management was "If developers would just be more careful, we wouldn't have any bugs." The other prevailing attitude was, naturally, "Also, work as fast as possible." Unsurprisingly, in this environment, bugs were commonplace. This attitude came to a head when, despite months of warnings about how flawed our deployment process was, a junior developer took down production for half a day. Rather than seeing the error as caused by an underlying problem, management chose to harass a capable young developer out of his job. Not only was this a bad outcome for everyone involved, it meant that they also missed the opportunity to prevent the issue from happening again. &lt;/p&gt;

&lt;p&gt;Most companies aren't so punitive and petty. We don't assign blame or punish people for innocent and inevitable errors. But still, when all we do is fix a bug, close the ticket and stop thinking about it, we're treating the bug as a mistake. We're implicitly labelling the bug as a developer error.&lt;/p&gt;

&lt;p&gt;By leaving the underlying cause untreated, we end up fixing the same issues again and again. The bug count never decreases for long. Both the underlying issues and the symptoms start to pile up, making it harder to get your work released. Bugs consume your time and your reputation. &lt;/p&gt;

&lt;p&gt;The alternative is to view bugs as symptoms of a deeper cause: faulty process, lack of automation or tooling, insufficient knowledge sharing, time spent in the wrong places. Each bug happened and made its way into the release candidate or release only because something let it through. &lt;/p&gt;

&lt;p&gt;It's challenging to get enough of a picture of all of the problems to be able to spot patterns. Sometimes you have a slow-leaking fault, causing significant, but occasional problems. It takes a shift in mindset from multiple people, switching from a reactionary stance to understanding the bigger picture of teams, process, tools and interactions. You need time, coordination between teams and room to experiment. Some problems need to be escalated, and gathering the data to make your case can be difficult. &lt;/p&gt;

&lt;h3&gt;
  
  
  Potentially useful categories for assessing bugs
&lt;/h3&gt;

&lt;p&gt;By first grouping bugs, you can start to understand the underlying problems. For example, if you find that missed requirements are a problem, you can then find out at what stage they were lost: did they not get discovered, not get communicated to the team, or were they written down but dev and QA didn't check for them?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Misconfiguration&lt;/strong&gt; - Some required configuration or setup never got deployed to an environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Missing/incorrect requirements&lt;/strong&gt; - The feature was developed appropriately according to specification, but initial requirements were incorrect. Perhaps the end-user is using the feature differently than anticipated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge cases&lt;/strong&gt; - The new work has not accounted for uncommon situations. Your application might throw exceptions or behave in undefined ways. You haven't considered customers with unusual setups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Flaky/problem components&lt;/strong&gt; - A part of your application has a high level of complexity and technical debt. It might be poorly understood. All changes to this component require a high level of care.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Third-party issues&lt;/strong&gt; - Your team could work in conjunction with a product actively maintained by an external team. Bugs in that team's work are causing problems in your release. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature requests as bugs&lt;/strong&gt; - The stakeholders might have changed their mind. There is a disagreement about the intended functionality. Someone is treating bug tickets as the fast lane for getting new work done. The business changed its mind about an accepted edge case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Merge issues&lt;/strong&gt; - The feature was working at the point in time it made it into master. But, by the time you cut the release, the feature had been broken by a subsequent changeset.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make sure you're solving the right problem
&lt;/h3&gt;

&lt;p&gt;Sometimes, the level of bugs gets so high that everyone recognizes something needs to be done. You gather everyone together and come up with a dozen different possible solutions.  However, not all solutions are valid or helpful. Limited viewpoints mean everyone sees a different part of the picture. Leaders are too far removed from the day-to-day realities to have a grasp of the problems. People come in with their own biases based on past negative experiences or future agendas. Some haven't been experiencing the problem, so don't understand how it could be an issue.&lt;/p&gt;

&lt;p&gt;With such a variety of differing opinions, how do you proceed? Without understanding the cause, or having any data, you end up going with the best guess. Choosing the wrong solution does little to help your problems. Worse, you could take resources away from the behaviours that are doing some good. The wrong solution costs you extra time and paralyzes teams with additional steps and processes.&lt;/p&gt;

&lt;p&gt;For example, automated UI tests don't help if you're not getting your requirements correct upfront. You could even end up coding the UI automation to look for the wrong result. UI automation is not a practical approach for enumerating all of your edge cases, and the time spent on UI automation is time not spent on unit tests that could be covering these situations. Increasing time spent testing each feature before you create each release won't identify missing environment configuration or settings. And it won't prevent other teams from accidentally rolling back your work. Doing away with branching isn't going to catch regressions in a product developed by an external team. Classifying a bug as an accepted edge case and setting up infrastructure, process and testing to ensure it doesn't get any worse could be more time consuming than just fixing the bug.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trimming the fat
&lt;/h3&gt;

&lt;p&gt;Treating bugs as issues that could happen at any time for any reason requires a high level of caution. In response to past quality issues, the development team ends up with a lot of added checks, processes, additional testing and signoffs.&lt;/p&gt;

&lt;p&gt;Rarely do we go back and assess whether the additional effort has been adding value. Nor do we acknowledge the impact it has on teams in terms of momentum, throughput, time lost, context shifting, and having work stuck in long-lived branches waiting for signoff. &lt;/p&gt;

&lt;p&gt;The work we do to better understand where our issues are coming from also shows us where we are not having issues. On top of working to prevent the causes of the bugs we are having, we can strip away the prevention efforts that are giving us no benefit.&lt;/p&gt;

&lt;h3&gt;
  
  
  The solutions might already be available
&lt;/h3&gt;

&lt;p&gt;Six months before that faulty deployment took down our application for half a day, I had built a lightweight tool for deploying changes to production according to the company's business rules: a couple of PowerShell scripts wrapped in a user-friendly UI. The tool, which deployed in a tenth of the time, could have helped other teams avoid the underlying issues in a complicated deployment process.&lt;/p&gt;

&lt;p&gt;Once you understand the types of bugs that are slipping through, you can see which of your teams just aren't having those same problems everyone else is. They might be doing something that helps pin down requirements upfront, or added a step to catch configuration issues early. They might have adopted a tool, technique or set of patterns to help them. Identifying and spreading something already in use can be a lot easier and safer than trying to adopt a new solution.&lt;/p&gt;

</description>
      <category>management</category>
      <category>productivity</category>
      <category>testing</category>
    </item>
    <item>
      <title>3 Patterns for Reducing Duplication in Your Unit Tests</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Mon, 18 Nov 2019 17:55:56 +0000</pubDate>
      <link>https://dev.to/twynsicle/3-patterns-for-reducing-duplication-in-your-unit-tests-34g5</link>
      <guid>https://dev.to/twynsicle/3-patterns-for-reducing-duplication-in-your-unit-tests-34g5</guid>
      <description>&lt;p&gt;Our team used to have a lot of difficulties with our unit tests. They were slow to write, slow to run and time-consuming to maintain. The tests were fragile and prone to breaking. Small changes to our code could lead to hours fixing tests all across our entire suite. The tests were inconsistently designed and required many different approaches to fix. &lt;/p&gt;

&lt;p&gt;Our unit tests had become such a hassle that when developing new features, we were spending more time fixing up existing tests than we spent creating new tests.&lt;/p&gt;

&lt;p&gt;Realizing that we needed to turn this around, and after some investigation, we determined that the primary cause of our troubles was code duplication. Our tests were poorly structured and too concerned with creating the same objects over and over again. We researched, discussed and experimented and settled on three patterns to help us improve our unit test setup: the Object Mother, Test Class Builder and Test Fixture. &lt;/p&gt;

&lt;p&gt;The following example demonstrates the difference these patterns made to our unit tests.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Before&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;AddToCart_AddingMultipleItems_TotalPriceIsCorrect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"customerId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;LoyaltyStatus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;None&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;product1&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Product&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Colour&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;product2&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Product&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Colour&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;discountService&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Mock&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDiscountService&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
    &lt;span class="n"&gt;discountService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Setup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetDiscounts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IsAny&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&amp;gt;())).&lt;/span&gt;&lt;span class="nf"&gt;Returns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cartRepository&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Mock&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ICartRepository&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
    &lt;span class="n"&gt;cartRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Setup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProductToCart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IsAny&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&amp;gt;())).&lt;/span&gt;&lt;span class="nf"&gt;Returns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cart&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Cart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;customer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;discountService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
        &lt;span class="n"&gt;cartRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Mock&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ILogManager&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;().&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;product1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;product2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;Assert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AreEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TotalPrice&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// After&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;AddToCart_AddingMultipleItems_TotalPriceIsCorrect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cart&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Fixture&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;GetSut&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProductBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;WithPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProductBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;WithPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

    &lt;span class="n"&gt;Assert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AreEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;10.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TotalPrice&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
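&lt;p&gt;To make the rewritten test concrete, here is a sketch of what a builder like &lt;code&gt;ProductBuilder&lt;/code&gt; might look like; the default values are illustrative assumptions, not code from our actual suite:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;public class ProductBuilder
{
    // Sensible defaults, so each test only specifies the values it cares about.
    private string _id = "id";
    private decimal _price = 1.00m;
    private Size _size = new Size("id", "name");
    private Colour _colour = new Colour("id", "name");

    public ProductBuilder WithPrice(decimal price)
    {
        _price = price;
        return this; // returning the builder allows fluent chaining
    }

    public Product Build()
    {
        return new Product(_id, _price, _size, _colour);
    }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Because &lt;code&gt;WithPrice&lt;/code&gt; returns the builder, calls chain fluently, and adding a new constructor parameter to &lt;code&gt;Product&lt;/code&gt; only requires updating the builder, not every test.&lt;/p&gt;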



&lt;h2&gt;
  
  
  What was getting duplicated?
&lt;/h2&gt;

&lt;p&gt;From our example above, we can identify the following three categories of objects whose creation was being duplicated across test cases. Each type fulfils a different role in our tests and correspondingly led to a separate creational pattern being adopted to assist its role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Relevant objects&lt;/strong&gt; - These objects contain at least one property that is relevant to the test scenario. In our example, we want to specify each product's price within the test so that the verification is evident to the reader.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Irrelevant objects&lt;/strong&gt; - These objects have no bearing on the test; however, we often end up needing to create them as they are required parameters to construct relevant objects or call methods. Examples in the above test would be the &lt;code&gt;Customer&lt;/code&gt;, &lt;code&gt;Size&lt;/code&gt; and &lt;code&gt;Colour&lt;/code&gt; objects.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;system under test&lt;/strong&gt;, often abbreviated to the "SUT", is the class that we are testing. In our example, this is the &lt;code&gt;Cart&lt;/code&gt; class, and much of the test method is spent constructing it and setting up its dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The problems with duplication in test methods
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Duplication hides intent
&lt;/h4&gt;

&lt;p&gt;Our example test is 11 lines, but only 3 of those lines are to do with our test scenario—adding two objects to the cart and asserting the total price. The other 8 lines are all spent setting up our test objects. These additional lines make it harder to find the meaningful lines amongst the noise. Further, duplication makes it difficult to determine the difference between the tests in the file. It is easy for as little as a one-character difference between two tests' setup phases to result in opposite outcomes in the verify phase. You want to make it easy for future maintainers to figure out why some tests are passing while others are failing.&lt;/p&gt;

&lt;p&gt;If you are creating your objects within your test cases, you aren't taking advantage of wrapping object creation in intent-revealing names. Which of the following is easier to understand?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="k"&gt;event&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DateTime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Now&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"event"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// or &lt;/span&gt;
&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="k"&gt;event&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_eventMother&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateEventWithoutStaffMember&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  The tests become more concerned with creating objects than testing functionality
&lt;/h4&gt;

&lt;p&gt;Each unit test should test a minimal, isolated unit, keeping tests simple and fast and reducing false positives. When your tests are responsible for constructing multiple required objects and their dependencies, they know too much. Each created object, dependency and 'new' keyword is a potential point of failure. Your tests may fail to compile or fail to run because of an unrelated change to a dependency. A small change might require hours updating tests across your entire test suite. While a feature is still in development, the tests undergo constant churn as your objects are in flux.&lt;/p&gt;

&lt;h4&gt;
  
  
  New tests are hard to write
&lt;/h4&gt;

&lt;p&gt;Excessive duplication indicates test code that lacks structure. Developers write tests by finding a similar test from which to copy-paste object creation and dependency setup. You end up spending more time thinking about how to set up your test than about the case you are trying to test.&lt;/p&gt;

&lt;h4&gt;
  
  
  Duplicated code is prone to variation
&lt;/h4&gt;

&lt;p&gt;As the test cases get updated over time, the code that started from copy-pasting between test methods starts to vary. Not only do future alterations need to be performed in too many places, but each of those locations requires a separate approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: Object Mother
&lt;/h2&gt;

&lt;p&gt;The simplest of the three patterns, the Object Mother is a collection of test-ready objects covering the typical scenarios in which your classes can be configured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ObjectCreation/Mothers/CustomerMother.cs&lt;/span&gt;
&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CustomerMother&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;ICustomer&lt;/span&gt; &lt;span class="nf"&gt;CreateCustomer&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"firstName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"lastName"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;ICustomer&lt;/span&gt; &lt;span class="nf"&gt;CreateCustomerWithSilverLoyaltyStatus&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"firstName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"lastName"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;LoyaltyStatus&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;LoyaltyStatus&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Silver&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Tests/CartTests.cs&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;TestClass&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CartTests&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;CustomerMother&lt;/span&gt; &lt;span class="n"&gt;_customerMother&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;CustomerMother&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;TestMethod&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;AddToCart_AddingMultipleItems_TotalPriceIsCorrect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;customer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_customerMother&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateCustomer&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="c1"&gt;// ... snip ...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  When to use it
&lt;/h4&gt;

&lt;p&gt;The Object Mother pattern has two main uses: first, when we need an object as a required parameter but its contents are irrelevant to our test; and second, when we want a variation of an object that can be described simply and doesn't require further customization. For example, &lt;code&gt;_categoryMother.CreateCategoryWithSimpleDiscount()&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Provides a single location for creating objects across the test suite, promoting reuse.&lt;/li&gt;
&lt;li&gt;Reduces the number of locations your tests construct objects.&lt;/li&gt;
&lt;li&gt;Allows object creation to be moved behind intent-revealing names.&lt;/li&gt;
&lt;li&gt;Indicates that the returned objects aren't significant to the test.&lt;/li&gt;
&lt;li&gt;Allows you to name and reuse common or important edge cases.&lt;/li&gt;
&lt;li&gt;Can be a form of documentation enumerating the possible ways that an object, or collection of objects, can be set up.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Nesting
&lt;/h4&gt;

&lt;p&gt;Object mothers can be nested, allowing you to set up more complicated scenarios. In this example, we want a category with a discount, but we still aren't concerned about the contents of that discount.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ObjectCreation/Mothers/DiscountMother.cs&lt;/span&gt;
&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DiscountMother&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IDiscount&lt;/span&gt; &lt;span class="nf"&gt;CreateDiscount&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Discount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"discountId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ObjectCreation/Mothers/CategoryMother.cs&lt;/span&gt;
&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CategoryMother&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IDiscount&lt;/span&gt; &lt;span class="nf"&gt;CreateCategory&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Category&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"categoryId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;IDiscount&lt;/span&gt; &lt;span class="nf"&gt;CreateCategoryWithSimpleDiscount&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;discountMother&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DicountMother&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Category&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"categoryId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;Discount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;discountMother&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;CreateDiscount&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Pattern 2: Test Object Builder
&lt;/h2&gt;

&lt;p&gt;Test Object Builders are responsible for building instances of an object with default properties while allowing relevant properties to be overridden using a fluent syntax. They resemble the classic Builder pattern, except that they also provide a default value for every property.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ObjectCreation/Builders/ProductBuilder.cs&lt;/span&gt;
&lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ProductBuilder&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;decimal&lt;/span&gt; &lt;span class="n"&gt;_price&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;_categoryId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;ProductBuilder&lt;/span&gt; &lt;span class="nf"&gt;WithPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;decimal&lt;/span&gt; &lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_price&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;price&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;ProductBuilder&lt;/span&gt; &lt;span class="nf"&gt;WithCategoryId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;categoryId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_categoryId&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;categoryId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Product&lt;/span&gt; &lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Product&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="s"&gt;"id1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="n"&gt;_price&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;_categoryId&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="s"&gt;"categoryId"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Size&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; 
            &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Colour&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Tests/CartTests.cs&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;TestClass&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CartTests&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;TestMethod&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;AddToCart_AddingMultipleItems_TotalPriceIsCorrect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// ... snip ...&lt;/span&gt;
        &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProductBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;WithPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
        &lt;span class="n"&gt;cart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProduct&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProductBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;WithPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
        &lt;span class="c1"&gt;// ... snip ...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  When to use it
&lt;/h4&gt;

&lt;p&gt;Like the Object Mother, the Test Object Builder is for creating simple objects. However, the Test Object Builder pattern is better suited for when you want to set properties on the resultant object.&lt;/p&gt;

&lt;p&gt;In our cart example, we want to specify the price of each item in the test, rather than have it hidden away in a helper method. Having the value specified makes the test scenario clear to the reader, allowing them to follow along &lt;code&gt;5.00 + 5.00 = 10.00&lt;/code&gt;. The Test Object Builder pattern allows us to specify just the price, without having to configure any of the other properties which aren't relevant to what we are testing.&lt;/p&gt;
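
&lt;p&gt;Filled in, the whole test might look something like the sketch below. The parameterless &lt;code&gt;Cart&lt;/code&gt; constructor and the &lt;code&gt;TotalPrice&lt;/code&gt; property are illustrative assumptions, not part of the examples above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Tests/CartTests.cs -- sketch only; Cart() and TotalPrice are assumed
[TestMethod]
public void AddToCart_AddingMultipleItems_TotalPriceIsCorrect() {
    var cart = new Cart();
    cart.AddProduct(new ProductBuilder().WithPrice(5.00m).Build());
    cart.AddProduct(new ProductBuilder().WithPrice(5.00m).Build());
    Assert.AreEqual(10.00m, cart.TotalPrice);
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;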

&lt;h4&gt;
  
  
  Advantages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Makes it clear what properties of the object are meaningful to the test, clarifying intent.&lt;/li&gt;
&lt;li&gt;Hides properties of the object that aren't relevant.&lt;/li&gt;
&lt;li&gt;Provides a single location for creating each object across all tests, promoting reuse.&lt;/li&gt;
&lt;li&gt;Can start simple and be extended as you need to customize additional properties, without affecting existing test classes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Object Mother vs Test Object Builder?
&lt;/h4&gt;

&lt;p&gt;On the face of it, these two patterns are very similar; both are for creating simple objects and moving that creation to a single location. The difference lies in how much you need to customize the objects you are creating. For example, &lt;code&gt;_categoryMother.CreateCategoryWithId2AndParentId1()&lt;/code&gt; starts getting very verbose and means you need to start creating many Object Mother methods.&lt;/p&gt;

&lt;p&gt;On the other hand, in straightforward cases, you could skip &lt;code&gt;_productMother.CreateProduct()&lt;/code&gt; and use &lt;code&gt;new ProductBuilder().Build()&lt;/code&gt; directly. The Object Mother excels when you have multiple known setups, or setups involving multiple classes, though often the choice is just a matter of personal taste.&lt;/p&gt;
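
&lt;p&gt;A short sketch of that trade-off, using the builders and mothers from the examples above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;// A default object: either pattern works, so it is largely a matter of taste
var productFromMother = _productMother.CreateProduct();
var productFromBuilder = new ProductBuilder().Build();

// Tweaking one property: the builder stays terse
var cheapProduct = new ProductBuilder().WithPrice(1.00m).Build();

// A named, multi-object scenario: the mother reads better
var category = _categoryMother.CreateCategoryWithSimpleDiscount();
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;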

&lt;h2&gt;
  
  
  Pattern 3: Test Fixture
&lt;/h2&gt;

&lt;p&gt;The Test Fixture is a pattern for creating the class we are testing along with its dependencies. The pattern moves the setup into a private class and exposes methods that allow the tests to customize the dependencies.&lt;/p&gt;

&lt;p&gt;In the following example, the creation of the cart class and mocks of its dependencies have shifted from the test method into the &lt;code&gt;Fixture&lt;/code&gt; class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Tests/CartTests.cs&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;TestClass&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CartTests&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;CustomerMother&lt;/span&gt; &lt;span class="n"&gt;_customerMother&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;CustomerMother&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;TestMethod&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;AddToCart_WithApplicableDiscount_TotalPriceIsCorrect&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;product&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ProductBuilder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;WithId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"productId"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;WithPrice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;5.00&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;appliedDiscount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;ProductDiscountBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithProductId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"productId"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;WithFlatDiscount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1.00&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;fixture&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Fixture&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="n"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WithGetDiscountResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDiscount&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;appliedDiscount&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="n"&gt;Cart&lt;/span&gt; &lt;span class="n"&gt;sut&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fixture&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetSut&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

        &lt;span class="c1"&gt;// ... snip ...&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Fixture&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;BaseTestFixture&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ICart&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;IList&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDiscount&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_discountResponse&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Array&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Empty&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDiscount&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;

        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;WithGetDiscountResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;IList&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDiscount&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;discountResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_discountResponse&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;discountResponse&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="n"&gt;ICart&lt;/span&gt; &lt;span class="nf"&gt;GetSut&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;discountService&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Mock&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;IDiscountService&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
            &lt;span class="n"&gt;discountService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Setup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetDiscounts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IsAny&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&amp;gt;())).&lt;/span&gt;&lt;span class="nf"&gt;Returns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;discountResponse&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

            &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;cartRespository&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Mock&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ICartRespository&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
            &lt;span class="n"&gt;cartRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Setup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;AddProductToCart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;It&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;IsAny&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&amp;gt;())).&lt;/span&gt;&lt;span class="nf"&gt;Returns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;Cart&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;discountService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cartRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;Mock&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;ILogManager&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;().&lt;/span&gt;&lt;span class="n"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  When to use it
&lt;/h4&gt;

&lt;p&gt;Use a Test Fixture to create the class that each file's test methods are testing, together with mocks of its dependencies.&lt;/p&gt;

&lt;h4&gt;
  
  
  Advantages
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Moves a significant source of duplicated code out of each test method.&lt;/li&gt;
&lt;li&gt;All setup of the class under test is in a single location, so any changes to the constructor or dependencies only need to happen in one location.&lt;/li&gt;
&lt;li&gt;Places modifications to the SUT's dependencies behind descriptive, intent-revealing names.&lt;/li&gt;
&lt;li&gt;The setup of common or complex dependencies can be shared by multiple fixtures by being placed in a base fixture class.&lt;/li&gt;
&lt;/ul&gt;
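
&lt;p&gt;The &lt;code&gt;BaseTestFixture&lt;/code&gt; that the fixture above inherits from isn't shown in the example; a minimal version might look like the following, with the shared logger mock included purely as an illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;// ObjectCreation/Fixtures/BaseTestFixture.cs -- a minimal sketch
abstract class BaseTestFixture&amp;lt;T&amp;gt; {
    // Dependencies common to many fixtures can be shared here, e.g. a logger mock.
    protected Mock&amp;lt;ILogManager&amp;gt; LogManager { get; } = new Mock&amp;lt;ILogManager&amp;gt;();

    // Each fixture knows how to construct its own system under test.
    public abstract T GetSut();
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;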

&lt;h2&gt;
  
  
  Disadvantages
&lt;/h2&gt;

&lt;p&gt;All three patterns share the same disadvantage: they all introduce additional boilerplate and structure that takes more time to set up at first. Overall, however, the benefits they offer to maintainability are a net gain. &lt;/p&gt;

&lt;p&gt;Despite the initial overhead of creating boilerplate, once our team adopted these patterns, we found that our tests became more straightforward to maintain and write. Consequently, we started writing more tests and spending less time debugging broken tests. &lt;/p&gt;

</description>
      <category>testing</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>A guide to reducing development wait time Part 1: Why?</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Wed, 06 Nov 2019 17:02:33 +0000</pubDate>
      <link>https://dev.to/twynsicle/a-guide-to-reducing-development-wait-time-part-1-why-29n9</link>
      <guid>https://dev.to/twynsicle/a-guide-to-reducing-development-wait-time-part-1-why-29n9</guid>
      <description>&lt;p&gt;As developers, one of our primary tasks is waiting. We wait for our code to compile, our tests to run, to verify our work, we wait for our application to deploy and load, and to navigate to the page we are changing.&lt;/p&gt;

&lt;p&gt;I have joined many teams that were either stuck with or content with a lot of waiting. Often, I found that these wait times were very fixable, and I was able to make significant improvements. This series will cover some of the strategies I use when trying to improve build times and workflows. But to start, in part 1, I want to address the question: 'why should we care about slow builds and workflows?'.&lt;/p&gt;

&lt;p&gt;Spending time to save a few seconds might not seem worthwhile. However, those gained seconds can slowly accumulate to get you over some thresholds that can trigger a dramatic change in how you work. You can reduce context shifts, enter a flow state, free up your mental space to focus on your actual tasks, and make the best of your limited amount of time each day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thresholds
&lt;/h3&gt;

&lt;p&gt;In my experience, the time we spend waiting for our builds to compile and our application to load can fall into three distinct categories. In the first category, the wait is unobtrusive; we have an unbroken chain of thought and we switch seamlessly between making changes and trialling them. In the second category, we are waiting long enough that we notice, and our train of thought is broken. We notice the boredom and feel the temptation to context shift. In the final category, we predict that the wait will be too long and alt-tab to find something else to do while we wait.&lt;/p&gt;

&lt;p&gt;The size of these three thresholds varies from person to person, as well as from day to day. I tend to find that I will notice a wait longer than 15 seconds and feel the urge to context shift around 45 seconds.&lt;/p&gt;

&lt;p&gt;Each category forms a tipping point of sorts; small improvements add up to a much more dramatic shift.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reduce context shifting
&lt;/h3&gt;

&lt;p&gt;When faced with a long wait, we tend to start doing multiple tasks in parallel. We might start trying to chip away at the rest of the tasks we have to do, or perhaps just try to stave off boredom. We'll alternate between writing code and reading through a pull request, email or Slack, or checking our preferred blog site. Usually, switching to a second task is a false economy; we're much worse at multitasking than we think we are. We lose energy each time we need to stop and put our heads back into a different task, and we'll feel a lot more exhausted at the end of a day full of context shifting.&lt;/p&gt;

&lt;p&gt;Distraction also makes it challenging to realise that the action we were waiting on has completed. Even once we do recognise we can return to our original task, we're conflicted: what do we finish first? Now, our multi-step process to check our change, interspersed with waiting, takes much longer than the sum of its parts as we keep losing and regaining focus. Build, wait, deploy, wait, load, wait, login, wait, navigate, wait, check your change. &lt;em&gt;Repeat.&lt;/em&gt; The more steps you need to wait for, the more times you context shift and lose that bit more energy.&lt;/p&gt;

&lt;p&gt;We don't wait until we've hit our context-shifting threshold before deciding to pick up a second task. Instead, we change activities based on our prediction that the upcoming wait will be too long. This means that when build times are variable, we will decide to context switch based on the worst-case scenario for that build.&lt;/p&gt;

&lt;p&gt;As wait times reduce, the temptation to context shift reduces. We might get to the point where we still notice the wait but can tolerate it without switching tasks. All of those minor background tasks are still important, but it's better when we can dedicate the time to them, rather than performing them in the background alongside other tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enter flow
&lt;/h3&gt;

&lt;p&gt;Commonly referred to as being 'in the zone', a flow state occurs when the task we are doing has just the right balance of challenge and interest. Not so easy that we get bored, but not so difficult that we experience anxiety and uncertainty. During a flow state, we become so engaged in what we are doing that we stop noticing the passage of time and become solely focused on the task at hand. Being in a flow state is both a pleasant and productive experience.&lt;/p&gt;

&lt;p&gt;Though commonly associated with video games, flow is also achievable while writing code. However, it does take some effort to arrange and enter. If you don't have all the information at hand or don't know how to proceed, your task becomes too difficult. If you're writing too much boilerplate, the task becomes too easy. Likewise, if you keep having to stop and wait for builds, you get bored, lose engagement and drop out of the flow state. Idle periods are an obstacle to maintaining a good flow.&lt;/p&gt;

&lt;p&gt;There are already a lot of good articles about entering and maintaining a flow, so I won't repeat them here, except to say that wait times are an obstacle to flow and that flow is a reward for creating a smooth workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mental Overhead
&lt;/h3&gt;

&lt;p&gt;Verifying a change can take a long time, not just because of waiting for compilers and loading, but because the steps involved are too manual. Rather than enshrining the extra tasks and decisions in code, the development team has to perform the actions themselves. I like to think of this as 'mental debt'; like technical debt, it gathers over time and has a cost that needs to be paid down. However, rather than living in your codebase, mental debt lives in your developers' memory and your team's processes.&lt;/p&gt;

&lt;p&gt;These manual steps can take a lot of forms: config files to be tweaked, settings in your admin page to update, tables, caches and logs to blow away, scripts to run, files to copy, special use cases to remember to check, deciding which set of test data to use.&lt;/p&gt;

&lt;p&gt;This overhead causes us a lot of problems. It slows down our builds; it consumes our focus, memory and other mental resources; it increases the number of decisions we need to make; it makes it harder to onboard new staff and harder to change an application we haven't touched for months. The complexity means we sometimes pull the wrong lever or forget to turn a dial, requiring us to restart the build.&lt;/p&gt;

&lt;p&gt;The more you can do without thinking or needing to make decisions, the more space and energy you have to focus on work that matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don't waste your good hours
&lt;/h3&gt;

&lt;p&gt;The focus and attention we require for writing code and solving problems consume a lot of energy. We might be at the office for 8 hours, but we usually can't sustain our pace for the entire duration. We may have as little as 3 or 4 hours' worth of quality effort available per day. The remainder we fill with planning, requirements gathering, communication, less challenging programming, learning, and all the other bits and pieces that go into being a software engineer.&lt;/p&gt;

&lt;p&gt;Waiting means that not only are we draining our limited reserves by context switching, but we are losing time from the best parts of the day. If your build times are costing you half an hour a day, that's not coming out of your eight-hour workday, but from your much shorter and more valuable high-quality work period.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Slow builds snowball. Once a build is slow, it's easy to accidentally add more time to your build. Adding 5 seconds to a 45-second build might not seem so bad until it happens again, and again. &lt;/li&gt;
&lt;li&gt;Any improvement you make is shared with everyone else working on that project.&lt;/li&gt;
&lt;li&gt;Waiting for builds to complete is frustrating and tedious.&lt;/li&gt;
&lt;li&gt;Faster builds mean you can check your work more frequently. Rather than trying to figure out which of a half-dozen batched changes has broken everything, you know your most recent change is at fault.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Sometimes slow builds are part of being a developer: legacy applications, parts of our ecosystem that rarely get touched, verifying complicated edge cases. Too often, however, I see teams accept slow builds, test runs, workflows and processes as part of life. Waiting does not have to be an integral part of being a developer. A little bit of investigation and creativity can have considerable benefits to our quality of life, engagement and our productivity. We can reduce context shifting, work within a flow state, reduce our mental overhead, and make the best of our limited human capacity.&lt;/p&gt;

&lt;p&gt;In part 2, without being specific to any technology or platform, I will be covering the broad approaches I have taken in the past to reduce waiting.&lt;/p&gt;

</description>
      <category>productivity</category>
    </item>
    <item>
      <title>Our team's trouble with hand-written automated UI tests</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Fri, 30 Aug 2019 19:07:08 +0000</pubDate>
      <link>https://dev.to/twynsicle/our-team-s-trouble-with-hand-written-automated-ui-tests-520a</link>
      <guid>https://dev.to/twynsicle/our-team-s-trouble-with-hand-written-automated-ui-tests-520a</guid>
      <description>&lt;p&gt;Before you can release a new feature, you need to make sure that your existing features still work. You give each release to the QA team to perform manual regression testing. The testers/QA team have their scripts and spend a couple of days stepping through them on the hunt for regressions and bugs. Over time, you add new features, the scripts grow in size, and so does the time it takes to perform manual tests. Your reliance on manual testing starts to become problematic, and so you start looking for alternatives. Automated UI testing sounds appealing. It seems to promise that you can keep running your same regression test scripts, but replace the hands and eyes of a human with those of an automation framework.&lt;/p&gt;

&lt;p&gt;Everyone starts to get really excited about automated UI testing.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Manual regression testing is a tedious task that everyone is happy to see replaced.&lt;/li&gt;
&lt;li&gt;It frees up the QA team's time for ad-hoc and exploratory testing.&lt;/li&gt;
&lt;li&gt;When the manual regression testing step takes so much time to complete, small delays can put your release at risk. Perhaps testing needs to be restarted, or the start time is pushed back a few days, or your regression environment needs to share two different releases at the same time.&lt;/li&gt;
&lt;li&gt;Your release cadence is limited by manual regression testing. Two or more days of manual regression testing means you can, at best, release twice a month. Moreover, you'll need to release everything in one go. It's all or nothing, since you need to test everything together.&lt;/li&gt;
&lt;li&gt;Automated tests are tangible. You can have them running on devices on your wall and show them off to visitors.&lt;/li&gt;
&lt;li&gt;Automation means that regression testing can happen as you develop, inside your sprint, reducing the need to throw work over a wall and wait days for the results.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You could purchase a commercial tool that helps you create and manage your tests. Or, perhaps your framework of choice comes with a built-in automation solution. Great, this article isn't for you. Alternatively, you might be considering using tooling like Selenium or Appium to hand-write your tests. This is the approach my team was given, and after several months of work, we abandoned the tests. They had not proven to be a good fit for our test suites, our architecture, our team, or our expectations. Through this process, we learned many lessons and encountered many problems that should have been considered upfront.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does it fit your manual regression suite?
&lt;/h3&gt;

&lt;p&gt;Be realistic about what automated testing can cover: it won't be your entire manual regression suite. Some parts are going to be too complicated or too time-consuming to be worth automating.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long chains of actions that cannot be split up. The unreliability of UI tests can make it challenging to get all the way through in one run.&lt;/li&gt;
&lt;li&gt;Testing interaction with a second application. &lt;/li&gt;
&lt;li&gt;Checking the output of PDFs and other generated files.&lt;/li&gt;
&lt;li&gt;Testing tasks that interact with Windows or the Windows file system.&lt;/li&gt;
&lt;li&gt;Tests where subsequent runs will have different results. Will your test run be affected by the results of previous test runs? Manual regression might happen once a fortnight, while automated UI tests might retry the same test multiple times a minute or hour, increasing your chance of collisions.&lt;/li&gt;
&lt;li&gt;Tests where your application could be left in an inconsistent state if the test run fails or crashes halfway. Would this need human intervention to remedy?&lt;/li&gt;
&lt;li&gt;Where you don't have sufficient control over the data being displayed in a section of the app, making it difficult to set up test preconditions. Do the testers have to hunt around the app looking for matching data, rather than being able to create or directly navigate to that scenario?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Be careful about becoming overly dogmatic, forcing UI automation tests where they don't fit. Not only will they be hard to write, they will end up unreliable and difficult to maintain. Be realistic about what can be automated before you start. Whoever is automating the tests needs the freedom to say no.&lt;/p&gt;

&lt;h3&gt;
  
  
  Does it fit your application's architecture?
&lt;/h3&gt;

&lt;p&gt;Depending on how your application is structured and how it has grown, you might find automation takes an unreasonable amount of time to set up.&lt;br&gt;
UI automation is one part writing the test steps and one part setting up the test infrastructure. If you follow the Page Object Model pattern, then for each page and control in your application, you create models so your tests can find and interact with the elements on that page or control. The amount of infrastructure code you need to write depends on the project. Do you have a few different pages taking many different inputs, or many workflows spread across a lot of specialized pages? Do you have a small library of controls that you reuse, or is every control bespoke as your UI has changed over time? How you've developed your application up until this point determines how much effort you need to put into writing the test infrastructure. In turn, this impacts how long it takes to write your automated UI tests.&lt;/p&gt;
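The Page Object Model described above can be sketched in a few lines. This is a minimal illustration, not tied to Selenium or Appium: the `Driver` interface, the selectors and the `LoginPage` class are all hypothetical stand-ins for whatever framework and pages you actually have.

```typescript
// Hypothetical automation-framework surface: just enough to show the
// pattern. A real Driver would come from Selenium, Appium, etc.
interface Driver {
  find(selector: string): {
    click(): void;
    type(text: string): void;
    text(): string;
  };
}

// The page object wraps one page's selectors and interactions, so tests
// describe intent ("log in") instead of referencing raw selectors.
class LoginPage {
  constructor(private driver: Driver) {}

  login(username: string, password: string): void {
    this.driver.find("#username").type(username);
    this.driver.find("#password").type(password);
    this.driver.find("#submit").click();
  }

  errorMessage(): string {
    return this.driver.find(".error").text();
  }
}
```

The infrastructure cost the article describes is exactly this wrapping work, repeated for every page and control your tests touch.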

&lt;h3&gt;
  
  
  Are your automated tests going to find bugs and regressions?
&lt;/h3&gt;

&lt;p&gt;Before you start, check whether automated UI tests are going to find the regressions that you expect. You should have a record of bugs previously found during regression and in production. How many of them do you plan to catch with automated UI tests?&lt;/p&gt;

&lt;p&gt;There are many cases that UI tests won't find.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Issues in steps that are not included in the paths laid out in your manual regression scripts. How often are bugs found because someone is testing that area of the app, rather than being explicitly included in a test step?&lt;/li&gt;
&lt;li&gt;When both a feature and its tests are incorrect.&lt;/li&gt;
&lt;li&gt;Bugs in edge cases or uncommon scenarios.&lt;/li&gt;
&lt;li&gt;Anything that is caught by your unit and integration test layers.&lt;/li&gt;
&lt;li&gt;Any action whose result isn't visible in your application. Avoid trying to hide data in your application just for your automation tests.&lt;/li&gt;
&lt;li&gt;Visual errors.&lt;/li&gt;
&lt;li&gt;Performance problems.&lt;/li&gt;
&lt;li&gt;Any test cases that end up being too complicated and challenging to automate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What role are automated tests going to have in your process? How can they best support your QA team and regression process? Perhaps, rather than finding bugs, you aim to free up QA time. You could skip the areas that QA covers when performing more exhaustive ad-hoc and exploratory testing. What you expect automated UI tests to find should inform which areas you choose to include and how many tests you plan to write. &lt;/p&gt;

&lt;h3&gt;
  
  
  How much are you expecting to spend?
&lt;/h3&gt;

&lt;p&gt;It is easy to compare the time you spend writing tests against the time you might save. Automating one of our features took over 200 hours to save 20 minutes each release. If automated tests cost much more time than they save, are the benefits you envision worth it? Are they going to take so long to create that you will never get all of the way through the test suite?  &lt;/p&gt;

&lt;h3&gt;
  
  
  Who will write the tests?
&lt;/h3&gt;

&lt;p&gt;You might hope that by using the Page Object Model pattern, the developers can write the test infrastructure which QA can then use to write the tests. Our experience didn't pan out that way, with the developers needing to write both the infrastructure and the tests.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your test infrastructure might not be reusable across multiple tests. Without reuse, you end up writing the support code at the same time you write the tests.&lt;/li&gt;
&lt;li&gt;Writing the tests might also require many updates to your application.&lt;/li&gt;
&lt;li&gt;The automation framework doesn't provide enough information to know whether the test failed because of the infrastructure or the tests.&lt;/li&gt;
&lt;li&gt;If your QA team lacks experience with coding or automation, you might not be able to make the framework simple enough to use. &lt;/li&gt;
&lt;li&gt;The tests require too much knowledge of the internals of the application.&lt;/li&gt;
&lt;li&gt;Test flakiness causes the developers to keep returning to fix up the test infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you are working on your proofs of concept for automated tests, involve whoever is intended to extend and maintain the tests. Ensure that what you are making is appropriate for their skill set and understanding of your application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Do you have a clean dataset to test against?
&lt;/h3&gt;

&lt;p&gt;When you start, you might use the same database that you use for your regular development activities. However, before long, you will find yourself spending more and more time working around your dataset.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Preconditions need to be either already set up or easy to create; otherwise, you spend your time hunting for them.&lt;/li&gt;
&lt;li&gt;Your UI changes as more data is created. For example, extra data pushes an element off the page, and tests fail because they cannot interact with it.&lt;/li&gt;
&lt;li&gt;The same test might be rerun within the same minute. You need to check whether an element is from the current test or a previous test run.&lt;/li&gt;
&lt;li&gt;Tests fail because a test user has entered an unexpected state, requiring either manual intervention or the tests pre-filtering users in each invalid state.&lt;/li&gt;
&lt;li&gt;Sweeping changes to your dataset change the data you had been working with. For example, you might periodically clear your developer database or refresh it with data imported from another system.&lt;/li&gt;
&lt;li&gt;Simultaneous test runs or developers using the same database lead to unexpected interactions and test failures.&lt;/li&gt;
&lt;li&gt;The temptation sneaks in to be able to run the tests against multiple environments. Against the dev database during development and the regression environment during signoff. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of your tests may have various preconditions that need to be set up. Relying on the automated tests to set up their preconditions will turn each test into a long chain of actions. Not only will these extra steps make the test slower to write and run, but they will make them flakier, and make it harder to track down the failure points. What if you can't create your test scenario's preconditions from within your app? Do the tests need to hunt around the app looking for appropriate data?&lt;br&gt;
With a clean dataset, you can have known test conditions and known test users, similar to how you use an object mother in unit tests. &lt;/p&gt;

&lt;p&gt;You want a database that can be reset and populated with fixed data. If you don't already have this, then you will require a lot of new infrastructure: a new database, a tool for populating valid test data, APIs that point to this new database, and build pipelines for deploying this environment.&lt;/p&gt;
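A resettable dataset of the kind described above can be sketched as follows. `TestDatabase` and its seeded users are hypothetical; a real setup would truncate and reseed an actual database in a setup hook before each run, but the shape is the same: every run starts from the same known users and data.

```typescript
// Hypothetical in-memory stand-in for a resettable test database.
interface User {
  id: string;
  name: string;
  state: "active" | "locked";
}

class TestDatabase {
  private users = new Map<string, User>();

  // Wipe everything, then seed the same known users every run --
  // an "object mother" for UI tests, as the article suggests.
  reset(): void {
    this.users.clear();
    const seed: User[] = [
      { id: "u1", name: "Standard User", state: "active" },
      { id: "u2", name: "Locked User", state: "locked" },
    ];
    for (const user of seed) {
      this.users.set(user.id, user);
    }
  }

  add(user: User): void {
    this.users.set(user.id, user);
  }

  get(id: string): User | undefined {
    return this.users.get(id);
  }

  count(): number {
    return this.users.size;
  }
}
```

With this in place, a test can navigate straight to "Locked User" knowing it exists and is in the expected state, rather than hunting through whatever previous runs left behind.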

&lt;h3&gt;
  
  
  Your UI framework and components are doing more than you realize
&lt;/h3&gt;

&lt;p&gt;Each of your tests is going to need to account for everything that each UI component can do.&lt;br&gt;
Take the following example where we don't reset the database between tests, and we want to click on an element in a list.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We click on the element; the test passes.&lt;/li&gt;
&lt;li&gt;Subsequent runs add more items to the list, pushing the target offscreen; we need to update our tests to jump to the element before clicking.&lt;/li&gt;
&lt;li&gt;The list then gets so long that UI virtualization kicks in. Our target no longer exists on the page. We can't jump to it and instead need to search through the list by slowly scrolling through it.&lt;/li&gt;
&lt;li&gt;Duplicates appear in the list; you need to figure out which element is from the current test run.&lt;/li&gt;
&lt;li&gt;Another element grows in size, pushing the entire list off-screen; you need to scroll to the list before interacting with it.&lt;/li&gt;
&lt;li&gt;A previous test run failed to complete, and the test entity is left in a state that hides the entire list.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each UI control in your application will require a similar process. You might find yourself revisiting tests for weeks after you create them as you hit edge cases in your UI controls that you hadn't expected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Flakiness
&lt;/h3&gt;

&lt;p&gt;Automated tests fail frequently, and often, you're not going to know why.&lt;br&gt;
Failures can happen for many reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The automation framework you are using fails to find an element onscreen.&lt;/li&gt;
&lt;li&gt;The automation framework fails to recognise that your application has started.&lt;/li&gt;
&lt;li&gt;The test driver fails to connect.&lt;/li&gt;
&lt;li&gt;You encounter an edge case in a UI component.&lt;/li&gt;
&lt;li&gt;An element is pushed off-screen so your automation framework cannot interact with it.&lt;/li&gt;
&lt;li&gt;Timing issues: perhaps a mask doesn't quite finish hiding before the test attempts to click an element.&lt;/li&gt;
&lt;li&gt;The tests work differently at different screen sizes and resolutions as different elements are on or off the screen.&lt;/li&gt;
&lt;li&gt;All of the issues mentioned previously with not having a clean, isolated database instance for each test run.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We were using Appium and WinAppDriver, and for most of the failures, we were given no useful error message, no logs and no stack traces. We had tests failing because an element couldn't be found, but no way of telling which element was at fault. Worse, since the failures were intermittent, and could be device or environment specific, it took a long time to determine the cause.&lt;/p&gt;

&lt;p&gt;One solution to flaky tests is to rerun each test until it passes. This poses several problems. First, the duration of your test runs gets longer, making it harder to get timely feedback from your tests. Second, it makes it harder to write new tests; you might be waiting ten minutes or more to test a single change. Ideally, you would address flakiness whenever it increases. Track test flakiness over time, and group the vague error messages you receive. Knowing when flakiness started can be an essential clue to tracking down the cause when you don't have useful logs. &lt;/p&gt;
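The retry-until-pass approach, combined with tracking flakiness over time rather than silently absorbing it, might look something like this sketch. `retryFlaky` and `attemptLog` are illustrative names, not part of any framework.

```typescript
// Records how many attempts each test needed: 1 means it passed first
// try; anything higher is flakiness worth tracking over time.
const attemptLog = new Map<string, number>();

async function retryFlaky<T>(
  name: string,
  test: () => Promise<T>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const result = await test();
      attemptLog.set(name, attempt);
      return result;
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  attemptLog.set(name, maxAttempts);
  throw lastError; // exhausted retries: report the last failure
}
```

Publishing `attemptLog` alongside pass/fail results gives you the trend line the article recommends: you can see when a test *started* needing retries, even when the framework gives you no useful error message.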

&lt;p&gt;To tackle flakiness, we resorted to maintaining a long list of everything that could cause flakiness — all of the edge cases and UI interactions between our test suite and our application. Not only did creating this take a long time and a lot of trial and error, but it also increased the learning curve for sharing the test suite with other developers. &lt;/p&gt;

&lt;h3&gt;
  
  
  Refactoring is hard
&lt;/h3&gt;

&lt;p&gt;Automated UI tests are difficult to refactor. The tests can take hours to run, making it hard to get feedback for sweeping changes. Some tests might be heavily reliant on carefully arranging timings and break as soon as anything is changed. &lt;br&gt;
With automated testing likely being new to your team, you risk ending up with many different approaches as the developers try to figure out the best strategies and then struggle to apply them to the test cases. Having different approaches makes it hard for new people coming onto the project to tell which approach is best. It also has consequences when you make any changes to your application's UI. You might find yourself needing to change dozens of automated UI tests, each with a different implementation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Human factors
&lt;/h3&gt;

&lt;p&gt;When bringing a new tool, technology or process into a team, there are a variety of human factors to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What is the quality of life of using the new tool? Is it frustrating or slow?&lt;/li&gt;
&lt;li&gt;Is there someone available and willing to be a champion for the new technology? Who takes over if they leave?&lt;/li&gt;
&lt;li&gt;What happens when the tool causes delays? Are automated tests going to get dropped when you run out of time? How much extra time will the business tolerate automated UI tests adding to a feature?&lt;/li&gt;
&lt;li&gt;What happens if the tool gains a bad reputation amongst the team?&lt;/li&gt;
&lt;li&gt;Is everyone on board with the value of writing automated tests, or do they believe it is a waste of time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we have covered so far, there are a lot of potential pain points and many questions regarding the value of these tests. Without answers, your test suite is unlikely to last for long.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is there a better option?
&lt;/h2&gt;

&lt;p&gt;Perhaps creating automated UI tests isn't looking like such an appealing option. However, you still don't want manual regression testing taking so much time, so what other options are there?&lt;/p&gt;

&lt;h3&gt;
  
  
  Don't try to implement the entire manual regression script
&lt;/h3&gt;

&lt;p&gt;Avoid trying to automate your entire manual test suite. It was written considering manual human testers, not with the awareness of what is difficult or impossible with automation. It is vital that whoever is writing the automated tests has the option to decide not to automate a test case. Be ruthless about culling what you automate. Automating features that are not appropriate to automate will not only take a long time, but result in the flakiest and hardest to maintain tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fill out the rest of the Test Pyramid first
&lt;/h3&gt;

&lt;p&gt;No single type of test should provide complete coverage. You want a variety of tests with varying levels of specificity and isolation — lots of specific, isolated tests at the bottom of the pyramid. Then, as you move up, there are fewer tests that get less specific and cover more parts of the application. Unit tests on the bottom, then integration tests, then end-to-end tests.&lt;/p&gt;

&lt;p&gt;Every layer of the pyramid works in concert and has different strengths and weaknesses.&lt;br&gt;
If possible, we would rather cover as much as we can at the unit and integration layers. These tests are easier to write, provide more specific feedback, and can be run during development. Unit tests are better for covering edge cases and error scenarios. Automated UI tests can cover UI logic that, depending on your application, might not be possible to cover with unit tests. Automated UI tests also test that multiple parts of your application work together as expected.&lt;/p&gt;

&lt;p&gt;What does your application's pyramid currently look like, and what will it look like after your planned UI automation suite? Is it upside down, or hourglass-shaped? Are you planning to write too many UI automation tests because the rest of the pyramid isn't there? &lt;/p&gt;

&lt;p&gt;No one type of test can provide you with complete test coverage. If you already have existing unit and integration tests, you are probably already covering steps of your manual regression test script. There is little value in writing complicated automated UI tests to cover something already covered. Rather than replacing your manual regression tests 1-to-1 with automated UI tests, can you replace them with a combination of tests of different types? &lt;/p&gt;

&lt;h3&gt;
  
  
  Revisit commercial tooling
&lt;/h3&gt;

&lt;p&gt;Revisit why you chose to write automated UI tests by hand. Are those reasons still valid after taking into account the difficulties of hand-writing automated tests? One of the primary reasons we had dismissed commercial tooling was a concern that it couldn't cover all of our manual test suite. Many months of work later, hand-written UI tests were proving so slow to write that we hadn't even made a dent in what we had hoped to cover.&lt;/p&gt;

&lt;h3&gt;
  
  
  Subcutaneous testing
&lt;/h3&gt;

&lt;p&gt;Are you writing UI automation to test your UI, or to facilitate end-to-end tests? If you don't need to test the UI layer, then subcutaneous testing might be a better alternative. This approach lets you perform your end-to-end tests a step below the UI layer. Rather than clicking buttons or filling in text fields, you call event handlers and set public properties directly on your view models. This approach avoids the difficulties of interacting with the UI and of using an automation framework. The disadvantage of this approach is that depending on the technology your application is using, there might not be a lot of specific guidance available. Our application is written in UWP, so we had to figure out for ourselves how to run it from our test framework with the UI mocked out. Once it was working, it proved significantly faster and easier to use than automated UI testing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The potential benefits of UI automation are exciting: find bugs, free up QA time, eliminate manual regression testing and get feedback to developers during their sprint. However, as with any significant new technology, it is essential to do some investigation up front. Hopefully, the above has provided some questions to ask before you start automating your manual regression test suite by hand. It might not be a good fit, either for the regression bugs you expect to find, the architecture of your application, or whoever you expect to be writing and maintaining the tests. There are challenges: dealing with unreliable data, a UI that is doing more than you might expect, and flakiness and bad error messages from your automation framework. You need to ask who is going to write the tests, and who is going to champion them when the going gets tough. Finally, have you compared hand-written tests with commercial products, with writing more integration and unit tests, or with writing subcutaneous tests?&lt;/p&gt;

&lt;p&gt;What have your experiences with writing automated UI tests been? Perhaps you have some advice for those of us who have been struggling? Let us know in the comments below.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why our team cancelled our move to microservices</title>
      <dc:creator>Steven Lemon</dc:creator>
      <pubDate>Fri, 09 Aug 2019 11:09:04 +0000</pubDate>
      <link>https://dev.to/twynsicle/why-our-team-cancelled-our-move-to-microservices-1ln8</link>
      <guid>https://dev.to/twynsicle/why-our-team-cancelled-our-move-to-microservices-1ln8</guid>
<description>&lt;p&gt;Recently our development team had a small break in our feature delivery schedule. Technical leadership decided that this time would be best spent splitting our monolithic architecture into microservices. After a month of investigation and preparation, we cancelled the move, instead deciding to stick with our monolith. For us, microservices were not only not going to help us; they were going to hurt our development process. Microservices had been sold to us as the ideal architecture for perhaps a year now, so we were surprised to find they weren't a good fit for us. I thought it would be interesting to present a case study of our experiences and why our team decided against them. &lt;/p&gt;

&lt;h1&gt;
  
  
  Identifying problems and early compromises
&lt;/h1&gt;

&lt;h3&gt;
  
  
  We were heavily reliant on a third party
&lt;/h3&gt;

&lt;p&gt;Our application is a custom UI over the top of an existing external product, integrating some of our custom business rules and presenting a touch-friendly user interface. Our client is a UWP app, and we have a range of back end services that transform between our domain and the third party's domain.&lt;/p&gt;

&lt;p&gt;Building on top of a third party affected how we could divide our domain into microservices. For example, our application occasionally has to convert features between domains, making one part of the third party's domain act and feel like part of a different domain in our UI. This swap was not so bad when we had a single service between our front end and the third party. However, the domain switching caused us much confusion when we tried to split our domains into separate microservices. Did our microservices follow the same divisions as the third party, duplicating the front end's requirements across both services? Or did we divide the microservices according to our domains and have one microservice fetch from two separate areas of the third party? Both felt like a violation of microservice guidelines and like they would lead to additional coupling.&lt;/p&gt;

&lt;p&gt;We frequently worked in tandem with the external party, with features requiring both parties to make changes. Effectively, the third party was an additional team. Working so closely together meant we had to lockstep our release process with theirs. A benefit of microservices is that each team can be responsible for releasing their services independently and without coordination with other teams. Coordinating releases not just across teams, but across companies prevented us from gaining those advantages.&lt;/p&gt;

&lt;p&gt;One of the central ideas of microservices is restructuring away from separate teams being responsible for separate layers. In a microservices architecture, each team is responsible for the full stack addressing their business concern. For us, since one of our layers was an entirely separate company, this restructuring was not possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  We couldn't sufficiently isolate each microservice
&lt;/h3&gt;

&lt;p&gt;We couldn't identify any obvious candidates in our monolith to be broken out into a microservice. So instead, we started drawing arbitrary lines between our domain models, and from this, we had the list of microservices we were to create. However, once we started investigating, we found a lot of shared business logic and implicit coupling between the soon-to-be-separate microservice domains. Some further attempts were made to subdivide these microservices into smaller and smaller pieces, but that left us with even more coupling, message buses everywhere, and a potential big bang of immediately going from one service to ten or more microservices.&lt;/p&gt;

&lt;p&gt;The reason everything was so coupled and hard to break up was that the monolith we were trying to separate served only a single business concern. One of the overarching design goals of our client application was to bring the disparate concepts in the third-party base application together: we had created workflows that crossed domains and grouped features for the user's convenience. In essence, the UI had spent the last four years pushing everything together.&lt;/p&gt;

&lt;p&gt;Somewhere along the way, we had misunderstood how microservices should be isolated and underestimated the importance of choosing the right boundaries between services. The only ways we could break down our monolith meant that implementing a standard 'feature' would involve updating multiple microservices at the same time. Having each feature require different combinations of microservices prevented any microservice from being owned by a single team.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sharing microservices
&lt;/h3&gt;

&lt;p&gt;We have approximately 12 developers spread across 2 feature teams and a support team. The work fluctuated enough that no team was locked to any area of the application. It was not uncommon to have both teams touching the same area of the code at once. We were not able to assign ownership of any potential microservice to a single team.&lt;/p&gt;

&lt;p&gt;It is useful to bear Conway's law in mind when considering the shape of your architecture. It states that your software's architecture grows to mimic how your organization and teams are structured. Lots of isolated microservices make sense if you have many isolated teams working on separate business concerns. However, a few teams working on shared features are better suited to a single shared codebase.&lt;/p&gt;

&lt;h3&gt;
  
  
  The platform wasn't ready yet
&lt;/h3&gt;

&lt;p&gt;Various issues meant that for at least 6 months, we would be hosting our new microservices next to our monolith in IIS. We wouldn't have access to many of the standard tools associated with microservices, such as containers, Kubernetes, service buses, API gateways, etc. Not having these tools was going to make it more difficult for the microservices to communicate with each other. So instead, we decided that each microservice would duplicate any shared logic, along with the common reads and transformations from our storage layer. Because we couldn't isolate any of our services properly, this meant we would be left with a significant amount of duplication. For example, we identified one particularly complicated and essential piece of business logic that would have to be copy-pasted and maintained across 4 of the planned microservices.&lt;/p&gt;

&lt;h3&gt;
  
  
  We didn't have a clear picture of the future
&lt;/h3&gt;

&lt;p&gt;The development teams had a rough idea of the next 6 months and no information about what lay beyond that. Further, the business changed its mind frequently; it wasn't uncommon for requirements to change mid-feature. This uncertainty made creating microservices more fraught, as we couldn't predict what new links would pop up, even in the short term. Would the connections and coupling between the planned microservices grow? Would we have to spend time in a few months joining them all back together again? We had already tried creating a proof-of-concept microservice earlier this year, only to have it nixed as the business changed its requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time frames were tight
&lt;/h3&gt;

&lt;p&gt;We had a tiny window, just large enough to split our monolith into the list of microservices we had been given. What we didn't have was any extra time to reflect on what we had created or to alter course if required. There was no time in the schedule for a plan B; we were going to be stuck with whatever we created. Since we were already discovering many issues and challenges in the planning stage, before the implementation phase had even begun, this caused the development team much concern.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lacking experience
&lt;/h3&gt;

&lt;p&gt;Compounding the risks and time pressures, none of the people responsible for architecting or implementing the microservices architecture had any specific prior experience. This was exacerbated by not having a lot of the standard tooling ready to use, meaning we would be implementing the platform ourselves. Conversations with people who had microservices experience but weren't involved in the project raised more red flags: they suggested infrastructure we wouldn't have and pointed out the consequences of where we had drawn the lines between our domain models.&lt;/p&gt;

&lt;p&gt;So far, our plan involved many compromises that deviated from standard microservice patterns, tight time frames, no expert guidance, and a strong likelihood of making many mistakes and learning lessons the hard way. The development team started to look nervous.&lt;/p&gt;

&lt;h1&gt;
  
  
  What were we trying to achieve again?
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Is this addressing our pain points?
&lt;/h3&gt;

&lt;p&gt;Once everything started getting hard and the clear path forward started to get lost, we paused and realized we didn't know why we were doing any of this. We didn't have a list of our pain points, and we had no clear understanding of how microservices would help solve any pain points we did have. Worse, microservices might be about to create a whole set of new problems for us.&lt;br&gt;
We started pressing these issues: what benefits were we supposed to be getting, and what problems were we trying to solve? We scheduled more and more meetings trying to figure it all out; every coffee break and every conversation between developers turned to discussing and questioning microservices, and we still couldn't get a straight answer why.&lt;br&gt;
As it turned out, we did have other, more pressing pain points, which had been ignored in the drive towards microservices. Unfortunately, we might have run out of time to address those adequately, meaning we would have neither microservices nor anything else.&lt;/p&gt;

&lt;h3&gt;
  
  
  What were the potential benefits?
&lt;/h3&gt;

&lt;p&gt;Once we realized we had no idea why we were heading towards microservices, we paused and started to investigate for ourselves the benefits that microservices typically provide.&lt;/p&gt;

&lt;h4&gt;
  
  
  Autonomy
&lt;/h4&gt;

&lt;p&gt;Microservices allow your team to have control over the full stack they require to deliver a feature. The benefit of this separation is a reduction in the amount of coordination you require with other teams. You won't be affecting their work, and they won't be affecting yours.&lt;/p&gt;

&lt;h4&gt;
  
  
  Allows your team to specialize
&lt;/h4&gt;

&lt;p&gt;In a monolith, any team can end up working on anything; ownership of any feature or area isn't a given. With each team owning their set of services, they can build expertise in that particular business concern. They get to understand the business rules and requirements in their domain. They know how their software stack is structured and implemented and can have greater confidence when making changes.&lt;/p&gt;

&lt;h4&gt;
  
  
  Easier to scale
&lt;/h4&gt;

&lt;p&gt;With microservices, you can scale each service according to its performance needs. With a monolith, while you can also scale horizontally across more servers, you can't scale each component of the monolith separately from the others. Further, this granularity makes it easier to scale services up and down as required; perhaps you are anticipating some additional load, or need some breathing room while you sort out performance issues.&lt;/p&gt;

&lt;h4&gt;
  
  
  Easier to Rollback
&lt;/h4&gt;

&lt;p&gt;If each feature only requires a change to a single microservice, then that feature could be rolled back without affecting the work of other teams. Further, microservices help reduce the amount of your system that could be taken down by a single fault. &lt;/p&gt;

&lt;h4&gt;
  
  
  Easier to release and easier to release more frequently
&lt;/h4&gt;

&lt;p&gt;If you have an extensive system, each release becomes time-consuming and risky. There is a lot that needs to be covered by regression testing, limiting your release cadence. You might need sign-off from multiple people and coordination between all of the teams involved in each release. A bug or regression from a team you've never even heard of can hold up time-sensitive features that you need to get out the door. Microservices limit the scope of changes and reduce the amount of coordination you require between teams. Teams can release according to their own schedule rather than being bound by the cadence of a monolith.&lt;/p&gt;

&lt;h4&gt;
  
  
  Use the most appropriate technologies
&lt;/h4&gt;

&lt;p&gt;Microservices give each team the ability to choose the most appropriate technology for themselves and the problems they are trying to solve, perhaps something modern, whereas monoliths are often hard to upgrade and can be stuck on outdated platforms.&lt;/p&gt;

&lt;h4&gt;
  
  
  Easier path to upgrading
&lt;/h4&gt;

&lt;p&gt;Upgrading the framework used by a large application is never fun or risk-free in the best of circumstances. It is much harder when you need to coordinate sweeping, interlinked changes across multiple teams. Smaller, isolated services give you the option of upgrading only the services that require the update, or of performing the upgrade one service and one team at a time.&lt;/p&gt;

&lt;h4&gt;
  
  
  Protect from change
&lt;/h4&gt;

&lt;p&gt;Different parts of your application change at different rates. Most of your application likely hasn't been changed in months, or even years. Separating rarely changed code away from areas with frequent churn allows you to reduce the risk of accidental regressions.&lt;/p&gt;

&lt;h4&gt;
  
  
  Smaller
&lt;/h4&gt;

&lt;p&gt;A smaller service is much easier to reason about and understand. Further, being changed by only a single team means its design stays consistent. Its smaller size makes it easier to perform extensive refactoring. By comparison, a monolith may have an inconsistent, evolutionary architecture as the opinions of different teams cause it to vary over time.&lt;/p&gt;

&lt;h4&gt;
  
  
  Conclusion of benefits
&lt;/h4&gt;

&lt;p&gt;There are a lot of potential benefits to adopting microservices. However, were we able to gain any of them? &lt;/p&gt;

&lt;p&gt;Ultimately, the parts of our architecture that we couldn't change and the compromises we had to make undermined these benefits. With microservices acting as a floating pool shared between all teams, and features spread thinly across multiple shared microservices, we lost the benefits of isolation: reduced coordination, specialization, and everything that flowed on from them. The variation between microservices, instead of being a strength, became a disadvantage: each feature would require learning how a new microservice worked and what changes other teams had made to it. Our reliance on a third party stopped us from improving our release cadence and reduced the benefit we would get from independently scaling our services.&lt;/p&gt;

&lt;h1&gt;
  
  
  Weighing up the advantages and disadvantages
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Killing a fly with an elephant gun
&lt;/h3&gt;

&lt;p&gt;Adopting microservices isn't free. There is a vast list of additional concerns that you need to address, many of which we had previously solved in our monolith and would need to revisit: logging, monitoring, exception handling, fault tolerance, fallbacks, microservice-to-microservice communication, message formats, containerization, service discovery, backups, telemetry, alerts, tracing, build pipelines, release pipelines, tooling, sharing infrastructure code, documentation, scaling, timezone support, staged rollouts, API versioning, network latency, health checks, load balancing, CDC testing, debugging, and developing multiple microservices in our local development environment.&lt;/p&gt;

&lt;p&gt;To make matters worse, without a microservices platform ready, we would have to work much of the above list out for ourselves. So we already had pain points and difficulties moving to microservices, we had established that we weren't going to gain the advantages of microservices, and we faced a long list of additional work to set up and maintain to support them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microservices in name only
&lt;/h3&gt;

&lt;p&gt;The following image shows our current monolith and our planned architecture, alongside a comparison of how microservices might typically look. Structurally, our new architecture still closely resembled our monolith, with everything still tightly linked together. Should we have even been using the microservices label to describe what we were doing?&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F32wtqur02t6x8h8makyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F32wtqur02t6x8h8makyj.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Was our monolith that bad?
&lt;/h3&gt;

&lt;p&gt;We were using "monolith" as a loaded term, as if saying "monolith" implied something terrible and "microservices" implied something good. Once we looked past the stereotypes and branding, the development team had very few issues with our "monolith." It might have been one of the most pain-free parts of our entire system. It was straightforward to develop in and extend, since it was mostly a passthrough to a third party, and we didn't need to spend much time working on it. We had an excellent CI/CD setup, which made it easy to deploy and roll back. Our branching and testing strategies ensured that few issues made it into production.&lt;/p&gt;

&lt;h3&gt;
  
  
  Realizing I had used microservices before
&lt;/h3&gt;

&lt;p&gt;At this point, I realized that I did have experience with a microservice in a previous role. We had never referred to it as a microservice, and it probably didn't follow all of the "rules" of microservices, but it certainly solved the same problems and gave us the same benefits.&lt;br&gt;
We were a small team of 5 in a company of about 200 developers. Maybe 5% of our back-end work was in the company's shared monolith, a vast C# application. The rest of our time, we were working within our two Node services.&lt;/p&gt;

&lt;p&gt;We disliked working in the monolith. It was slow to work in, compile, and run tests for; the architecture was varied to the point of being unknowable; random stuff kept showing up in the build steps. Multiple times, a high-priority piece of work for a customer was delayed for weeks because a team I had never heard of had regressed some functionality. Periodic technology updates took months, as they required coordination across the entire company. Pull requests could be held up for weeks while we waited for approval from entirely separate teams.&lt;/p&gt;

&lt;p&gt;Meanwhile, our two services were small; we had full control of their development, architecture and deployment. Once, when we were having performance issues, we doubled the number of instances in production until we had resolved the underlying issue. We rarely had to coordinate with other teams. Having our service in TypeScript allowed our team of predominantly front-end developers to use the same language on the front and back end. Best of all, it allowed us to include our complicated rule-calculation engine in both our client and our back-end validation and reporting services. Our team focused on a very narrow business concern, which we all became experts in.&lt;/p&gt;

&lt;h3&gt;
  
  
  More than just a technology problem
&lt;/h3&gt;

&lt;p&gt;The more we looked into microservices, the more it seemed that it was less about technology and more about structuring teams and the work that came into them. Had we made a mistake approaching microservices as purely a technology problem?&lt;/p&gt;

&lt;p&gt;Was restructuring the teams to be dedicated to separate business concerns practical? Could upcoming feature work be cleanly divided between the monolith's domains? Would there be enough work for all teams, or would a team be left without work? Would a team get slammed with mountains of high-priority work that they couldn't share out? Would the same issues that made it hard for us to divide our monolith also prevent our management tier from dividing up incoming work? Was there appetite for this sort of transformation? There were many questions regarding the bigger picture that we didn't have answers for.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting from a to b
&lt;/h3&gt;

&lt;p&gt;Our plan to get to microservices was a big bang: everyone stops feature work for a couple of months and starts splitting up our monolith, even though many of the prerequisites weren't ready. We were forcing the way ahead rather than waiting for either the need to arise or natural candidates to emerge.&lt;/p&gt;

&lt;p&gt;Not only was this not a very good way of getting from A to B, it was also backwards: create all of the microservices first, then set up the infrastructure for them, and completely ignore how the teams and incoming work were structured. Instead, if we had started by restructuring our teams around dedicated business concerns, then gotten the infrastructure ready, we would have set the stage for microservices to emerge naturally. Any new business concern that appeared could be placed directly into a new service.&lt;/p&gt;

&lt;p&gt;By forcing microservices, we also had to choose the size of each microservice upfront. There is much conflicting advice about how large (or small) to make each microservice. Some articles suggested that each microservice should be large enough for one team. Others suggested that each microservice should be small enough that you can keep its structure in your head, or even so small that you could rewrite it in two weeks. Still others suggested they should be the size of each business concern. Leadership decided to split our microservices up based on our domain models and then keep dividing them into smaller pieces if any issues came up. This led to many of the issues mentioned above, with teams and features needing to share microservices. In hindsight, if we had let microservices emerge naturally after everything else was in place, we might have ended up with microservices of a practical size.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cancelling
&lt;/h3&gt;

&lt;p&gt;As microservices day 1 drew closer and closer, our team just kept finding more and more issues, each one forcing more compromises and reducing the benefits further. Four days out from the first sprint of implementing our microservices, we still couldn't identify any gains, and the list of problems and disadvantages was long enough to form the seed of this rather long blog post. We called a meeting, and despite what leadership wanted, the answer to microservices was written on every developer's face. Our move to microservices was cancelled.&lt;/p&gt;

&lt;h3&gt;
  
  
  So what did we do instead?
&lt;/h3&gt;

&lt;p&gt;The fervour of moving to microservices had meant that the alternatives hadn't been investigated. Only after we abandoned microservices could we investigate other options. Ultimately, rather than separate our monolith into separate services, we started to break our solution into separate projects within the existing monolith. This division gave us a bit of additional structure and a better indication of where coupling and duplication existed, without the extra weight and challenges of microservices.&lt;/p&gt;

&lt;p&gt;Further, this structure would make our domain models clearer, allowing us to evaluate candidates for any future microservices more easily. If something did turn out to be a suitable candidate, that project could then "fall out" of our monolith into a microservice, without having to be untangled.&lt;/p&gt;
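&lt;p&gt;As a rough sketch of what this looked like, the solution ended up organized along these lines. The project names here are invented for illustration, not our real ones:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ClientApp.sln
├── ClientApp.Web              (existing IIS-hosted entry point)
├── ClientApp.Features.Alpha   (one candidate domain, as its own project)
├── ClientApp.Features.Beta    (another candidate domain)
└── ClientApp.Shared           (cross-cutting business logic both domains still need)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The project references make the coupling visible: if the Alpha project needs a reference to Beta, that edge is a warning that the two are not yet separable into independent services.&lt;/p&gt;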

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Leadership set the direction of microservices without consideration for the challenges and state of our application. After evaluating the move for ourselves, we found that microservices weren't a fit for us and required significant compromises. Those compromises robbed us of the benefits and meant that moving to microservices was a net loss. Microservices had also been decided on without evaluating non-technical concerns like team structure and incoming work. After months of investigation and work, we abandoned the project and spent the remaining time performing some minor refactors to our "monolith".&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
