<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shubham Sharma</title>
    <description>The latest articles on DEV Community by Shubham Sharma (@sharma-tech).</description>
    <link>https://dev.to/sharma-tech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1466932%2Ffde3c372-ebba-4614-807f-1122ee8439c0.png</url>
      <title>DEV Community: Shubham Sharma</title>
      <link>https://dev.to/sharma-tech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sharma-tech"/>
    <language>en</language>
    <item>
      <title>The Dark Side of Don't Repeat Yourself</title>
      <dc:creator>Shubham Sharma</dc:creator>
      <pubDate>Wed, 06 Nov 2024 06:11:24 +0000</pubDate>
      <link>https://dev.to/sharma-tech/the-dark-side-of-dont-repeat-yourself-o4k</link>
      <guid>https://dev.to/sharma-tech/the-dark-side-of-dont-repeat-yourself-o4k</guid>
      <description>&lt;h2&gt;Why Over-Emphasising DRY Can Hurt Code Quality&lt;/h2&gt;

&lt;p&gt;The “Don’t Repeat Yourself” (DRY) principle is a foundational tenet of software development, widely taught as a means to create more maintainable, modular, and efficient code. But despite its acclaim, DRY has a less-discussed downside, particularly when developers wield it indiscriminately. Overzealous application of DRY can introduce unnecessary abstractions, reduce readability, and ultimately create more technical debt rather than reducing it. This article will explore why DRY is sometimes dangerous, examining both practical and theoretical issues, and outline strategies for applying it in a balanced, effective manner.&lt;/p&gt;

&lt;h3&gt;1. Understanding DRY's Original Purpose&lt;/h3&gt;

&lt;p&gt;At its core, DRY was introduced to prevent "knowledge duplication," not necessarily line-for-line code duplication. It encourages developers to avoid re-implementing logic or business rules in multiple places, which would increase the cost of changes by requiring updates in multiple locations. The idea was that duplicating "knowledge" could lead to inconsistencies, where one part of the code changes but others don’t, resulting in errors.&lt;/p&gt;

&lt;p&gt;However, DRY has evolved into a more rigid mandate in many teams, focusing on eliminating even minor similarities in code. This perspective misses the underlying purpose of DRY and leads to more rigid, brittle code structures. Instead of creating an adaptable codebase, misapplied DRY can lead to code that’s difficult to understand, maintain, and extend.&lt;/p&gt;

&lt;h3&gt;2. The Dangers of Premature Abstraction&lt;/h3&gt;

&lt;p&gt;One of the primary issues with overusing DRY is premature abstraction. Developers often create abstractions to encapsulate what they perceive as similar behaviour, but when this is done prematurely, it locks in a narrow view of the code's structure before the problem is fully understood.&lt;/p&gt;

&lt;p&gt;For example, imagine a developer is working on a module that processes different food items. Cooking an egg and cooking meat might both involve "placing an ingredient on a hot surface," but these processes differ significantly. Eggs might require a quick sear, while meat might need gradual cooking over a longer period. Prematurely abstracting this "cooking" process may initially save a few lines of code, but it also removes the flexibility to handle these processes independently in the future.&lt;/p&gt;
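&lt;p&gt;A minimal TypeScript sketch of this trade-off (all names and values are hypothetical, for illustration only): the premature abstraction needs a configuration flag for every difference between the two processes, while two focused functions each read as the process they describe.&lt;/p&gt;

```typescript
// Premature "DRY": one abstraction, growing a flag per use case.
// All names and values here are hypothetical, for illustration only.
interface CookOptions {
  surfaceTempC: number;
  flipHalfway: boolean;
  restMinutes: number;
}

function cookItem(item: string, opts: CookOptions): string {
  let result = item + " cooked at " + opts.surfaceTempC + "C";
  if (opts.flipHalfway) {
    result += ", flipped halfway";
  }
  if (opts.restMinutes > 0) {
    result += ", rested " + opts.restMinutes + "min";
  }
  return result;
}

// The duplicated alternative: each process keeps its own shape
// and can change independently when requirements diverge.
function cookEgg(): string {
  return "egg cooked at 180C"; // quick sear, nothing else
}

function cookMeat(): string {
  return "meat cooked at 140C, flipped halfway, rested 5min";
}
```

&lt;p&gt;Every new ingredient forces another option onto &lt;code&gt;CookOptions&lt;/code&gt;, whereas the focused functions only grow where the domain actually demands it.&lt;/p&gt;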

&lt;p&gt;By focusing on DRY too early, developers may close doors to better abstractions, losing the opportunity to improve the design once they have a clearer understanding of the domain. This often results in "de-DRYing" later or shoehorning new requirements into an existing, ill-fitting abstraction.&lt;/p&gt;

&lt;h4&gt;Practical Consequences of Premature Abstraction&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Reduced Readability&lt;/strong&gt;: When every function or method is forced to share an abstraction, it can be challenging for others (or even the original developer) to understand the code's intent at a glance.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Increased Complexity&lt;/strong&gt;: Developers often need to add configuration options or conditional logic to make a single abstraction work for all cases, making the code harder to follow and maintain.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fragile Code&lt;/strong&gt;: Abstractions that try to do too much become brittle, as minor changes in one part of the code can cause unintended effects elsewhere.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;3. Losing Sight of the Data Structure&lt;/h3&gt;

&lt;p&gt;Overuse of DRY can also obscure the structure and flow of data in an application, particularly when developers attempt to fit diverse data manipulations into a common base structure. This often happens when they create a generic “wrapper” class or base structure to represent different entities that share only superficial similarities. While this can save lines of code in the short term, it sacrifices clarity in data handling, which is crucial for understanding and debugging.&lt;/p&gt;

&lt;p&gt;In applications with complex data flows, preserving the "shape" of the data is essential. Distinct operations and data structures offer clear, logical boundaries that help convey the data's purpose and relationships. When developers "DRY up" structures, they may hide these boundaries, making it difficult to grasp the original data intent. This can also introduce unintended behaviours, as methods become responsible for handling diverse data forms they weren’t designed to manage.&lt;/p&gt;
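&lt;p&gt;A short TypeScript sketch (all type and field names are hypothetical): the distinct interfaces carry the data's intent in their fields, while the generic wrapper collapses them into a shape that no longer says what &lt;code&gt;amount&lt;/code&gt; means.&lt;/p&gt;

```typescript
// Distinct shapes keep boundaries and intent visible.
// All type and field names are hypothetical, for illustration only.
interface Invoice {
  invoiceId: string;
  totalCents: number;
}

interface Shipment {
  shipmentId: string;
  weightKg: number;
}

// An over-DRY wrapper erases those boundaries:
interface GenericRecord {
  id: string;
  kind: string;
  amount: number; // cents? kilograms? the type no longer tells us
}

function describeRecord(r: GenericRecord): string {
  // Every caller must now remember what "amount" means for this kind.
  return r.kind + " " + r.id + ": " + r.amount;
}
```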

&lt;h3&gt;4. The Myth of DRY and Maintainability&lt;/h3&gt;

&lt;p&gt;One of the most commonly cited reasons for enforcing DRY is maintainability. The argument is that having a single source of truth for a piece of logic will make it easier to update that logic in the future. However, in practice, the supposed benefits of DRY often don’t outweigh the drawbacks, especially for code that rarely changes.&lt;/p&gt;

&lt;p&gt;For instance, developers might fear having to modify similar code in multiple locations, but modern tools such as project-wide find-and-replace or &lt;code&gt;ripgrep&lt;/code&gt; make it easy to search for and update a pattern across a codebase. In reality, DRY should be applied with consideration for how often the code is likely to change. If the code in question is stable and unlikely to undergo frequent updates, the cost of maintaining minor duplication is likely far lower than the complexity introduced by forcing a single abstraction.&lt;/p&gt;
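&lt;p&gt;For example, a bulk rename of a duplicated helper can be done in one pipeline. The sketch below uses the portable &lt;code&gt;grep -rl&lt;/code&gt; (the equivalent of ripgrep's &lt;code&gt;rg -l&lt;/code&gt;) together with GNU &lt;code&gt;sed -i&lt;/code&gt;; the directory and function names are made up for illustration.&lt;/p&gt;

```shell
# Set up a throwaway example file (names are hypothetical).
mkdir -p /tmp/dry-demo/src
printf 'export function calculateOrderTotal() {}\n' > /tmp/dry-demo/src/order.ts

# List every file containing the old name, then rewrite it in place.
grep -rl 'calculateOrderTotal' /tmp/dry-demo/src | xargs sed -i 's/calculateOrderTotal/computeOrderTotal/g'

# Count lines now using the new name.
grep -c 'computeOrderTotal' /tmp/dry-demo/src/order.ts   # prints 1
```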

&lt;p&gt;Moreover, experience repeatedly shows that duplication is often less problematic than poorly conceived abstractions. Abstractions that are forced can lead to confusing dependencies, making the code harder to test, debug, and modify. As software engineer Sandi Metz famously said, “Duplication is far cheaper than the wrong abstraction.” DRY should be seen as a means to an end, not an end in itself.&lt;/p&gt;

&lt;h3&gt;5. DRYing with a Pinch of Salt: Practical Tips&lt;/h3&gt;

&lt;p&gt;Applying DRY effectively goes beyond simply removing duplication. It requires balancing readability, maintainability, and adaptability. Below are practical strategies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wait Until the Abstraction Emerges Naturally&lt;/strong&gt;: Avoid rushing into abstractions. Allow similar code patterns to develop until a clearer, more robust abstraction becomes evident. This prevents the creation of rigid structures that can hinder adaptability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Apply the AHA (Avoid Hasty Abstractions) Principle Alongside DRY&lt;/strong&gt;: AHA encourages developers to avoid premature abstractions. By letting abstractions grow naturally from clear patterns, we prevent forced generalisations that could lead to maintenance difficulties down the line.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Embrace OCF (Optimise for Change First)&lt;/strong&gt;: OCF encourages designing code with adaptability in mind, making it easier to modify as requirements evolve. Before abstracting code to reduce duplication, assess whether each section is likely to change independently. OCF helps prioritise flexible, modular code over rigid DRY abstractions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Follow the Single Responsibility Principle (SRP)&lt;/strong&gt;: SRP states that a function or module should have one and only one reason to change. When applying DRY, SRP helps ensure that abstractions are clean and cohesive, rather than attempting to handle multiple unrelated tasks. By adhering to SRP, we avoid “kitchen-sink” abstractions that can become unwieldy and confusing as they try to serve too many purposes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use the Small Functions Principle&lt;/strong&gt;: Small, focused functions make code easier to understand, test, and modify. Instead of creating a large, shared abstraction to eliminate duplication, consider breaking code into smaller, single-purpose functions that communicate intent clearly. Small functions encourage simplicity and modularity, both of which help preserve the flexibility needed for future changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Focus on Constants and Closed Abstractions&lt;/strong&gt;: Constants, configurations, and values that are stable are good candidates for DRY. JSON values, enums, and other fixed elements can safely be centralised without risking rigid abstractions that need frequent modification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use DRY for Knowledge, Not Syntax&lt;/strong&gt;: DRY is about eliminating duplicated &lt;em&gt;knowledge&lt;/em&gt;, not just identical lines of code. Focus on sharing logic, calculations, and rules that have a real benefit when centralised.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assess the Frequency of Change&lt;/strong&gt;: Before creating an abstraction, consider if the code is likely to evolve independently. If similar sections will diverge over time, keeping them separate may ultimately be more maintainable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
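&lt;p&gt;The distinction between knowledge and syntax can be made concrete in a few lines of TypeScript (the rule and names below are hypothetical): a genuine business rule earns a single home, while superficially similar code that encodes different rules stays separate.&lt;/p&gt;

```typescript
// One piece of *knowledge*, one home: a hypothetical business rule.
const VAT_RATE = 0.2;

function priceWithVat(netCents: number): number {
  return Math.round(netCents * (1 + VAT_RATE));
}

// Superficially similar code that encodes *different* knowledge.
// Merging these would couple two rules that may diverge later.
function invoiceLabel(id: number): string {
  return "INV-" + String(id).padStart(6, "0");
}

function receiptLabel(id: number): string {
  return "RCP-" + String(id).padStart(6, "0");
}
```

&lt;p&gt;If the VAT rate changes, there is exactly one place to update; if receipt numbering later gains its own format, nothing shared needs to be unpicked.&lt;/p&gt;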

&lt;h3&gt;6. Recognising and Avoiding Zealotry in Code Reviews&lt;/h3&gt;

&lt;p&gt;One unfortunate side effect of DRY’s popularity is the zealotry that can develop around it. Many developers, especially in code reviews, become so focused on enforcing DRY that they fail to see the bigger picture, such as the code’s readability, maintainability, and adaptability. This DRY zealotry often manifests as nitpicking in code reviews, where reviewers insist on combining even loosely related pieces of code.&lt;/p&gt;

&lt;p&gt;To mitigate this, development teams should encourage a balanced view of DRY. Code reviews should focus on the intent of the code rather than enforcing superficial principles like DRY at all costs. Here are some guidelines to foster a more balanced code review culture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Encourage Developers to Defend Their Design Choices&lt;/strong&gt;: Developers should be able to justify their choices when they diverge from DRY. This encourages thoughtful discussion about trade-offs and design decisions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Focus on Readability and Domain Logic&lt;/strong&gt;: Instead of forcing every line to adhere to DRY, consider whether the code clearly expresses the domain's logic and intent. If enforcing DRY makes it harder to understand the business rules or purpose of a section, it’s likely a poor abstraction.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Use DRY as a Discussion Point, Not a Mandate&lt;/strong&gt;: In code reviews, DRY should be treated as a point of consideration rather than a rule. A good code review culture is one that prioritises meaningful abstractions, flexibility, and simplicity over rigid adherence to principles.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;7. The DRY Trap in Specific Communities&lt;/h3&gt;

&lt;p&gt;The issue of over-DRYing isn’t equally prevalent across all development communities. Some programming communities or frameworks encourage excessive adherence to DRY, creating cultures where minor duplication is considered an unforgivable offence. Developers working in these environments should be cautious about adopting DRY as an automatic reflex and instead strive for a pragmatic approach to duplication.&lt;/p&gt;

&lt;p&gt;A balanced DRY approach involves understanding when to embrace some duplication, particularly when working with frameworks that heavily promote DRY practices. It’s essential to assess whether DRY truly improves the code or if it merely reflects the coding norms of the framework.&lt;/p&gt;

&lt;h3&gt;8. The Benefits of Strategic Duplication&lt;/h3&gt;

&lt;p&gt;In some cases, duplication is not just acceptable but desirable. Here’s why intentional duplication can sometimes be the best choice:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Clearer Intent&lt;/strong&gt;: When similar operations are duplicated, each occurrence can retain its unique purpose, making the intent clearer and reducing the mental overhead of understanding an abstraction.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Independent Evolution&lt;/strong&gt;: Duplication allows for similar pieces of code to evolve independently without impacting each other. This can be crucial when requirements change for only one instance of the duplicated code.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Reduced Complexity&lt;/strong&gt;: A simple, duplicated solution can often be more straightforward than a complex, overly generalised one, especially for code that isn’t expected to change frequently.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;DRY is a powerful concept, but like any tool, it must be used judiciously. Overzealous application of DRY can lead to brittle code, premature abstractions, and decreased readability. By taking a thoughtful approach—considering factors such as readability, flexibility, and the frequency of change—developers can avoid the pitfalls of DRY and create more resilient, maintainable code. Remember, duplication is often cheaper than a poor abstraction. DRY should be a guide, not a strict rule, and balancing it with a clear understanding of the domain and future requirements is key to sustainable software development.&lt;/p&gt;

</description>
      <category>dry</category>
      <category>softwaredevelopment</category>
      <category>designpatterns</category>
    </item>
    <item>
      <title>The Case Against Mocking Libraries</title>
      <dc:creator>Shubham Sharma</dc:creator>
      <pubDate>Wed, 21 Aug 2024 07:51:40 +0000</pubDate>
      <link>https://dev.to/sharma-tech/the-case-against-mocking-libraries-5b35</link>
      <guid>https://dev.to/sharma-tech/the-case-against-mocking-libraries-5b35</guid>
      <description>&lt;p&gt;In the world of software testing, mocking libraries have long been a popular tool for isolating components and simulating dependencies. However, as our understanding of clean code and maintainable tests evolves, there's a growing sentiment that over-reliance on mocking libraries can lead to brittle, hard-to-maintain test suites. This article explores why you might want to reconsider your use of mocking libraries and opt for custom fakes instead.&lt;/p&gt;

&lt;h2&gt;The Problem with Mocking Libraries&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ugly Syntax&lt;/strong&gt;: Many mocking libraries introduce their own DSL (Domain Specific Language) for setting up mocks. This can make tests harder to read and understand, especially for developers who aren't intimately familiar with the mocking framework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tight Coupling to Implementation&lt;/strong&gt;: Mocks often require detailed knowledge of the internal workings of the system under test. This can lead to tests that are tightly coupled to implementation details, making them fragile and prone to breaking when refactoring.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overuse and Abuse&lt;/strong&gt;: While mocking libraries can be useful for verifying specific interactions, they're often overused. Developers may find themselves mocking every dependency, leading to tests that are more about the mocks than the actual behaviour being tested.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inconsistent Assumptions&lt;/strong&gt;: When mocks are scattered throughout a test suite, each mock may make different assumptions about how a dependency should behave. This can lead to inconsistencies and make it harder to reason about the expected behaviour of the system.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The Case for Custom Fakes&lt;/h2&gt;

&lt;p&gt;Instead of relying on mocking libraries, consider creating your own fake implementations of dependencies. Here's why:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cleaner Syntax&lt;/strong&gt;: Custom fakes use plain language constructs, making them easier to read and understand without knowledge of a specific mocking framework.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reduced Coupling&lt;/strong&gt;: Fakes can be designed to mimic the public interface of a dependency without exposing implementation details, reducing coupling between tests and production code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistent Behaviour&lt;/strong&gt;: By creating a single fake implementation of a dependency, you ensure consistent behaviour across all tests that use that dependency.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Less Code&lt;/strong&gt;: Tests using fakes often require less setup code compared to those using mocking libraries, leading to more concise and focused tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Better Encapsulation&lt;/strong&gt;: Fakes allow you to encapsulate complex behaviour in a reusable way, which can be especially useful for simulating external services or complex components.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Example: Refactoring from Mocks to Fakes&lt;/h2&gt;

&lt;p&gt;Let's look at an example of how we can refactor a test from using mocks to using a custom fake:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Using a mocking library&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;jest&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@jest/globals&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;IUserStore&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;findById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;userStore&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IUserStore&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

  &lt;span class="nf"&gt;activate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;userStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;UserService should activate user - using mocks&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Arrange&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userStoreMock&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;findById&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;jest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;mockReturnValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="na"&gt;store&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;jest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userStoreMock&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Act&lt;/span&gt;
  &lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;activate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Assert&lt;/span&gt;
  &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userStoreMock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;store&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveBeenCalledTimes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userStoreMock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;store&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toHaveBeenCalledWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Using a custom fake&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;UserStoreFake&lt;/span&gt; &lt;span class="k"&gt;implements&lt;/span&gt; &lt;span class="nx"&gt;IUserStore&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;users&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;findById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Simplified for example, would likely use ID in a real implementation&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IUser&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;UserService should activate user - using a fake&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Arrange&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;User&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userStoreFake&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UserStoreFake&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;UserService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userStoreFake&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Act&lt;/span&gt;
  &lt;span class="nx"&gt;userService&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;activate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Assert&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;storedUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;userStoreFake&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;storedUser&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toBeDefined&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the refactored version, we've replaced the mock with a custom &lt;code&gt;UserStoreFake&lt;/code&gt;. This fake implementation can be reused across multiple tests, ensuring consistent behaviour and reducing the amount of setup code needed in each test.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;While mocking libraries have their place in the testing toolkit, they should be used judiciously. By favouring custom fakes over mocks, we can create cleaner, more maintainable, and more resilient test suites. This approach encourages us to think more deeply about the contracts between components and helps ensure that our tests remain valuable as our codebase evolves.&lt;/p&gt;

&lt;p&gt;Remember, the goal of testing is not just to increase code coverage, but to provide confidence in the behaviour of our system. By writing cleaner tests with custom fakes, we can achieve this goal more effectively and with less maintenance overhead.&lt;/p&gt;

</description>
      <category>softwaretesting</category>
      <category>jest</category>
      <category>mocking</category>
    </item>
    <item>
      <title>The Easiest Way to Identify Flaky Tests in Jest</title>
      <dc:creator>Shubham Sharma</dc:creator>
      <pubDate>Sun, 05 May 2024 07:19:30 +0000</pubDate>
      <link>https://dev.to/sharma-tech/the-easiest-way-to-identify-flaky-tests-in-jest-2kg4</link>
      <guid>https://dev.to/sharma-tech/the-easiest-way-to-identify-flaky-tests-in-jest-2kg4</guid>
      <description>&lt;p&gt;Flaky tests are the bane of every developer’s existence. They pass one minute and fail the next, making it challenging to trust your test suite and causing frustration and wasted time. Fortunately, there’s a tool called &lt;code&gt;flaky-test-detector&lt;/code&gt; that employs the KISS (Keep It Simple, Stupid) approach to help you identify and manage flaky tests in your Jest test suite.&lt;/p&gt;

&lt;h3&gt;
  
  
  Introducing &lt;code&gt;flaky-test-detector&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;flaky-test-detector&lt;/code&gt; is a command-line tool that helps you identify flaky tests in your test suite by running your tests multiple times and analysing the results. It supports various test runners, including Jest, and can be easily integrated into your project. For this example, we’ll be using yarn to install and run &lt;code&gt;flaky-test-detector&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting up &lt;code&gt;flaky-test-detector&lt;/code&gt; with Jest
&lt;/h3&gt;

&lt;p&gt;To use &lt;code&gt;flaky-test-detector&lt;/code&gt; with Jest, you'll need to follow these steps:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Generate Jest test results in JUnit format:
&lt;/h4&gt;

&lt;p&gt;First, you’ll need to generate Jest test results in JUnit format, since that is the only format the tool can parse. This can be done with the &lt;code&gt;jest-junit&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn add &lt;span class="nt"&gt;--dev&lt;/span&gt; jest-junit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, configure Jest to generate JUnit reports by adding the following to your &lt;code&gt;jest.config&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;reporters&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;default&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;jest-junit&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Install and configure flaky-test-detector
&lt;/h4&gt;

&lt;p&gt;Next, install &lt;code&gt;flaky-test-detector&lt;/code&gt; (its JUnit parser is built in):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn add &lt;span class="nt"&gt;--dev&lt;/span&gt; @smartesting/flaky-test-detector
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, configure &lt;code&gt;flaky-test-detector&lt;/code&gt; to use the JUnit parser and run your Jest tests multiple times (e.g., 5 times):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;yarn flaky-test-detector &lt;span class="nt"&gt;--run-tests&lt;/span&gt; &lt;span class="s2"&gt;"yarn test"&lt;/span&gt; &lt;span class="nt"&gt;--repeat&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="nt"&gt;--test-output-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;junit.xml &lt;span class="nt"&gt;--test-output-format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;junit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want, you can turn this command-line execution into a script in your &lt;code&gt;package.json&lt;/code&gt;. The example below targets a &lt;code&gt;test:browser&lt;/code&gt; script, but any Jest script works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;  
  &lt;/span&gt;&lt;span class="nl"&gt;"detectFlaky:unitTests"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"flaky-test-detector --run-tests &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;yarn test:browser&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; --repeat=5 --test-output-file=junit.xml --test-output-format=junit"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will run your Jest tests 5 times and generate a JUnit report named &lt;code&gt;junit.xml&lt;/code&gt;. The JUnit parser built into &lt;code&gt;flaky-test-detector&lt;/code&gt; will then parse the Jest/JUnit results and detect any flaky tests based on inconsistent test results across the 5 runs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interpreting the Results
&lt;/h3&gt;

&lt;p&gt;After running &lt;code&gt;flaky-test-detector&lt;/code&gt;, it will print a summary of any flaky tests it found to the terminal. If you’re wondering how the tool works, the main error reporting happens in the &lt;code&gt;BasicReporter&lt;/code&gt; class: when flaky tests are detected, its &lt;code&gt;flakyTestsFound&lt;/code&gt; method is called, which logs the number of flaky tests and their names via &lt;code&gt;logger.error&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Found 2 flaky tests
 - myFlakyTest1
 - myFlakyTest2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, after detecting flaky tests, &lt;code&gt;detectFlakyTests&lt;/code&gt; throws an error: &lt;code&gt;throw new Error('Flaky tests found')&lt;/code&gt;. This ensures that the process fails, providing a clear and actionable output when flaky tests are detected.&lt;/p&gt;

&lt;p&gt;With this information, you can investigate the root causes of the flaky tests and take appropriate action. Here are some potential mitigation strategies and examples:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Refactoring the Test&lt;/strong&gt;: If the flakiness is caused by issues within the test itself, such as race conditions or improper setup/teardown, refactoring the test code may be necessary. For example, if a test depends on the order of asynchronous operations, you could introduce synchronisation mechanisms or use async/await to ensure proper control flow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fixing Underlying Issues&lt;/strong&gt;: If the flakiness stems from issues in the production code, fixing the underlying bugs or race conditions should be the priority. This might involve adjusting concurrency control mechanisms, fixing data access race conditions, or addressing any other root causes that lead to non-deterministic behaviour.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improving Test Isolation&lt;/strong&gt;: Flaky tests can sometimes be caused by shared state or dependencies between tests. To mitigate this, you can leverage test doubles (mocks, stubs, or fakes) to isolate the tests from external dependencies or shared resources. This ensures that each test runs in a controlled and predictable environment, reducing the risk of flakiness.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improving Test Data Management&lt;/strong&gt;: If the flakiness is related to test data, consider using external data sources (e.g., JSON files or databases) or data factories to generate test data consistently. This approach separates test data from test logic, making it easier to maintain and reducing the risk of flakiness caused by hard-coded test data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Parallelisation and Test Ordering&lt;/strong&gt;: In some cases, flakiness can be caused by parallel test execution or test ordering issues. To address this, you can experiment with different parallelisation strategies, run tests in a specific order, or run them serially with Jest’s &lt;code&gt;--runInBand&lt;/code&gt; flag to mitigate potential race conditions or conflicts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
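&lt;p&gt;To illustrate the first strategy, here’s a hedged sketch contrasting a test that races on an unawaited promise with one that awaits it. The &lt;code&gt;saveUser&lt;/code&gt; function is hypothetical, standing in for any asynchronous write:&lt;/p&gt;

```javascript
// saveUser is a hypothetical async write with simulated I/O latency.
async function saveUser(db, user) {
  await new Promise((resolve) => setTimeout(resolve, 10));
  db.push(user);
}

// Flaky pattern: asserting before the async write has completed.
// In this sketch the write never lands in time; with real I/O the
// timing varies from run to run, which is exactly what makes it flaky.
async function flakyCount() {
  const db = [];
  saveUser(db, { id: 1 }); // not awaited
  return db.length;
}

// Deterministic pattern: await the operation before asserting.
async function fixedCount() {
  const db = [];
  await saveUser(db, { id: 1 });
  return db.length; // always 1
}

fixedCount().then((n) => console.log(n)); // 1
```

&lt;p&gt;The fix is purely about control flow: once the write is awaited, the assertion always observes the completed state, regardless of how long the I/O takes.&lt;/p&gt;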

&lt;p&gt;By leveraging these mitigation strategies and addressing the root causes of flakiness, you can gradually improve the reliability and stability of your test suite, ensuring that your tests provide consistent and trustworthy feedback throughout the development lifecycle.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Flaky tests can be a frustrating and time-consuming issue, but with the help of &lt;code&gt;flaky-test-detector&lt;/code&gt;, you can easily identify and manage them in your Jest test suite. By integrating this tool into your development workflow, you can ensure that your tests are reliable and consistent, saving you time and increasing your confidence in your codebase.&lt;/p&gt;

</description>
      <category>jest</category>
      <category>unittest</category>
      <category>flakytests</category>
    </item>
  </channel>
</rss>
