<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ildar Sharafeev</title>
    <description>The latest articles on DEV Community by Ildar Sharafeev (@srshifu).</description>
    <link>https://dev.to/srshifu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F993892%2Feff69621-4ef9-4858-8735-e1017505f407.jpeg</url>
      <title>DEV Community: Ildar Sharafeev</title>
      <link>https://dev.to/srshifu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/srshifu"/>
    <language>en</language>
    <item>
      <title>State machines in React</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Tue, 19 Mar 2024 19:41:44 +0000</pubDate>
      <link>https://dev.to/srshifu/state-machines-in-react-2eo1</link>
      <guid>https://dev.to/srshifu/state-machines-in-react-2eo1</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3gp8j3Jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AH_u7sLeXRDkYl0DE.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3gp8j3Jg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AH_u7sLeXRDkYl0DE.jpg" alt="" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;em&gt;The Pragmatic Programmer&lt;/em&gt;, the authors suggest the Finite State Machine pattern as one way of “juggling” real-world events to avoid writing tightly coupled code. You might be familiar with state machines from various contexts such as cloud computing (hello, AWS Step Functions) or even hardware. But have you ever considered employing state machines to tackle real-world front-end challenges? How often have you found yourself building a multi-step wizard in your application, with numerous transitions prompting users to enter their details and select categories that unlock different fill-review-submit form paths?&lt;/p&gt;

&lt;p&gt;Fear not, as this article will demonstrate how you can build a straightforward abstraction for state machine wizards in React.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Whatever the source, if we write applications that respond to events, and adjust what they do based on those events, those applications will work better in the real world.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learning.oreilly.com/library/view/the-pragmatic-programmer/9780135956977/"&gt;&lt;em&gt;The Pragmatic Programmer: your journey to mastery, 20th Anniversary Edition, 2nd Edition&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What are state machines?
&lt;/h3&gt;

&lt;p&gt;A state machine is essentially a specification for handling incoming state (inputs) and generating a new state based on those inputs. It represents the behaviour of the system by encapsulating the various states the system can be in, along with the transitions between these states triggered by specific events or conditions.&lt;/p&gt;

&lt;p&gt;In the context of front-end development, state machines prove invaluable for managing user interfaces, especially those with intricate workflows or multiple possible paths. By applying the principles of state machines, developers can create more robust and predictable applications while fostering the creation of loosely coupled and reusable components.&lt;/p&gt;

&lt;p&gt;Below is an example of the simplest state machine representation. The machine begins in &lt;em&gt;State 1&lt;/em&gt;; it changes to &lt;em&gt;State 2&lt;/em&gt; given input ‘&lt;strong&gt;X&lt;/strong&gt;’, or to &lt;em&gt;State 3&lt;/em&gt; given input ‘&lt;strong&gt;Y&lt;/strong&gt;’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oruQZo09--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ABhEDOQLssTe71Oqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oruQZo09--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ABhEDOQLssTe71Oqv.png" alt="" width="800" height="678"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the following sections, we’ll delve deeper into how state machines can be leveraged within the React framework through the prism of our example use case.&lt;/p&gt;

&lt;h3&gt;
  
  
  Real-world website example
&lt;/h3&gt;

&lt;p&gt;Let’s pretend we are building a website for an insurance company. Users arrive at the homepage and are greeted with a prominent “Purchase Insurance” button, initiating a streamlined product purchase flow in a wizard format. Within this wizard, clients navigate through the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enter Personal Details:&lt;/strong&gt; Users provide essential information such as the insuree’s name, date of birth, and other pertinent details.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select Insurance Type:&lt;/strong&gt; Clients choose the type of insurance they’re interested in, whether it’s car, home, or a bundled package including both.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Choose Coverage and Get a Quote:&lt;/strong&gt; Users select their desired coverage options, and the wizard dynamically generates an instant quote based on their choices. Quote prices vary depending on the type of insurance selected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specify Insurance Details:&lt;/strong&gt; Users input specific insurance details such as start of coverage, payment schedule, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional Authentication:&lt;/strong&gt; If users aren’t already authenticated, they’re prompted to sign up or log in at this stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide Payment Information:&lt;/strong&gt; Clients enter their payment details, choosing between credit or debit card.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review &amp;amp; Confirmation:&lt;/strong&gt; The final step involves reviewing all entered information, agreeing to the terms, and clicking the “Confirm” button to complete the purchase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I will leave you for a minute to digest these requirements…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--T3iAc1Mn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/298/0%2AATEU6zexrY_STIBV.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--T3iAc1Mn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/298/0%2AATEU6zexrY_STIBV.jpg" alt="" width="298" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Are you still here? Good! Let’s try to break it down. The wizard encompasses steps unique to the selected insurance product (car, home, or bundle), as well as those common to all. To visualize this, we can represent the flow using a state machine diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B3OANpB4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AEtsIGzHXjlnqfbeO.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B3OANpB4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AEtsIGzHXjlnqfbeO.jpg" alt="" width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the diagram above, each node represents a step corresponding to a view within the wizard. In fact, recalling your computer science lessons, the diagram embodies a directed acyclic graph (DAG), where each step is connected to another without introducing loops. While there may be three distinct paths depending on the insurance product selected, it’s worth noting that the path for bundled insurance shares steps with the other two paths, potentially allowing for the reuse of certain views.&lt;/p&gt;

&lt;p&gt;But how do we go about implementing this intricate process? Let’s explore that in the next section!&lt;/p&gt;

&lt;h3&gt;
  
  
  Design of state machines
&lt;/h3&gt;

&lt;p&gt;When tackling the implementation of our wizard, there are a few approaches we could take. One straightforward method might involve creating each step in the wizard as a standalone React component, which is a good starting point. However, managing the navigation logic with numerous if-else statements within each component can quickly become hard to read, test, and maintain, and prone to errors. This approach resembles a &lt;a href="https://en.wikipedia.org/wiki/Choreography"&gt;choreography&lt;/a&gt; architectural pattern, where each component is aware only of its immediate neighbours in the chain of command.&lt;/p&gt;

&lt;p&gt;Alternatively, we could opt for a centralized approach, leveraging a global state and a state management library like Redux or React Context to serve as a central mediator. But this would still require us to grow tangled vines of conditional code to handle the Back and Next actions in the wizard. We need our orchestration logic to be as simple as possible!&lt;/p&gt;

&lt;p&gt;Inspired by the &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-amazon-states-language.html"&gt;Amazon States Language specification&lt;/a&gt;, I thought: “What if we extracted the orchestration logic into a standalone configuration file and passed it down to a UI component capable of interpreting it?”&lt;/p&gt;

&lt;p&gt;This approach would allow us to not only reuse components for the steps but also the orchestration logic itself, leading to a more modular and maintainable solution. Let’s explore this idea further in the following sections.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;As a side note, it’s worth mentioning that there are existing libraries that leverage this architectural pattern and idea in general. One such library is&lt;/em&gt; &lt;a href="https://xstate.js.org/docs/recipes/react.html#local-state"&gt;&lt;em&gt;@xstate/react&lt;/em&gt;&lt;/a&gt;&lt;em&gt;, which offers a rich API contract for building state machines in React. While we won’t replicate it one-to-one, we’ll aim to build something simple yet sufficient to satisfy our needs.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building a foundation
&lt;/h3&gt;

&lt;p&gt;To lay the groundwork for our wizard implementation, let’s begin by selecting the technologies we’ll employ. For the UI library, I advocate for React due to its emphasis on creating reusable and modular components, aligning well with our objective.&lt;/p&gt;

&lt;p&gt;Now, let’s address the styling aspect. We require a lightweight and customizable solution to ensure flexibility in designing our wizard interface. In this regard, I propose utilizing Tailwind CSS, a utility-first and highly optimized CSS framework that allows developers to construct any user interface efficiently by leveraging pre-made CSS classes that straightforwardly map to CSS rules.&lt;/p&gt;

&lt;p&gt;Let’s start by installing our dependencies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -D react react-dom tailwindcss postcss autoprefixer typescript
npx tailwindcss init -p 
npx tsc --init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In &lt;code&gt;tailwind.config.js&lt;/code&gt; we need to add paths to all our components:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  content: ["./index.html","./src/**/*.{js,jsx,ts,tsx}"],
  theme: {
    extend: {},
  },
  plugins: [],
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then we need to add the &lt;code&gt;@tailwind&lt;/code&gt; directives to our CSS file:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
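&lt;p&gt;With the default setup, the CSS file (e.g. &lt;code&gt;src/index.css&lt;/code&gt;) contains the three standard directives:&lt;/p&gt;

```css
/* Tailwind entry stylesheet (default setup) */
@tailwind base;
@tailwind components;
@tailwind utilities;
```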



&lt;p&gt;If building a library, I would also recommend moving the React dependencies to &lt;code&gt;peerDependencies&lt;/code&gt;, so consumers are aware that React is required and know which versions are supported:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "peerDependencies": {
    "react": "&amp;gt;=16.8.0",
    "react-dom": "&amp;gt;=16.8.0"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the next section, we will focus on the presentation concerns of our component.&lt;/p&gt;
&lt;h3&gt;
  
  
  Building view layer
&lt;/h3&gt;

&lt;p&gt;Here is our simple React component that does not have any logic (yet!):&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
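&lt;p&gt;&lt;em&gt;As a framework-agnostic sketch of the three-slot structure described below (names like &lt;code&gt;buildWizardSlots&lt;/code&gt; are illustrative, not the component’s actual API):&lt;/em&gt;&lt;/p&gt;

```typescript
// Framework-agnostic sketch of the wizard shell's three logical slots.
// In the article's React component, the markup is rendered through a
// portal into document.body; here we only model the slot structure.

interface WizardSlots {
  header: string;           // static title area
  body: string;             // dynamic, depends on the active step
  footerControls: string[]; // navigation controls hosted by the footer
}

// isTerminalStep decides whether the primary action is Next or Submit.
function buildWizardSlots(
  title: string,
  activeStepView: string,
  isTerminalStep: boolean,
): WizardSlots {
  const primaryAction = isTerminalStep ? "Submit" : "Next";
  return {
    header: title,
    body: activeStepView,
    footerControls: ["Cancel", "Back", primaryAction],
  };
}
```

&lt;p&gt;&lt;em&gt;For the last step of the flow, the third footer control becomes Submit instead of Next, matching the footer behaviour described below.&lt;/em&gt;&lt;/p&gt;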



&lt;p&gt;Upon reviewing this component, you might observe a few key points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Portals Usage:&lt;/strong&gt; We’re employing React portals to render our component. Portals offer a convenient way to render child components outside the DOM hierarchy of the parent component, ensuring flexibility and control over component rendering. See a wonderful article about portals &lt;a href="https://www.developerway.com/posts/positioning-and-portals-in-react"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logical Slots:&lt;/strong&gt; Our component defines three logical slots: header, body, and footer. While the header and footer sections remain relatively static and don’t depend on the current step in the wizard, the body dynamically adapts based on the configuration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Footer Controls:&lt;/strong&gt; The footer segment hosts the wizard’s navigation controls, allowing users to perform actions such as canceling/closing the wizard, navigating back to the previous step, progressing to the next step, or submitting the form upon reaching the end of the flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is how it looks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--36xmyRL1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Atn3XpPgbBAv5VqSL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--36xmyRL1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Atn3XpPgbBAv5VqSL.png" alt="" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we progress, we’ll infuse this component with logic to orchestrate the wizard’s behavior and enhance its functionality. But for now, let’s focus on establishing the foundational structure of our state machine configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  State machines configuration
&lt;/h3&gt;

&lt;p&gt;Let’s review the state machine configuration interface, which declares the orchestration flows for our wizard:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
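&lt;p&gt;&lt;em&gt;A sketch of what such a configuration contract might look like in TypeScript; the exact field names in the real interface may differ:&lt;/em&gt;&lt;/p&gt;

```typescript
// Sketch of the configuration contract; field names may differ from the
// author's actual interface.

type WizardState = { [key: string]: unknown };

interface Choice {
  when: (state: WizardState) => boolean; // predicate picking this branch
  next: string;                          // step to transition to
}

interface StepConfig {
  next?: string;          // single next step when there is no branching
  choices?: Choice[];     // branching options for multiple possible paths
  canAdvance?: (state: WizardState) => boolean; // guardrail for Next
  isTerminal?: boolean;   // final step: render Submit instead of Next
}

interface StateMachineConfig {
  steps: { [name: string]: StepConfig };
  initialStep: string;
  views: { [name: string]: unknown }; // component *types* per step
}

// Tiny hypothetical instance to make the contract concrete:
const demo: StateMachineConfig = {
  initialStep: "start",
  steps: {
    start: { next: "done", canAdvance: () => true },
    done: { isTerminal: true },
  },
  views: { start: "StartView", done: "DoneView" },
};
```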


&lt;p&gt;I will break it down into its main parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;steps&lt;/strong&gt;. This is a crucial aspect of the configuration, representing all possible directions from a certain step when the user reaches this point in the wizard. Each step is uniquely identified and mapped to another step, as well as to the view that will be rendered in the wizard body. The step configuration may include a &lt;code&gt;canAdvance&lt;/code&gt; method, serving as a guardrail to prevent navigation to the next step under specific conditions (e.g., if there are errors in the current step). Additionally, for steps with multiple possible paths to the next step, the &lt;code&gt;choices&lt;/code&gt; attribute allows for defining these branching options. If a step configuration solely contains the &lt;code&gt;isTerminal&lt;/code&gt; flag, it indicates that the step is final, with no further transitions possible. In such cases, the wizard will render a Submit button instead of Next and close the wizard upon successful submission.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;initialStep&lt;/strong&gt;. This specifies the step at which the wizard flow will start.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;views&lt;/strong&gt;. This configuration maps each step by its identifier to the corresponding React component type (note, not the element!) that will be rendered in the wizard body.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, having this contract, we can define a DAG-like navigation model for our insurance purchase flow example:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
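&lt;p&gt;&lt;em&gt;A possible shape of this configuration for the insurance flow; apart from &lt;code&gt;authenticate&lt;/code&gt; and &lt;code&gt;setInsurancePeriod&lt;/code&gt;, which are discussed below, the step names are my assumptions:&lt;/em&gt;&lt;/p&gt;

```typescript
// Assumes the StateMachineConfig-style contract sketched earlier; most
// step names here are illustrative, not the author's exact identifiers.

const isAuthenticated = (): boolean => {
  // Reads auth state from localStorage rather than the wizard state;
  // guarded so the sketch also runs outside the browser.
  const storage = (globalThis as any).localStorage;
  if (!storage) return false;
  return storage.getItem("authToken") !== null;
};

const insuranceWizardConfig = {
  initialStep: "enterCustomerDetails",
  steps: {
    enterCustomerDetails: { next: "selectInsuranceType" },
    selectInsuranceType: {
      choices: [
        { when: (s: any) => s.product === "car", next: "chooseCarCoverage" },
        { when: (s: any) => s.product === "home", next: "chooseHomeCoverage" },
        { when: (s: any) => s.product === "bundle", next: "chooseBundleCoverage" },
      ],
    },
    chooseCarCoverage: { next: "setInsurancePeriod" },
    chooseHomeCoverage: { next: "setInsurancePeriod" },
    chooseBundleCoverage: { next: "setInsurancePeriod" },
    setInsurancePeriod: {
      // Skip the authenticate step for already signed-in users.
      choices: [
        { when: () => isAuthenticated(), next: "enterPaymentDetails" },
        { when: () => true, next: "authenticate" },
      ],
    },
    authenticate: {
      canAdvance: () => isAuthenticated(),
      next: "enterPaymentDetails",
    },
    enterPaymentDetails: { next: "reviewAndConfirm" },
    reviewAndConfirm: { isTerminal: true },
  },
};
```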


&lt;p&gt;If you take a closer look at the &lt;code&gt;authenticate&lt;/code&gt; step, you will notice that its &lt;code&gt;canAdvance&lt;/code&gt; method does not rely on the state of the wizard; instead, it reads data from &lt;code&gt;localStorage&lt;/code&gt;. In the same way, we can determine whether we need to transition to &lt;code&gt;authenticate&lt;/code&gt; when completing the &lt;code&gt;setInsurancePeriod&lt;/code&gt; step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why not define view as part of step configuration?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This separation of concerns, where the view is defined separately from the step configuration, was a deliberate design decision. By decoupling orchestration concerns from rendering, we ensure flexibility and maintainability in our implementation.&lt;/p&gt;

&lt;p&gt;Consider scenarios where we may need to reuse the same orchestration model with slightly different views. With our current design, this is easily achievable by replacing all views or a subset of views while keeping the underlying orchestration logic intact.&lt;/p&gt;

&lt;p&gt;In the later sections of this article, I’ll illustrate the usefulness of this approach by demonstrating how we can effortlessly switch between different views while maintaining the same wizard flow. This separation not only enhances code reusability but also streamlines the process of adapting our wizard interface to varying requirements or design changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building orchestration logic
&lt;/h3&gt;

&lt;p&gt;Now let’s go back to our &lt;code&gt;StateMachineWizard&lt;/code&gt; component to add integration with &lt;code&gt;StateMachineConfig&lt;/code&gt; we have just defined.&lt;/p&gt;

&lt;p&gt;First of all, we need to pass &lt;code&gt;stateMachineConfig&lt;/code&gt; as an input prop. For better performance, I would recommend defining this config as a constant living outside of the React lifecycle!&lt;/p&gt;

&lt;p&gt;Now let’s define the component states we will be working with:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
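&lt;p&gt;&lt;em&gt;A framework-agnostic sketch of these pieces of state; in the component they would live in &lt;code&gt;useState&lt;/code&gt; hooks, and the names here mirror the description below:&lt;/em&gt;&lt;/p&gt;

```typescript
// Framework-agnostic sketch of the wizard's state; in the component
// these would be held by useState hooks.

interface NavEntry {
  step: string; // step name, React Router-style history entry
}

type StepData = { [key: string]: unknown };

// navHistory: record of all transitions (push on Next, pop on Back).
const navHistory: NavEntry[] = [{ step: "enterCustomerDetails" }];

// wizardDataByStep: incremental state captured at each individual step,
// preserved so backward navigation does not lose earlier input.
const wizardDataByStep: { [step: string]: StepData } = {};

// aggregatedStepState: fold per-step data in history order into a single
// snapshot of the wizard's progression.
function aggregateStepState(
  history: NavEntry[],
  dataByStep: { [step: string]: StepData },
): StepData {
  return history.reduce(
    (acc, entry) => Object.assign(acc, dataByStep[entry.step] || {}),
    {} as StepData,
  );
}
```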


&lt;p&gt;Key decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;navHistory&lt;/code&gt; state serves as a record of all transitions within the wizard flow, storing each step's name and the corresponding component rendered. Similar to React Router history, we can manipulate this history by pushing and popping items as users navigate through the wizard.&lt;/li&gt;
&lt;li&gt;We’ll maintain an additional state, &lt;code&gt;wizardDataByStep&lt;/code&gt;, to store incremental changes to the wizard states occurring at each step. This approach allows us to preserve the state that existed in the previous step, facilitating seamless navigation backward in the wizard.&lt;/li&gt;
&lt;li&gt;Calculate &lt;code&gt;aggregatedStepState&lt;/code&gt; on every step by accumulating states from the previous steps recorded in the &lt;code&gt;navHistory&lt;/code&gt;. By aggregating these states, we can maintain a comprehensive snapshot of the wizard's progression, facilitating smooth data management and validation processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then, let’s define a few callbacks:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
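&lt;p&gt;&lt;em&gt;Plain-function sketches of the two callbacks described below; in the component they would wrap state setters, so treat the exact signatures as assumptions:&lt;/em&gt;&lt;/p&gt;

```typescript
// Plain-function sketches of the two callbacks; in the component they
// would wrap useState setters.

type StepData = { [key: string]: unknown };

interface WizardStore {
  navHistory: { step: string }[];
  wizardDataByStep: { [step: string]: StepData };
}

// setStepData: merge new data into the state recorded for one step,
// e.g. EnterCustomerDetailsComponent calling it on every input change.
function setStepData(store: WizardStore, step: string, data: StepData): void {
  store.wizardDataByStep[step] = Object.assign({}, store.wizardDataByStep[step], data);
}

// navigateTo: jump programmatically to a step already in the history,
// e.g. going back to fix validation errors found on submit.
function navigateTo(store: WizardStore, step: string): void {
  const idx = store.navHistory.findIndex((e) => e.step === step);
  if (idx !== -1) {
    store.navHistory = store.navHistory.slice(0, idx + 1);
  }
}
```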


&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;setStepData&lt;/code&gt;: This callback allows clients to enhance the state of the wizard by updating the data associated with a specific step. For example, in our insurance purchase flow, the EnterCustomerDetailsComponent can capture input changes and update the wizard state accordingly, enabling users to progress further.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;navigateTo&lt;/code&gt;: Clients can use this callback to programmatically navigate to a specific step in the wizard. For instance, if the wizard is submitted with errors that need to be addressed in earlier steps, this callback can be invoked to jump back to those steps for correction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And finally we need to propagate these callbacks and some of our states to the step components. I will use React Context API to do this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
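&lt;p&gt;&lt;em&gt;A sketch of the value such a context could carry down to the step components; field names are assumptions:&lt;/em&gt;&lt;/p&gt;

```typescript
// Sketch of the value the wizard's React Context would provide to the
// step components rendered in the body slot.

type StepData = { [key: string]: unknown };

interface WizardContextValue {
  // Accumulated snapshot of all previous steps' data.
  aggregatedStepState: StepData;
  // Lets a step component report its local changes upward.
  setStepData: (step: string, data: StepData) => void;
  // Programmatic navigation, e.g. jumping back to fix an error.
  navigateTo: (step: string) => void;
}

// A step component would consume this via useContext; here we only show
// a conforming value with no-op callbacks.
const exampleValue: WizardContextValue = {
  aggregatedStepState: { product: "car" },
  setStepData: () => undefined,
  navigateTo: () => undefined,
};
```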


&lt;p&gt;&lt;code&gt;WizardSteps&lt;/code&gt; will go through all transitions in &lt;code&gt;navHistory&lt;/code&gt;, accumulate the state collected at each step, and render the currently active step with this accumulated state:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
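&lt;p&gt;&lt;em&gt;The logic-only part of this traversal might look as follows (rendering itself omitted):&lt;/em&gt;&lt;/p&gt;

```typescript
// Logic-only sketch of what WizardSteps does before rendering: walk the
// history, fold the per-step data, and pick the active (last) step.

type StepData = { [key: string]: unknown };

function resolveActiveStep(
  navHistory: { step: string }[],
  wizardDataByStep: { [step: string]: StepData },
): { activeStep: string; state: StepData } {
  const state = navHistory.reduce(
    (acc, entry) => Object.assign(acc, wizardDataByStep[entry.step] || {}),
    {} as StepData,
  );
  const activeStep = navHistory[navHistory.length - 1].step;
  // In the component, views[activeStep] would then be rendered with `state`.
  return { activeStep, state };
}
```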


&lt;h3&gt;
  
  
  Handling navigation
&lt;/h3&gt;

&lt;p&gt;Remember that when building the view layer, we left the &lt;code&gt;onNextClick&lt;/code&gt; and &lt;code&gt;onBackClick&lt;/code&gt; methods unimplemented? Let's fill that gap!&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
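&lt;p&gt;&lt;em&gt;A sketch of how these handlers could resolve transitions against the configuration; the guard and choice handling follow the contract described earlier, while the function shapes are assumptions:&lt;/em&gt;&lt;/p&gt;

```typescript
// Sketch of the Next/Back handlers wired into the wizard footer; assumes
// the StateMachineConfig-style contract sketched earlier.

type WizardState = { [key: string]: unknown };

interface Step {
  next?: string;
  choices?: { when: (s: WizardState) => boolean; next: string }[];
  canAdvance?: (s: WizardState) => boolean;
  isTerminal?: boolean;
}

interface Config { steps: { [name: string]: Step } }

// onNextClick: respect the canAdvance guardrail, then resolve the next
// step either directly or through the first matching choice.
function onNextClick(config: Config, history: string[], state: WizardState): string[] {
  const current = config.steps[history[history.length - 1]];
  if (current.isTerminal) return history; // Submit, not Next
  if (current.canAdvance) {
    if (!current.canAdvance(state)) return history; // blocked by guard
  }
  let next = current.next;
  if (current.choices) {
    const match = current.choices.find((c) => c.when(state));
    if (match) next = match.next;
  }
  return next ? history.concat(next) : history;
}

// onBackClick: simply pop the last transition off the history.
function onBackClick(history: string[]): string[] {
  return history.length > 1 ? history.slice(0, history.length - 1) : history;
}
```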


&lt;h3&gt;
  
  
  Make requirements more complicated
&lt;/h3&gt;

&lt;p&gt;As our fake insurance company expands its business offerings, the CEO decides to extend insurance sales beyond individuals to include local businesses. Fortunately, the product types and coverage details remain unchanged. The only adjustment required is to the first step in the wizard, where business owners must enter details about their company. How can we accommodate this change with minimal alterations to our original architecture?&lt;/p&gt;

&lt;p&gt;The solution is surprisingly simple: we just need to replace the view mapped to the first step in the wizard. By updating the view component associated with the initial step, we can seamlessly incorporate the new requirement without fundamentally altering our existing architecture. This approach demonstrates the flexibility and scalability of our design, allowing for smooth adaptation to evolving business needs without necessitating extensive refactoring or restructuring.&lt;/p&gt;

&lt;p&gt;Here is how simple the code change looks:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
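&lt;p&gt;&lt;em&gt;Conceptually, the change boils down to reusing the step graph and overriding a single entry in the &lt;code&gt;views&lt;/code&gt; map (component names below are placeholders):&lt;/em&gt;&lt;/p&gt;

```typescript
// Reuse the orchestration model wholesale; only the first step's view is
// swapped. Component names are placeholder strings for illustration.

const personalInsuranceConfig = {
  initialStep: "enterCustomerDetails",
  steps: { enterCustomerDetails: { isTerminal: true } }, // trimmed for brevity
  views: { enterCustomerDetails: "EnterCustomerDetailsView" },
};

// Business flow: identical step graph, different first view.
const businessInsuranceConfig = {
  ...personalInsuranceConfig,
  views: {
    ...personalInsuranceConfig.views,
    enterCustomerDetails: "EnterBusinessDetailsView",
  },
};
```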


&lt;h3&gt;
  
  
  Final words
&lt;/h3&gt;

&lt;p&gt;In this article, we’ve explored the concept of state machines in the context of building interactive wizards in React applications. We began by understanding the fundamentals of state machines and their relevance in managing complex user interfaces, particularly in scenarios like multi-step wizards.&lt;/p&gt;

&lt;p&gt;Then we delved into the implementation details, starting with the construction of a solid foundation using React for UI components and Tailwind CSS for styling. We discussed the importance of separating orchestration logic from rendering, which enables greater flexibility and reusability in our codebase.&lt;/p&gt;

&lt;p&gt;Next, we examined the state machine configuration interface, which serves as a blueprint for defining the wizard’s flow and behavior. By delineating the steps, their transitions, and associated views, we establish a clear structure for guiding users through the wizard.&lt;/p&gt;

&lt;p&gt;Building on this foundation, we outlined the orchestration logic, including managing navigation history, storing wizard states, and calculating aggregated step states. Through defined callbacks, we facilitated interaction with the wizard, ensuring a smooth and intuitive user experience.&lt;/p&gt;

&lt;p&gt;Finally, we addressed the scenario of evolving requirements, demonstrating how our architecture can adapt seamlessly to accommodate changes such as expanding the target audience to include local businesses.&lt;/p&gt;

&lt;p&gt;As you embark on your journey to implement state machines in React for your own projects, remember the principles outlined here: prioritize modularity, separation of concerns, and flexibility. By adhering to these principles, you can create robust, maintainable, and user-friendly wizard interfaces that stand the test of time.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/state-machines-in-react/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on March 19, 2024.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>statemachine</category>
      <category>react</category>
    </item>
    <item>
      <title>Micro-frontend Migration Journey — Part 3: Launch</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Tue, 11 Jul 2023 06:14:40 +0000</pubDate>
      <link>https://dev.to/srshifu/micro-frontend-migration-journey-part-3-launch-36c1</link>
      <guid>https://dev.to/srshifu/micro-frontend-migration-journey-part-3-launch-36c1</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_gkFZFpq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/849/0%2Al8iEFW-yUNmglBl4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_gkFZFpq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/849/0%2Al8iEFW-yUNmglBl4.jpg" alt="" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the third installment of the “Micro-frontend Migration Journey” series! In &lt;a href="https://dev.to/srshifu/micro-frontend-migration-journey-part-1-design-3o77"&gt;Part 1&lt;/a&gt;, we explored the fundamental concepts of micro-frontend architecture, as well as the strategies and high-level design implementations for migrating to a micro-frontend architecture. We discussed how breaking down monolithic applications into smaller, independent micro-frontends enables teams to work autonomously, improves development efficiency, and enhances user experience.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/srshifu/micro-frontend-migration-journey-part-2-toolkit-4bb7"&gt;Part 2&lt;/a&gt;, we dived into the implementation details of a micro-frontend architecture, focusing on the Micro-Frontend Toolkit. We discovered a collection of tools and libraries that simplify the development and integration of micro-frontends. The toolkit provided a set of APIs and utilities for dynamic orchestration, streamlined build processes, and shared configurations, empowering developers to create robust and maintainable frontend systems.&lt;/p&gt;

&lt;p&gt;Now, in Part 3 of our journey, titled “Micro-frontend Migration Journey — Part 3: Launch,” we will explore the final phase of the migration process: launching the micro-frontend architecture into production. We will delve into the crucial steps, considerations, and best practices for successfully deploying and managing micro-frontends in a live environment.&lt;/p&gt;

&lt;p&gt;Whether you are already on a micro-frontend migration journey or considering adopting this architectural approach, this article will provide valuable insights and guidance to help you navigate the launch phase and ensure a smooth transition to a micro-frontend ecosystem.&lt;/p&gt;

&lt;p&gt;Let’s embark on this final leg of our journey and uncover the key aspects of launching micro-frontends in a production environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  The finish line
&lt;/h3&gt;

&lt;p&gt;Before diving into the next steps, let’s go through our checklist:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;System Architecture: Ensure you have decided on the design for your system architecture, including how you will orchestrate your micro-frontends and the library you will use for this purpose, if applicable.&lt;/li&gt;
&lt;li&gt;Development Toolkit: Verify that your development toolkit is working effectively. Make sure you can bootstrap new micro-frontend apps with minimal effort. If you have already prepared an onboarding document to educate developers on this process, it’s fantastic!&lt;/li&gt;
&lt;li&gt;Pre-Prod Deployment: Confirm that you have a working prototype deployed to the Pre-Prod environment. This allows you to test and fine-tune your micro-frontends before moving to the production environment.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, it’s time to plan the rollout to the Production environment seamlessly, ensuring a smooth transition for end users. However, we can’t neglect our legacy system — it’s better to have both the legacy system and micro-frontend system deployed and available. We need to have an escape hatch in case of issues, allowing us to redirect users back to a safe place. This requires making some important design decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Migration Strategy
&lt;/h3&gt;

&lt;p&gt;In general, there are two main approaches to implementing the migration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Complete Rewrite:&lt;/strong&gt; This involves a complete code freeze of your legacy app. It carries some risk, as it may leave product features stale for a period of time. However, once the rewrite is done, it becomes less cumbersome to move forward.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strangler Pattern:&lt;/strong&gt; Prioritize migration for the business-critical parts of the legacy app. This approach allows you to provide value incrementally, releasing frequent updates and monitoring progress carefully. It minimizes the chances of freezing feature development and reduces the risk of architectural errors. The Strangler Pattern is particularly useful for developers to evaluate the effectiveness of the initial micro-frontend releases and make any necessary adjustments.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you still need to continue feature work along the architecture migration, it is still achievable but requires a few considerations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep new micro-frontends modular. Encapsulate business functionality into a separate package, whether it’s in a monorepo or a multi-repo setup. External consumers, such as the micro-frontend shell and the legacy system, should only be concerned with integration points. Note that some micro-frontend orchestration libraries require applications to implement special contracts for integration, such as the lifecycle &lt;a href="https://single-spa.js.org/docs/building-applications#registered-application-lifecycle"&gt;interface&lt;/a&gt; of the single-spa library. In this case, you might either have these APIs exported along with the main entry component for the app or have two different entries produced for different types of integration. Completing this encapsulation early on will unblock your teammates to start working on features while you (or the migration team) focus on integration concerns.&lt;/li&gt;
&lt;li&gt;After encapsulating micro-frontend apps, you need to consume them in the legacy system as build-time dependencies. This can be done via static imports or lazy loading using dynamic imports. Your legacy system must be context-aware: when running in the micro-frontend ecosystem, it needs to avoid bundling or lazy-loading this dependency.&lt;/li&gt;
&lt;li&gt;Now, you have an important choice to make. What will happen to the legacy app when the migration is complete (assuming you chose the strangler migration approach)? Will it be completely eliminated, or will it become an application container (shell) focused solely on the orchestration and bootstrapping of other apps? In either case, your legacy app needs to produce bundles that work for both ecosystems (monolith vs. micro-frontend). The simplest approach is to have the legacy bundler output two bundles. You can have separate pipelines that invoke different commands for building (e.g., &lt;code&gt;build&lt;/code&gt; vs. &lt;code&gt;build:mfe&lt;/code&gt;), or you can use a single pipeline if you can tolerate longer build times.&lt;/li&gt;
&lt;/ul&gt;
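&lt;p&gt;The second step above, consuming a micro-frontend as a context-aware build-time dependency, can be sketched with a dynamic import. This is only an illustration: the &lt;code&gt;payments-app&lt;/code&gt; package name and the &lt;code&gt;MFE_CONTEXT&lt;/code&gt; environment flag are hypothetical, not part of any real setup.&lt;/p&gt;

```javascript
// Hypothetical sketch: the legacy app consumes a micro-frontend package as a
// build-time dependency, but skips loading it when running inside the
// micro-frontend ecosystem (where the shell loads the app at runtime instead).
async function resolvePaymentsApp() {
  const isMicrofrontendContext = process.env.MFE_CONTEXT === 'true';
  if (isMicrofrontendContext) {
    // The shell owns loading in the micro-frontend ecosystem; bundle nothing.
    return null;
  }
  // Legacy monolith path: lazy-load the package via a dynamic import.
  return import('payments-app');
}
```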

&lt;p&gt;At the end of the day, your traffic routing picture might go through the following transformations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initial state after migration:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nCEsIpMS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AAHeOvaSmPXrsd2FB.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nCEsIpMS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AAHeOvaSmPXrsd2FB.jpg" alt="" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Team migrates another app into a micro-frontend:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hYYsiaPd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AIhX1W4X610nDtseg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hYYsiaPd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AIhX1W4X610nDtseg.png" alt="" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Team is confident in the new architecture (legacy system is deprecated):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NgoXnA81--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AMQ9DwS3yqDMsiU3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NgoXnA81--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AMQ9DwS3yqDMsiU3k.png" alt="" width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Legacy app has been completely decoupled (strangler migration completed):&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VgQShRfR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Au7pyjtvnFEIJAuw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VgQShRfR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Au7pyjtvnFEIJAuw6.png" alt="" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By considering these migration strategies, you can make informed decisions on how to proceed with the transition to a micro-frontend architecture in the Production environment. Additionally, you may have noticed the presence of a mysterious router component in our architecture. In the next section, we will dive deeper into the various ways you can implement the router and explore the options available to seamlessly navigate between the legacy system and micro-frontends.&lt;/p&gt;

&lt;h3&gt;
  
  
  Router
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, it is important to provide users with an “escape hatch” in case of any issues or misconfigurations within the new micro-frontend architecture. This can be achieved through an application-level router that handles redirection to the old system or provides alternative navigation options.&lt;/p&gt;

&lt;p&gt;The first question to address is where this router should live:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the client side. By utilizing the micro-frontend loader (see &lt;a href="https://dev.to/srshifu/micro-frontend-migration-journey-part-2-toolkit-4bb7"&gt;Part 2&lt;/a&gt;), you can implement a strategy that falls back to the legacy bundle if the metrics of the micro-frontend do not meet service-level objectives (max download latency threshold, max retries to download, etc).&lt;/li&gt;
&lt;li&gt;On the server side. If you have a frontend server, API Gateway, or serverless function in your infrastructure, you can leverage them to handle the routing logic. This enables more centralized control and flexibility in managing the redirection process.&lt;/li&gt;
&lt;li&gt;On the edge. In cases where you employ edge-side composition of your micro-frontends (as explored in &lt;a href="https://dev.to/srshifu/micro-frontend-migration-journey-part-1-design-3o77"&gt;Part 1&lt;/a&gt;), you can utilize the same infrastructure to handle the routing. This approach allows for efficient and scalable routing at the edge, ensuring optimal performance and minimal latency.&lt;/li&gt;
&lt;/ul&gt;
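&lt;p&gt;As a rough illustration of the client-side option, a loader wrapper could race the micro-frontend download against a latency budget and escape to the legacy bundle once the retry budget is exhausted. The function names, thresholds, and defaults below are all assumptions:&lt;/p&gt;

```javascript
// Hypothetical client-side fallback: try the micro-frontend first, fall back
// to the legacy bundle if it breaches the latency SLO or the retry budget.
async function loadWithFallback(loadMicroFrontend, loadLegacyBundle, {
  maxLatencyMs = 3000,
  maxRetries = 2,
} = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    let timer;
    const sloTimeout = new Promise((_, reject) => {
      timer = setTimeout(() => reject(new Error('SLO latency exceeded')), maxLatencyMs);
    });
    try {
      // Whichever settles first wins: the bundle download or the SLO timer.
      return await Promise.race([loadMicroFrontend(), sloTimeout]);
    } catch (err) {
      // Swallow the failure and retry until the budget is exhausted.
    } finally {
      clearTimeout(timer);
    }
  }
  // Escape hatch: serve the legacy bundle instead.
  return loadLegacyBundle();
}
```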

&lt;p&gt;Next, you need to determine the routing strategy for your micro-frontend architecture. Here are some options to consider:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Direct URL. This is the simplest approach. By default, incoming traffic will be redirected to the page fetching the micro-frontend shell script. You can reserve a special URL path that allows users to access the old legacy system. This pattern is called Multi-Page Architecture (the traditional approach web developers used prior to the SPA era). In the user interface, you can implement a banner or notification to announce the change and provide a link to the old system. Additionally, the UI can offer an option to remember the user’s choice by persisting it in client storage (e.g., local storage) or a backend database. In the diagrams below, I will use the generalized word “Server” to aggregate all types of server infrastructure (SSR server, API Gateway, serverless, etc.).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FMc0s5wQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AYBXI0eX6IMcfb2fY.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FMc0s5wQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AYBXI0eX6IMcfb2fY.png" alt="" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Cookies. Cookies have been a longstanding method for persisting data on the client and sharing it with the server. You can utilize this shared knowledge to implement your routing logic. By setting a specific cookie value, you can control the routing behavior of your micro-frontend system. For example, you can use a cookie to store the user’s preference for accessing the new system or the old legacy system. Based on the cookie value, you can redirect the user accordingly and ensure a seamless transition between the two systems.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aQZhH7L7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AbNy3DSm6aAsyj8C9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aQZhH7L7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AbNy3DSm6aAsyj8C9.png" alt="" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Traffic Splitting. This approach is similar to A/B testing. You can provide access to the new micro-frontend system to only a fraction of users. Using the user’s ID or IP address as a distinguishing factor, you can generate a hash from it. Then, after normalizing the hash value (taking the modulo), if the resulting value is less than a certain threshold (say X, representing the percentage of users you want to grant access to), you redirect the user to the new system. The remaining users will continue to access the legacy system. It’s important to note that this approach doesn’t give the selected users an option to revert to the old system, so it may need to be combined with the previous routing strategies to accommodate all user scenarios.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--w24e5apa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AMLtX0Ntlz5cwBYgI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--w24e5apa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AMLtX0Ntlz5cwBYgI.png" alt="" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Circuit Breaker. The circuit breaker approach is a data-driven strategy that requires centralized metrics collection from clients. These metrics are then used to determine the appropriate routing for future user requests. Be cautious about making on-demand requests to fetch these metrics, as this can impact page load performance. Instead, consider caching aggregated metrics at regular intervals and updating them in the background. Metrics such as the ratio of failed requests to total requests, average page load time, and infrastructure availability (CDN, blob storage) can be used. While it is possible to implement this approach client-side (using metrics collected in the field for each individual client), it can be challenging to make decisions without a holistic view of the entire system.&lt;/li&gt;
&lt;/ol&gt;
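&lt;p&gt;A circuit-breaker decision based on periodically refreshed aggregate metrics might be sketched like this; the metric names and thresholds are purely illustrative:&lt;/p&gt;

```javascript
// Hypothetical circuit breaker fed by cached, aggregated client metrics.
function createCircuitBreaker({ maxErrorRate = 0.05, maxAvgLoadMs = 4000 } = {}) {
  let snapshot = { errorRate: 0, avgLoadMs: 0, cdnAvailable: true };

  return {
    // Refreshed in the background at a regular interval, never on-demand
    // during page load, so metrics collection does not hurt performance.
    updateMetrics(next) {
      snapshot = { ...snapshot, ...next };
    },
    route() {
      const healthy =
        snapshot.cdnAvailable &&
        snapshot.errorRate <= maxErrorRate &&
        snapshot.avgLoadMs <= maxAvgLoadMs;
      return healthy ? 'mfe-shell' : 'legacy-app';
    },
  };
}
```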

&lt;h3&gt;
  
  
  Is that all?
&lt;/h3&gt;

&lt;p&gt;No! Many people consider deploying the system to be the last step in their launch plan, but this is wrong! A micro-frontend architecture is very dynamic and has a lot of moving parts that need to be integrated with each other and live in harmony. It is crucial to have observability that provides us, developers, visibility into how the system is functioning in Production (and not only on our local machines!). Collecting metrics from clients allows for a comprehensive view of unhandled and handled errors, stack traces, and &lt;a href="https://web.dev/vitals/"&gt;web vitals&lt;/a&gt;. It is also important to provide isolated metrics views and alarms for each micro-frontend app, enabling the owning team to respond quickly to any issues. Implementing canary end-to-end tests in the production and pre-production environments, simulating user journeys, will help identify potential integration issues early on.&lt;/p&gt;

&lt;p&gt;Indeed, observability is a critical aspect that should be considered early when building a micro-frontend architecture. In a distributed architecture, relying solely on customer feedback to identify and address issues is not sufficient. Observability tools and practices give developers real-time visibility into the performance, health, and behavior of the micro-frontends, enabling them to detect and resolve potential issues before customers even notice. By taking this proactive approach, teams can deliver a high-quality user experience and ensure the smooth functioning of the micro-frontend architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Closing notes
&lt;/h3&gt;

&lt;p&gt;In this article series, we have explored the journey of migrating to a micro-frontend architecture. We started by understanding the motivations behind adopting micro-frontends and the benefits they offer in terms of scalability, maintainability, and independent development. We then delved into the implementation details, discussing key concepts, tools, and strategies for successfully transitioning to a micro-frontend architecture.&lt;/p&gt;

&lt;p&gt;From designing the system architecture and building the micro-frontend development toolkit to managing the deployment process and implementing effective routing strategies, we have covered important aspects of the migration journey. We also highlighted the significance of observability and the role it plays in ensuring the smooth operation of the micro-frontend architecture.&lt;/p&gt;

&lt;p&gt;By embracing micro-frontends and following the best practices outlined in this article series, development teams can create modular, scalable, and resilient frontend systems. Micro-frontends enable teams to work independently, leverage different technologies, and deliver features more rapidly. With careful planning, thoughtful design, and a focus on observability, organizations can successfully navigate the migration to a micro-frontend architecture and unlock the full potential of this powerful approach.&lt;/p&gt;

&lt;p&gt;Embrace the future of frontend development with micro-frontends and embark on your own migration journey to build robust, flexible, and user-centric applications.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/micro-frontend-migration-journey-part-3/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on July 11, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>frontendarchitecture</category>
      <category>microfrontends</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Micro-frontend Migration Journey — Part 2: Toolkit</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Tue, 27 Jun 2023 00:00:38 +0000</pubDate>
      <link>https://dev.to/srshifu/micro-frontend-migration-journey-part-2-toolkit-4bb7</link>
      <guid>https://dev.to/srshifu/micro-frontend-migration-journey-part-2-toolkit-4bb7</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oy9SOHBu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Ajujr1Rzl0RrS0gq0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oy9SOHBu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2Ajujr1Rzl0RrS0gq0.jpg" alt="" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to Part 2 of the Micro-frontend Migration Journey series! In the previous &lt;a href="https://dev.to/srshifu/micro-frontend-migration-journey-part-1-design-3o77"&gt;part&lt;/a&gt;, we discussed the strategies and high-level design implementations for migrating to a micro-frontend architecture. We also explored different frameworks we can use for client-side orchestration. Now, it’s time to take the next step on our journey and focus on building the toolkit that will support our migration and future micro-frontend endeavors.&lt;/p&gt;

&lt;p&gt;Creating a robust toolkit is crucial for a successful migration of existing apps and the smooth adoption of new micro-frontends in the future. In this article, we will dive into building an opinionated and batteries-included toolset for efficient bootstrapping and enhancement of micro-frontend architecture. From bundlers and module loaders to testing frameworks and build pipelines, we will explore the tools and technologies that will empower you to embrace the micro-frontend paradigm effectively.&lt;/p&gt;

&lt;p&gt;(Note: As in the previous article, please be aware that while I share my personal experiences, I am not able to disclose any proprietary or internal details of tools, technologies, or specific processes. The focus will be on general concepts and strategies to provide actionable insights.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment kit utility
&lt;/h3&gt;

&lt;p&gt;To enhance deployability and isolation, it is essential for every micro-frontend application to deploy its asset bundles through its own pipeline. As we explored in &lt;a href="https://dev.to/srshifu/micro-frontend-migration-journey-part-1-design-3o77"&gt;Part 1&lt;/a&gt; of this article series, each app must produce a build with a unified format that the deployment pipeline can comprehend. To streamline this process and minimize code duplication, we require a library that provides an all-in-one solution, exposing a single API for developers to utilize.&lt;/p&gt;

&lt;p&gt;I have previously discussed the benefits of employing a declarative Infrastructure-as-Code (IaC) approach to manage and provision system infrastructure through definition files. AWS CDK can be leveraged to define the components of our deployment pipelines.&lt;/p&gt;

&lt;p&gt;Below is a minimal interface that our utility can expose:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export interface PipelineProps {
    app: App;
    pipeline: DeploymentPipeline;
    packageName: string;
    bucketName: string;
    artifactsRoot: string;
}

export type buildPipeline = (props: PipelineProps) =&amp;gt; void;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;app &lt;/strong&gt; — reference to CDK application&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pipeline &lt;/strong&gt; — definition of CDK pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;packageName &lt;/strong&gt; — source package name of micro-frontend app&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;bucketName &lt;/strong&gt; — name of the S3 bucket where app bundle should be deployed to&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;artifactsRoot &lt;/strong&gt; — root folder in the source package to discover the bundle&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;buildPipeline&lt;/code&gt; function can create a &lt;code&gt;MicrofrontendStack&lt;/code&gt; that performs the following tasks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export class MicrofrontendStack extends DeploymentStack {
   constructor(parent: App, id: string, env: DeploymentEnvironment, props: MicrofrontendStackProps) {
         super(...);
         const bucket = this.createSecureS3Bucket(useS3PublicRead, bucketName);
         const artifacts = this.pullArtifacts(packageName, artifactsRoot);
         const originPath = this.deployArtifacts(bucket, artifacts, shouldCompressAssets);
         this.createCloudFrontDistribution(bucket, originPath);
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s examine the steps involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We create a secure S3 bucket with no public read access and CORS rules that only allow access from the &lt;code&gt;*.amazon.com&lt;/code&gt; domain, where our Amazon CloudFront origin will reside. We can also define lifecycle rules for the bucket to retain only the last N deployments (the number of versions of the manifest file plus the number of directories for static assets).&lt;/li&gt;
&lt;li&gt;We retrieve artifacts from the &lt;code&gt;artifactsRoot&lt;/code&gt;, which represents the build directory containing the &lt;code&gt;manifest.json&lt;/code&gt; file and the folder with static assets.&lt;/li&gt;
&lt;li&gt;We deploy the artifacts to the S3 bucket. We need to create two instances of &lt;code&gt;BucketDeployment&lt;/code&gt;: one for deploying the &lt;code&gt;manifest.json&lt;/code&gt; file and another for deploying the directory with the relevant assets. It is crucial to define different caching strategies for each of them. The manifest file should never be cached, while the assets prefix can have a meaningful &lt;code&gt;max-age&lt;/code&gt; cache. Don't forget to enable versioning in the S3 bucket, as the manifest file will always be located in the root of the bucket.&lt;/li&gt;
&lt;li&gt;Lastly, we create a CloudFront distribution that targets the S3 bucket location specified by the &lt;code&gt;originPath&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
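&lt;p&gt;The two caching strategies mentioned above boil down to two &lt;code&gt;Cache-Control&lt;/code&gt; policies. A small helper makes the intent explicit; the exact header values are illustrative, not prescriptive:&lt;/p&gt;

```javascript
// Hypothetical helper: the manifest is the mutable pointer to the latest
// deployment and must never be cached, while hash-named assets are immutable
// and can be cached aggressively with a long max-age.
function cacheControlFor(artifactPath, maxAgeSeconds = 31536000) {
  if (artifactPath.endsWith('manifest.json')) {
    return 'no-cache, no-store, must-revalidate';
  }
  return `public, max-age=${maxAgeSeconds}, immutable`;
}
```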

&lt;p&gt;Imagine the simplicity and convenience of creating pipelines for every app in your micro-frontend architecture. With our toolkit library, all you need to do is call the &lt;code&gt;buildPipeline&lt;/code&gt; API, and the rest is taken care of. It's that straightforward!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;buildPipeline({
    app,
    pipeline,
    packageName: 'PaymentsAssets',
    bucketName: 'payment-app-assets',
    artifactsRoot: 'dist'
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Gone are the days of manually configuring and setting up deployment pipelines for each micro-frontend application. Our utility library empowers developers to streamline the process and reduce repetitive tasks. By abstracting away the complexities, you can focus on what matters most: building exceptional micro-frontends.&lt;/p&gt;

&lt;h3&gt;
  
  
  Micro-frontend loader
&lt;/h3&gt;

&lt;p&gt;The micro-frontend loader plays a vital role in the micro-frontend ecosystem. It is responsible for the dynamic downloading and bootstrapping of distributed applications within the browser’s runtime. This utility exposes a single API that can be utilized by any micro-frontend orchestration library, such as single-spa, to resolve references to target applications.&lt;/p&gt;

&lt;p&gt;Here is a simplified implementation of the API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const lifeCyclesCache= = {};

export const loadMicroFrontend = (
    microfrontendKey,
    originPath,
    entryFileName
) =&amp;gt; {
    const cacheKey = `${microfrontendKey}/${entryFileName}`;
    if(lifeCyclesCache[cacheKey]) return lifeCyclesCache[cacheKey];

   lifeCyclesCache[cacheKey] =
        downloadBundle(microfrontendKey, originPath, entryFileName);
    return lifeCyclesCache[cacheKey];
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Inputs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;microfrontendKey&lt;/code&gt; is a unique identifier for the application, used for registering it in the global window scope (more on this in the next section).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;originPath&lt;/code&gt; is the base URL to access the application's manifest file (typically the CloudFront origin URL).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;entryFileName&lt;/code&gt; is the path to the main entry file of the application (e.g., &lt;code&gt;index.js&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main logic resides within the &lt;code&gt;downloadBundle&lt;/code&gt; method:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;If the application bundle has been loaded before, no action is required. The loader will retrieve it from the global window scope.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Otherwise, it attempts to discover the corresponding manifest file. There are two scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the application has been loaded before, the manifest metadata will be stored in the cache (browser memory or local storage). If found, the loader uses the “stale-while-revalidate” technique: it sends a network request to fetch the latest manifest file from the CDN while passing the entry file location from the manifest metadata downstream. This step ensures resilience to failures, as even in the worst-case scenario, the user will still use the previous version of the bundle, and the next page refresh will resolve the latest manifest.&lt;/li&gt;
&lt;li&gt;If it’s the first load of the application (no manifest in the cache), the loader attempts a network request to the CDN to fetch it. If successful, it saves the manifest to the cache and the browser’s local storage with a meaningful TTL (time to live) value. If unsuccessful, the load fails.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download the bundle. The loader concatenates &lt;code&gt;originPath&lt;/code&gt; and the entry file path received from the manifest, using the result as the source of a &lt;code&gt;script&lt;/code&gt; HTML tag that downloads the bundle:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const loadScript = (originPath, manifest, entryFileName) =&amp;gt; {
    return new Promise((resolve, reject) =&amp;gt; {
        const scriptTag = document.createElement('script');
        const src = `${originPath}/${manifest[entryFileName]}`;
        scriptTag.async = true;
        scriptTag.type = 'text/javascript';
        scriptTag.crossOrigin = 'anonymous';
        scriptTag.onerror = () =&amp;gt; {
            reject(`Failed to load ${src}`);
        };
        scriptTag.onload = () =&amp;gt; {
            const bundle = window[manifest.microfrontendKey][entryFileName];
            resolve(bundle);
        };
        document.body.appendChild(scriptTag);
        scriptTag.src = src;
    });
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
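&lt;p&gt;Tying the manifest-resolution steps together, the stale-while-revalidate flow inside &lt;code&gt;downloadBundle&lt;/code&gt; could look roughly like this. The &lt;code&gt;cache&lt;/code&gt; object and &lt;code&gt;fetchManifest&lt;/code&gt; helper are assumptions for the sketch, not the article’s real implementation:&lt;/p&gt;

```javascript
// Hypothetical sketch of the manifest-resolution step.
// fetchManifest(originPath) is assumed to return a Promise of the manifest.
async function resolveManifest(cache, cacheKey, fetchManifest, originPath) {
  const cached = cache.get(cacheKey);
  if (cached) {
    // Stale-while-revalidate: serve the cached manifest immediately and
    // refresh it in the background for the next page load.
    fetchManifest(originPath)
      .then(fresh => cache.set(cacheKey, fresh))
      .catch(() => { /* keep serving the stale copy on network failure */ });
    return cached;
  }
  // First load: the network request must succeed, or the whole load fails.
  const manifest = await fetchManifest(originPath);
  cache.set(cacheKey, manifest);
  return manifest;
}
```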



&lt;p&gt;Here’s an example of how this loader can be used in conjunction with single-spa library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {registerApplication} from 'single-spa';
import {loadMicroFrontend, getOriginURL, PAYMENT_APP_KEY, ORDERS_APP_KEY} from 'microfrontend-sdk';

registerApplication(
        `${ORDERS_APP_KEY}-app`,
        () =&amp;gt; loadMicroFrontend(ORDERS_APP_KEY, getOriginURL(ORDERS_APP_KEY), 'index.js'),
        (location) =&amp;gt; /\/orders.*/.test(location.pathname),
        {
            domElementGetter: () =&amp;gt; document.getElementById('spa-placeholder')
        });

 registerApplication(
        `${PAYMENT_APP_KEY}-app`,
        () =&amp;gt; loadMicroFrontend(PAYMENT_APP_KEY, getOriginURL(PAYMENT_APP_KEY), 'app.js'),
        (location) =&amp;gt; /\/payments.*/.test(location.pathname),
        {
            domElementGetter: () =&amp;gt; document.getElementById('app-placeholder')
        });

 registerApplication(
        `${PAYMENT_APP_KEY}-alt-app`,
        () =&amp;gt; loadMicroFrontend(PAYMENT_APP_KEY, getOriginURL(PAYMENT_APP_KEY), 'alt.app.js'),
        (location) =&amp;gt; /\/alt\/payments.*/.test(location.pathname),
        {
            domElementGetter: () =&amp;gt; document.getElementById('app-placeholder')
        });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we demonstrate the combined usage of the micro-frontend loader and the single-spa library. By invoking the &lt;code&gt;registerApplication&lt;/code&gt; function, we register three applications (one entry for orders app and two entries for payments app). To trigger the loading process for each micro-frontend, we make use of the &lt;code&gt;loadMicroFrontend&lt;/code&gt; function, passing the appropriate parameters including the &lt;code&gt;microfrontendKey&lt;/code&gt;, &lt;code&gt;originPath&lt;/code&gt;, and &lt;code&gt;entryFileName&lt;/code&gt;. The loader ensures the dynamic loading and bootstrapping of the micro-frontends based on the specified conditions.&lt;/p&gt;

&lt;p&gt;The micro-frontend loader greatly simplifies the process of integrating micro-frontends into our application. It offers a unified API that resolves application references and manages the download and bootstrap operations for the required bundles. Although the &lt;code&gt;loadMicroFrontend&lt;/code&gt; API is primarily used within the container (shell) application, it is crucial to share the micro-frontend keys among the tenant applications living in the container. This enables the app bundlers to expose the individual apps to the global window scope properly, facilitating seamless access and retrieval of bundles by the loader.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bundler
&lt;/h3&gt;

&lt;p&gt;To ensure a unified build process across all micro-frontends within the container application, it is essential to have a shared configuration that every app can import and enhance as needed. Here is an example of a minimalistic Webpack configuration that can be easily shared:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = ({vendorVersion}) =&amp;gt; {
   const {exclude, include, dependencies} = getVendorConfigByVersion(vendorVersion);
  return {
    externals: [
      dependencies.externals,
      function (_, request, callback) {
        if (exclude &amp;amp;&amp;amp; checkIfPathMatches(request, exclude) || include &amp;amp;&amp;amp; !checkIfPathMatches(request, include)) {
          return callback();
        }
        const pattern = dependencies.patterns?.find(({ regex }) =&amp;gt; regex.test(request));
        if (pattern) {
          const exposedImport = pattern.handler(request);
          return callback(null, {
            root: exposedImport,
            commonjs: exposedImport,
            commonjs2: exposedImport,
            amd: exposedImport,
          });
        }
        callback();
      },
    ],
  };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration allows us to control the versioning of dependencies, enabling each app to have its own vendor bundle. It caters to various use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some apps may use different UI rendering frameworks, such as Angular or React, with their own sets of transitive dependencies (this is one of the beauties of a micro-frontend architecture). For example:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ 
   'react-1.0': {
      externals: {
        "react": "react",
        "react-dom": "reactDom"
      }
   },
  'angular-1.0': {
      patterns: [{
        regex: /^@angular\//,
        handler(path) {
           return ['ng', camelCase(path.replace(/^@angular\//, ''))]
        }
     }]
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Suppose all your apps use React.js, but you want to use the latest version in a newly created micro-frontend app. You can define the following configuration:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ 
   'react-16.0': {
      externals: {
        "react": "react",
        "react-dom": "reactDom"
      }
   },
   'react-18.0': {
      externals: {
        "react": "react@18",
        "react-dom": "reactDom@18"
      }
   },
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, managing this config might become tricky if you want to include another library that has React as an externalized dependency (say, a UI components library); React will not be happy running two different versions in the same app. If you have control over the library, it is possible to create a new version that aligns with the desired dependencies. But in cases where the UI library is owned by a different team or organization (e.g., open-source), you might need to ensure that the library exposes a build that does not externalize its React imports.&lt;/p&gt;

&lt;p&gt;Additionally, the shared Webpack config can include other features such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A plugin to generate a manifest file and unified output. The &lt;code&gt;appName&lt;/code&gt;, which represents the micro-frontend key mentioned earlier, allows direct access to each micro-frontend app via the window scope (e.g., &lt;code&gt;window.PaymentsApp.index&lt;/code&gt;). This quick lookup mechanism helps our micro-frontend loader resolve app assets without needing to make network round trips.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { WebpackManifestPlugin } = require('webpack-manifest-plugin');

return {
   entry: {
     index: path.join(sourcePath, `index.tsx`)
   },
   output: {
      libraryTarget: 'umd',
      library: [`${appName}`, '[name]'],
      filename: '[hash]/[name].js',
      path: 'dist',
   },

   plugins: [      
      new WebpackManifestPlugin({
        fileName: 'manifest.json',
        seed: Date.now(),
        publicPath: publicPath,
      }),

   ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
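&lt;p&gt;Once a UMD bundle built with this output config executes, the loader can resolve the app's entry straight from the window scope; a minimal sketch (the &lt;code&gt;PaymentsApp&lt;/code&gt; name stands in for &lt;code&gt;appName&lt;/code&gt;):&lt;/p&gt;

```javascript
// Stand-in for the browser's `window`: after the UMD bundle runs,
// the `library` output option exposes window[appName][entryName].
const windowScope = {};
windowScope.PaymentsApp = { index: { mount: (el) => `mounted into ${el}` } };

// Quick lookup with no network round trip.
function resolveEntry(scope, appName, entry = 'index') {
  const mod = scope[appName] && scope[appName][entry];
  if (!mod) throw new Error(`${appName}.${entry} not found on window scope`);
  return mod;
}
```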



&lt;ul&gt;
&lt;li&gt;A plugin to generate an import map for vendor dependencies. While this &lt;a href="https://github.com/tipalti/webpack-import-map-plugin"&gt;example&lt;/a&gt; is provided for inspiration, it may require a custom plugin to handle bundle versioning effectively, especially when dealing with import-map scopes. For the case where you have to maintain multiple versions of React (see the example above), the import map configuration might look like this:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "imports": {
    "react": "https://unpkg.com/react@16/react.production.min.js",
    "react@16": "https://unpkg.com/react@16/react.production.min.js",
    "react@18": "https://unpkg.com/react@18/react.production.min.js"
  }
}

// the same example using the scopes
{
  "imports": {
    "react": "https://unpkg.com/react@16/react.production.min.js"
  },
  "scopes": {
   // activated when trying to resolve react external dependency from https://mywebsite.com/my-new-unicorn-app URL
    "/my-new-unicorn-app/": { 
      "react": "https://unpkg.com/react@18/react.production.min.js"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;A shared set of rules to handle different file types such as CSS, SCSS, JS, TS, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ideally, the provided configuration should require minimal enhancement by the consumer. This ensures that every tenant in your micro-frontend architecture follows the same build pattern, promoting consistency and simplifying maintenance.&lt;/p&gt;
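&lt;p&gt;To illustrate what minimal enhancement can look like, a tenant's config might be a thin wrapper over a shared factory (the &lt;code&gt;createSharedConfig&lt;/code&gt; name and its shape are hypothetical):&lt;/p&gt;

```javascript
// Hypothetical shared factory: the tenant supplies only its identity,
// the factory fills in the common build conventions shown earlier.
function createSharedConfig({ appName, sourcePath }) {
  return {
    name: appName,
    entry: { index: `${sourcePath}/index.tsx` },
    output: { libraryTarget: 'umd', library: [appName, '[name]'] },
  };
}

// A tenant's entire webpack.config.js could reduce to this call.
const config = createSharedConfig({ appName: 'PaymentsApp', sourcePath: './src' });
```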

&lt;h3&gt;
  
  
  Distributed Dev Server
&lt;/h3&gt;

&lt;p&gt;Even though you might never need to run more than one app on your local machine, sometimes you need to verify that cross-app integration works as expected before deploying to pre-prod and prod environments. One option is to run every app in its own terminal, but that is not the best developer experience (I call this “command hell”: you need to remember which commands launch which app). What you can do instead is provide CLI commands that start micro-frontends based on a configuration.&lt;/p&gt;

&lt;p&gt;Here is a simplified example of how it can be done using &lt;code&gt;webpack&lt;/code&gt; CLI and &lt;code&gt;express&lt;/code&gt; middleware:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;function startApp(config) {
    const compiler = webpack(config);

    // https://github.com/webpack-contrib/webpack-hot-middleware
    app.use(
        webpackDevMiddleware(compiler, {
            publicPath: config.output.publicPath,
        })
    );

    // 
    app.use(
        webpackHotMiddleware(compiler, {
             name: config.name,
             path: `/${config.name}_hot`,
             heartbeat: config.updateFreq || 2000,
         })
    );
}

function start(config) {
    const { port, containerConfig, apps } = config;
    const app = express();
    // start container
    startApp(containerConfig);

    // start micro-apps you need
    apps.forEach(app =&amp;gt; {
         // here you might want to resolve the config dynamically based on the app directory and fallback to some defaults
         const appConfig = resolveWebpackConfig(app);
         startApp(appConfig);
    });

     // add more middlewares you want

    // this will start HTTP server listening on port you provided (investigate how to do HTTPS)
    app.listen(port, () =&amp;gt; {
        console.log('Started');
    });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
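&lt;p&gt;The &lt;code&gt;config&lt;/code&gt; object passed to &lt;code&gt;start&lt;/code&gt; above might look like this (the port and app names are illustrative):&lt;/p&gt;

```javascript
// Illustrative shape of the dev-server config consumed by start():
// one container config plus the micro-apps you actively work on.
const devServerConfig = {
  port: 3000,
  containerConfig: { name: 'container', output: { publicPath: '/' } },
  apps: ['payments-app', 'billing-app'],
};
```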



&lt;h3&gt;
  
  
  Shared configuration
&lt;/h3&gt;

&lt;p&gt;In your micro-frontend architecture, it may be advantageous to provide shared configuration options that teams can leverage as best practice sources. While this is optional and depends on your organizational structure, it can promote consistency across the system. Here are some examples of shared configuration options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browserslist config: I highly recommend making this a mandatory config shared with all tenants in your architecture, since for the end user your system is a single look-and-feel UI, and it makes sense for that UI to support the same list of browsers everywhere. You can look at it as a sort of SLA your users should be aware of (see an example for &lt;a href="https://repost.aws/knowledge-center/browsers-management-console"&gt;AWS Console&lt;/a&gt;). Here is the link to the tutorial on how to create a shareable Browserslist config: &lt;a href="https://github.com/browserslist/browserslist#shareable-configs"&gt;https://github.com/browserslist/browserslist#shareable-configs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;ESLint Config: Sharing an ESLint config helps maintain code consistency and reduces the likelihood of bugs. You can create your own shareable ESLint config that teams can use as a baseline. Refer to the ESLint documentation for instructions on creating and using shareable configs: &lt;a href="https://eslint.org/docs/latest/extend/shareable-configs"&gt;https://eslint.org/docs/latest/extend/shareable-configs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Prettier Config: To ensure consistent code formatting across different projects, a shared Prettier config can be provided. This helps maintain a unified style across the codebase.&lt;/li&gt;
&lt;li&gt;Jest Config: For testing standards, you can define a shared Jest config. Jest has recently introduced project-level &lt;a href="https://jestjs.io/docs/configuration#projects-arraystring--projectconfig"&gt;configuration&lt;/a&gt;, which is especially useful if you are using a monorepo for your code. This allows you to define different runner configurations for each app.&lt;/li&gt;
&lt;/ul&gt;
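&lt;p&gt;For instance, a shareable Browserslist config is just a package that exports the query list (the package name below is hypothetical):&lt;/p&gt;

```javascript
// browserslist-config-mycompany/index.js (hypothetical shared package):
// its only job is to export the supported-browser query list.
const browsers = [
  'last 2 Chrome versions',
  'last 2 Firefox versions',
  'Safari >= 14',
];

module.exports = browsers;
```

&lt;p&gt;Each tenant then opts in from its package.json with &lt;code&gt;"browserslist": ["extends browserslist-config-mycompany"]&lt;/code&gt;.&lt;/p&gt;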

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In Part 2 of this article, we have explored the implementation details of a micro-frontend architecture and discussed the key components and tools involved. The Micro-Frontend Toolkit, with its comprehensive set of APIs and utilities, simplifies the development and integration of micro-frontends. By leveraging the toolkit, developers can efficiently orchestrate and manage their micro-frontends, ensuring a seamless user experience and enabling independent development and deployment.&lt;/p&gt;

&lt;p&gt;The micro-frontend loader, a vital component of the architecture, handles the downloading and bootstrapping of distributed applications in the browser’s runtime. Its caching mechanisms, network request strategies, and resilience to failures contribute to optimized loading and enhanced reliability. This results in improved performance and a robust user interface.&lt;/p&gt;

&lt;p&gt;The bundler, exemplified through the Webpack configuration, provides a shared build process for all micro-frontends. It allows for efficient versioning of dependencies, controls the externalization of libraries, and generates manifest files and import maps. This standardized approach streamlines the development workflow, promotes consistency, and facilitates maintenance across multiple micro-frontends.&lt;/p&gt;

&lt;p&gt;Furthermore, we highlighted the importance of shared configurations in a micro-frontend architecture. By establishing shared configurations such as Browserslist, ESLint, Prettier, and Jest, organizations can enforce coding standards, ensure consistent code formatting, and enhance testing practices. These shared configurations contribute to code quality, collaboration, and maintainability.&lt;/p&gt;

&lt;p&gt;Finally, we discussed the local development CLI (distributed dev server), which provides a convenient and efficient way to run and test micro-frontends during local development. By utilizing CLI commands, developers can easily start and manage individual micro-frontends, simplifying the testing and integration process.&lt;/p&gt;

&lt;p&gt;By leveraging these tools, utilities, and shared configurations, organizations can successfully implement and manage a micro-frontend architecture. The modular and scalable nature of micro-frontends, combined with the capabilities offered by the Micro-Frontend Toolkit, empowers development teams to build complex frontend systems with greater flexibility, maintainability, and autonomy.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/micro-frontend-migration-journey-part-2/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on June 27, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>frontendarchitecture</category>
      <category>frontend</category>
      <category>microfrontends</category>
    </item>
    <item>
      <title>Micro-frontend Migration Journey — Part 1: Design</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Thu, 25 May 2023 19:53:51 +0000</pubDate>
      <link>https://dev.to/srshifu/micro-frontend-migration-journey-part-1-design-3o77</link>
      <guid>https://dev.to/srshifu/micro-frontend-migration-journey-part-1-design-3o77</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QDBDuhYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hz0o4nozmgphitaceyej.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QDBDuhYw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hz0o4nozmgphitaceyej.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today’s fast-paced digital world, where agility and scalability are crucial, businesses are constantly seeking ways to improve the performance and maintainability of their web applications. One popular approach to achieving these goals is migrating from a monolithic architecture to a distributed one (or micro-frontend). This article series, “Micro-frontend Migration Journey,” shares my personal experience of undertaking such a migration during my time at AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DISCLAIMER&lt;/strong&gt; : Before we begin, it’s important to note that while this article shares my personal experience, I am not able to disclose any proprietary or internal details of tools, technologies, or specific processes at AWS or any other organization. I am committed to respecting legal obligations and ensuring that this article focuses solely on the general concepts and strategies involved in the micro-frontend migration journey. The purpose is to provide insights and lessons learned that can be applicable in a broader context, without divulging any confidential information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Motivation for Migration
&lt;/h3&gt;

&lt;p&gt;I learned about micro-frontends (I guess as many of you) from the &lt;a href="https://martinfowler.com/articles/micro-frontends.html"&gt;article&lt;/a&gt; on Martin Fowler’s blog. It presented different ways of composing micro-frontend architecture in a framework-agnostic manner. As I delved deeper into the subject, I realized that our existing monolithic architecture was becoming a significant bottleneck for our team’s productivity and impeding the overall performance of our application.&lt;/p&gt;

&lt;p&gt;One of the key factors that pushed me towards considering a migration was the increasing bundle size of our application. After conducting a thorough bundle analysis in the summer of 2020, I discovered that since its initial launch in early 2019, the bundle size (gzipped) had grown from 450KB to 800KB (almost 4MB parsed), nearly twice the original size. Considering the success of our service and predicting its continued growth, it was clear that this trend would persist, further impacting the performance and maintainability of our application.&lt;/p&gt;

&lt;p&gt;While I was enthusiastic about the concept of micro-frontends, I also recognized that we were not yet ready to adopt them due to specific challenges we faced:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Small Organizational Structure: At the time of my analysis, our organization was relatively small, and I was the only full-time frontend engineer on the team. Migrating to a micro-frontend architecture required a significant investment in terms of organizational structure and operational foundation. It was crucial to have a mature structure that could effectively handle the distributed architecture and reflect the dependencies between different frontend components.&lt;/li&gt;
&lt;li&gt;Limited Business Domain: Although micro-frontends can be split based on bounded contexts and business capabilities (learn more in the &lt;a href="https://thesametech.com/domain-driven-design-in-micro-frontend-architecture/"&gt;“Domain-Driven Design in micro-frontend architecture”&lt;/a&gt; post), our core business domain was not extensive enough to justify a complete decoupling into multiple micro-frontends. However, there were visible boundaries within the application that made sense to carve out and transition to a distributed architecture.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Considering these factors, I realized that a gradual approach was necessary. Rather than a complete migration to micro-frontends, I aimed to identify specific areas within our application that could benefit from a distributed architecture. This would allow us to address performance and scalability concerns without disrupting the overall organizational structure or compromising the integrity of our business domain. It also would give us some time to grow the team and observe business directions.&lt;/p&gt;

&lt;p&gt;Please note that if the app’s performance (bundle size) problem is the only one you want to tackle, micro-frontend architecture might not be the best idea. It is better to start with a distributed monolith that leverages lazy loading (dynamic imports) instead. In fact, a distributed monolith handles the bundle-size issue more gracefully, because a micro-frontend architecture is very likely to have some shared code that is not separated into vendor chunks and ends up built into each application bundle (one of the cons of such a distributed architecture: you need to make trade-offs about what to share, when, and how). However, a distributed monolith will not scale as well as micro-frontends. When your organization grows fast, your team will likely grow at the same pace, and there will be an essential need to split the code base into areas of ownership controlled by different teams. Each team will need release cycles independent of the others, and each team will appreciate a code base focused purely on their domain that builds fast (code isolation -&amp;gt; better maintainability and less code to maintain and build -&amp;gt; better testability and fewer tests to maintain and execute).&lt;/p&gt;
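&lt;p&gt;The lazy-loading alternative mentioned above boils down to dynamic imports, which give the bundler a natural split point; a minimal sketch (the feature registry here is a stand-in for real &lt;code&gt;import()&lt;/code&gt; calls):&lt;/p&gt;

```javascript
// In a distributed monolith, each feature area becomes its own chunk:
// webpack emits a separate bundle for every import() call site.
async function loadFeature(name) {
  // Hypothetical registry; in a real app each entry would be
  // () => import('./features/payments'), and so on.
  const registry = {
    payments: () => Promise.resolve({ mount: () => 'payments mounted' }),
    billing: () => Promise.resolve({ mount: () => 'billing mounted' }),
  };
  const loader = registry[name];
  if (!loader) throw new Error(`Unknown feature: ${name}`);
  return loader();
}
```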

&lt;h3&gt;
  
  
  The Start
&lt;/h3&gt;

&lt;p&gt;To garner support from leadership, I crafted a persuasive technical vision document that encompassed a comprehensive performance analysis, including web vital metrics, and outlined the various phases of the migration towards distributed frontends. One of the intermediate phases of this migration was to establish a distributed monolith architecture, where multiple modules/widgets could be delivered asynchronously via lazy-loading techniques while leveraging shared infrastructure, such as an S3 bucket and CDN, between the core service and the widgets. As I outlined in my previous &lt;a href="https://thesametech.com/writing-efficient-frontend-design-documents/"&gt;article&lt;/a&gt;, the main idea of this type of document is to describe the future as you’d like it to be once the objectives have been achieved and the biggest problems are solved. It’s not about the execution plan!&lt;/p&gt;

&lt;p&gt;Almost 1 year later, the time had finally come to put my micro-frontend migration plan into action. With the impending expansion into a new domain and a larger team at our disposal, we were well-equipped to execute the migration. It felt like a golden opportunity that we couldn’t afford to miss. After all, remaining confined to the monolithic architecture would mean perpetually grappling with its limitations. The limited timeline to expand into a new domain served as a catalyst, propelling us toward building a more scalable and maintainable architecture right away instead of having short and slow iterations!&lt;/p&gt;

&lt;p&gt;To execute the migration and simultaneously handle the work in the new domain, we divided the teams into two dedicated groups. The feature work, which had higher priority, required more resources and needed to iterate at a faster pace. To ensure the integrity and comprehensive understanding of the migration process, it made sense to assign a small dedicated team specifically responsible for handling the migration. However, we couldn’t proceed with the feature work without first ensuring that the micro-frontend concept would prove successful.&lt;/p&gt;

&lt;p&gt;To mitigate risks and provide a clear roadmap, it was crucial to create a low-level design document that included precise estimates and a thorough risk assessment. This document served as a blueprint, outlining the necessary steps and considerations for the migration. The pivotal milestone in this process was the development of a proof-of-concept that would demonstrate the successful integration of all components according to the design. This milestone, aptly named the “Point of no return,” aimed to validate the feasibility and effectiveness of the micro-frontend architecture. While I was optimistic about the success of the migration, it was essential to prepare for contingencies. Consequently, I devised a Plan B, which acted as a backup strategy in case the initial concept didn’t yield the desired results. This included allocating an additional seven days in the estimates specifically to have me crying into the pillow plus a few days to have a new feature module entry connected to the core via lazy-loading (remember distributed monolith?).&lt;/p&gt;

&lt;h3&gt;
  
  
  The Design
&lt;/h3&gt;

&lt;p&gt;When designing micro-frontends, there are generally 3 approaches for composition, each focusing on where the runtime app resolution takes place. The beauty of these approaches is that they are not mutually exclusive and can be combined as needed.&lt;/p&gt;

&lt;h4&gt;
  
  
  Server-side composition
&lt;/h4&gt;

&lt;p&gt;The basic idea is to leverage a reverse proxy server to split micro-frontend bundles per page and do a hard page reload based on the route URL.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HNuohFjG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AHRpPeIo4prEkPjL0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HNuohFjG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AHRpPeIo4prEkPjL0.png" alt="" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple to implement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Global state won’t be synced between the micro-frontend apps. This was a clear no-go for us because we had long-running background operations performed on the client side. You might argue that we could persist a snapshot of this operations “queue” to local storage and read it back after the hard reload, but for security reasons we were not able to implement this. That is just one example of global state; others include the state of side-nav panels (expanded/collapsed), toast messages, and so on.&lt;/li&gt;
&lt;li&gt;The hard refresh when navigating across micro-apps is not very customer friendly. There is a way to cache shared HTML using service workers, but that is additional complexity to maintain.&lt;/li&gt;
&lt;li&gt;Additional operational and maintenance costs for the infrastructure: a proxy server for each micro-frontend app (avoidable if you read from the CDN directly), plus separate infrastructure to deploy common (vendor) dependencies so they can be re-used by multiple pages and properly cached by browsers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Edge-side composition
&lt;/h4&gt;

&lt;p&gt;Another approach to micro-frontend composition is edge-side composition, which involves combining micro-frontends at the edge layer, such as a CDN. For instance, Amazon CloudFront supports &lt;a href="https://aws.amazon.com/lambda/edge/"&gt;Lambda@Edge&lt;/a&gt; integration, enabling the use of a shared CDN to read and serve the micro-frontend content.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l1hDA0e7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AM57v238KkZZ_D386.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l1hDA0e7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AM57v238KkZZ_D386.png" alt="" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fewer infrastructure pieces to maintain: no need for proxy servers or separate CDNs for each micro-app&lt;/li&gt;
&lt;li&gt;Virtually infinite scaling using serverless technology&lt;/li&gt;
&lt;li&gt;Better latency compared to standalone proxy servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold start time might become an issue&lt;/li&gt;
&lt;li&gt;Lambda@Edge is not supported in all AWS regions if you need to have multi-region (isolated) infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Client-side composition
&lt;/h4&gt;

&lt;p&gt;Client-side composition is another approach to micro-frontend architecture that utilizes client-side micro-frontend orchestration techniques, decoupled from the server implementation.&lt;/p&gt;

&lt;p&gt;The key player in this architecture is a container (shell) application that has the following responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Addressing cross-cutting concerns: The container application handles centralized app layout, site navigation, footer, and help panel. Integration with micro-frontends that have cross-cutting concerns occurs through an Event Bus, where synthetic events are sent and handled within the global window scope.&lt;/li&gt;
&lt;li&gt;Orchestration of micro-frontends: The container app determines which micro-frontend bundle to load and when, based on the application’s requirements and user interactions.&lt;/li&gt;
&lt;li&gt;Composing global dependencies: The container app composes all global dependencies, such as React, SDKs, and UI libraries, and exposes them as a separate bundle (vendor.js) that can be shared among the micro-frontends.&lt;/li&gt;
&lt;/ul&gt;
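&lt;p&gt;The Event Bus integration mentioned above can be as small as a publish/subscribe helper on the global scope; a minimal sketch (event names are illustrative):&lt;/p&gt;

```javascript
// Minimal pub/sub bus mirroring the synthetic-events-on-window pattern:
// the container and micro-frontends communicate only via event types.
function createEventBus() {
  const handlers = {};
  return {
    on(type, fn) { (handlers[type] = handlers[type] || []).push(fn); },
    emit(type, detail) { (handlers[type] || []).forEach((fn) => fn(detail)); },
  };
}

const bus = createEventBus();
bus.on('toast:show', (d) => console.log(`toast: ${d.message}`));
bus.emit('toast:show', { message: 'Saved!' });
```

&lt;p&gt;In the browser this maps naturally onto &lt;code&gt;CustomEvent&lt;/code&gt; objects dispatched on &lt;code&gt;window&lt;/code&gt;.&lt;/p&gt;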

&lt;p&gt;The general idea is that each micro-frontend build produces two types of asset files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;{hash}/index.js: This serves as the entry point for the micro-frontend application, with the hash representing a unique identifier for the entire build. The hash acts as a prefix key for each bundle in the S3 bucket. It’s important to note that multiple entry points might exist, but the hash remains the same for all of them.&lt;/li&gt;
&lt;li&gt;manifest.json: This is a manifest file that contains paths to all entry points for the micro-frontend application. This file always lives in the root of the S3 bucket, so the container can discover it easily. I recommend turning on versioning for this file in the S3 bucket to get better observability of changes. If you are using webpack to build your project, I highly recommend the &lt;a href="https://www.npmjs.com/package/webpack-manifest-plugin"&gt;WebpackManifestPlugin&lt;/a&gt;, which does all the heavy lifting for you.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The container is only aware of the micro-frontend asset source domain URL (CDN origin) based on the stage and region. During the initial page load, the container downloads the manifest file for each micro-frontend application. The manifest file is tiny (~100 bytes), so it does not impact page load time and scales well even when aggregating multiple micro-frontends within one container. It’s crucial not to treat the manifest file as immutable in the browser’s cache storage: aggressive caching would stop the container from discovering new deployments.&lt;/p&gt;
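&lt;p&gt;Putting the two asset types together, the container's lookup might be sketched like this (the hash and CDN domain are illustrative):&lt;/p&gt;

```javascript
// Illustrative manifest.json content fetched from the bucket root;
// the {hash}/ prefix changes on every build, the manifest path does not.
const manifest = {
  'index.js': 'https://cdn.example.com/3f9a1c/index.js',
};

// The container resolves an entry point URL from the manifest.
function resolveEntryUrl(manifest, entry = 'index.js') {
  const url = manifest[entry];
  if (!url) throw new Error(`Entry ${entry} missing from manifest`);
  return url;
}
```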

&lt;p&gt;Choosing the right orchestration library is the biggest challenge in this composition and will be discussed in the following chapter.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ojK_1ZCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A5IIN7QLAx0lda21j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ojK_1ZCJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A5IIN7QLAx0lda21j.png" alt="" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agnostic to server implementation: This approach can be implemented without any specific server requirements, offering flexibility in the backend technology used. As shown in the picture above, you can even go without a server entirely&lt;/li&gt;
&lt;li&gt;Preserving global state: By using a container (shell) app, global state can be maintained when switching between micro-frontends. This ensures a seamless user experience and avoids losing context during transitions.&lt;/li&gt;
&lt;li&gt;Decentralized approach: Each micro-frontend can independently decide what data to ship to the browser to bootstrap itself. The container app simply follows a well-defined contract, allowing for greater autonomy and modularity.&lt;/li&gt;
&lt;li&gt;Simple local setup: Assets sources can be easily adjusted between production and local URLs based on development needs. The manifest file helps the container app discover and load the required micro-frontends. Developers can focus on running only the container and the specific micro-frontends they are working on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More network hops to fetch the manifest file: As the container needs to retrieve the manifest file for each micro-frontend, there may be additional network requests and potential latency compared to other composition approaches. This can be mitigated by loading all manifests upfront on the initial page load or by introducing preloading techniques.&lt;/li&gt;
&lt;li&gt;Compliance with common contract: Every micro-frontend needs to adhere to a common contract for producing builds. This can be facilitated through shared configurations and standardized development practices to ensure consistency across the micro-frontends (more about this in the following parts).&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Hybrid composition
&lt;/h4&gt;

&lt;p&gt;As I mentioned earlier in this chapter, all of these composition patterns can be mixed and matched within the same shell application. Here is an example of what it can look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bi6am7zj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A1Jccy-zbQOZIGtMB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bi6am7zj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A1Jccy-zbQOZIGtMB.png" alt="" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Recommendation
&lt;/h4&gt;

&lt;p&gt;I recommend starting with a homogeneous approach: select the composition pattern that suits you best and start building the infrastructure around it. For us, client-side composition was the best option, but for the future we considered switching some regions to edge-side orchestration (based on the availability of Lambda@Edge).&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing orchestration library
&lt;/h3&gt;

&lt;p&gt;When it comes to implementing client-side composition in a micro-frontend architecture, selecting the right orchestration library is a critical decision. The chosen library will play a crucial role in managing the dynamic loading and coordination of micro-frontends within the container application. Several popular orchestration libraries exist, each with its own strengths and considerations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Single-spa
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://single-spa.js.org/"&gt;Single-spa&lt;/a&gt; is a widely adopted orchestration library that provides a flexible and extensible approach to micro-frontend composition. It allows developers to create a shell application that orchestrates the loading and unloading of multiple micro-frontends. Single-SPA provides fine-grained control over lifecycle events and supports different frameworks and technologies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Framework agnostic: Library works well with various frontend frameworks like React, Angular, Vue.js, and more.&lt;/li&gt;
&lt;li&gt;Flexible configuration: It offers powerful configuration options for routing, lazy-loading, and shared dependencies.&lt;/li&gt;
&lt;li&gt;Robust ecosystem: Single-SPA has an active community and a rich ecosystem of plugins and extensions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Learning curve: Getting started with single-spa may require some initial learning and understanding of its concepts and APIs.&lt;/li&gt;
&lt;li&gt;Customization complexity: As the micro-frontend architecture grows in complexity, configuring and managing the orchestration can become challenging.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Qiankun
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/umijs/qiankun"&gt;Qiankun&lt;/a&gt; is a powerful orchestration library developed by the Ant Financial (Alibaba) team. It uses a partial HTML approach for composition. On the micro-frontend app side, it produces a plain HTML snippet with all entrypoints to be loaded. After consuming this HTML file, the container does all the orchestration and mounts the app. In this configuration, partial HTML plays the role of a manifest file that I talked about in the previous chapter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Framework agnostic: Qiankun supports various frontend frameworks, including React, Vue.js, Angular, and more.&lt;/li&gt;
&lt;li&gt;Simplified integration: Qiankun provides a set of easy-to-use APIs and tools for creating and managing micro-frontends.&lt;/li&gt;
&lt;li&gt;Scalability and performance: Qiankun offers efficient mechanisms for code sandboxing, state isolation, and communication between micro-frontends.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dependency conflicts: Managing shared dependencies and ensuring compatibility across micro-frontends may require careful configuration and consideration.&lt;/li&gt;
&lt;li&gt;Learning curve: While Qiankun provides extensive documentation, adopting a new library may involve a learning curve for your development team.&lt;/li&gt;
&lt;li&gt;Redundant data sent over the wire: The partial HTML snippet includes boilerplate (body, meta, and DOCTYPE tags) that still has to be sent over the network.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Module federation
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://module-federation.github.io/"&gt;Module Federation&lt;/a&gt;, a feature provided by Webpack, has gained significant attention and hype in the web development community. This technology allows developers to share code between multiple applications at runtime, making it an attractive option for building micro-frontends. With its seamless integration with Webpack and runtime flexibility, Module Federation has become a popular choice for managing and orchestrating micro-frontends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pros:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Seamless integration with Webpack: If you are already using Webpack as your build tool, leveraging Module Federation simplifies the setup and integration process.&lt;/li&gt;
&lt;li&gt;Runtime flexibility: Module Federation enables dynamic loading and sharing of dependencies, providing flexibility in managing micro-frontends.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Limited framework support: While Module Federation is compatible with multiple frontend frameworks, it may require additional configuration or workarounds for specific use cases.&lt;/li&gt;
&lt;li&gt;Community support: Module Federation is a relatively new technology, released as a core plugin in Webpack 5 (and later back-ported to &lt;a href="https://github.com/module-federation/webpack-4"&gt;v4&lt;/a&gt;). Its Next.js integration is also newer, having been open-sourced only recently. As with all new tools, there may be a smaller community and less support available. It’s important to consider this factor if you have tight deadlines or anticipate encountering questions without readily available answers.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In this first part of the “Micro-frontend Migration Journey” series, we have discussed the motivation behind migrating from a web monolith to a distributed architecture and the initial steps taken to sell the idea to the leadership. We explored the importance of a technical vision document that showcased detailed performance analysis and outlined the different phases of the migration.&lt;/p&gt;

&lt;p&gt;We then delved into the design considerations for micro-frontends, discussing three approaches: server-side composition, edge-side composition, and client-side composition. Each approach has its pros and cons, and the choice depends on various factors such as synchronization of the global state, customer experience, infrastructure complexity, and caching. Furthermore, we explored popular orchestration libraries, such as single-spa, Qiankun, and Module Federation, highlighting their features, benefits, and potential challenges.&lt;/p&gt;

&lt;p&gt;Join me in the next parts of the series as we continue our micro-frontend migration journey, uncovering more interesting and valuable insights along the way!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/testing-redux-with-rtl/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on April 18, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>microfrontends</category>
      <category>architecture</category>
      <category>frontend</category>
      <category>frontendarchitecture</category>
    </item>
    <item>
      <title>Writing efficient frontend design documents</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Sun, 30 Apr 2023 06:54:12 +0000</pubDate>
      <link>https://dev.to/srshifu/writing-efficient-frontend-design-documents-2ef6</link>
      <guid>https://dev.to/srshifu/writing-efficient-frontend-design-documents-2ef6</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KA_ZEEmz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ARkLnwoEgdwLboSFV.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KA_ZEEmz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ARkLnwoEgdwLboSFV.jpg" alt="" width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Design documents are an essential part of the software development process, but their importance is often overlooked on the front end. While backend teams commonly write design documents during the planning phase, front-end design documents are frequently undervalued or skipped altogether, resulting in poorly designed user interfaces and wasted development time. This article aims to raise awareness of the significance of front-end design documents and offer guidance on how to create effective ones that can enhance the quality of your front-end projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why do we need frontend design documents?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Reason #1: Get stakeholders on the same page
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The tremendous power of the written word makes it much harder to misunderstand one another.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learning.oreilly.com/library/view/the-staff-engineers/9781098118723/"&gt;&lt;em&gt;The Staff Engineer’s Path&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When multiple people are working on the same project or feature, it can be challenging to ensure that everyone has a shared understanding of the goals and requirements. A design document can provide a written plan that helps stakeholders agree on whether the team is solving the right problem, and whether their assumptions about other teams and existing systems are correct. This is especially important when teams are distributed across different locations or time zones.&lt;/p&gt;

&lt;p&gt;Additionally, different teams may have conflicting priorities or requirements, and a design document can help identify these issues early on in the process. By creating a design document, teams can cheaply iterate and ensure that everyone is aligned before investing significant time and resources in development. This can help prevent costly mistakes or rework down the line.&lt;/p&gt;

&lt;p&gt;Imagine discovering one month before the launch date that the new API you need to integrate with behaves differently than you expected or returns a different output. I recall an example from my days at AWS when our front-end team had to integrate with a third-party service’s API. We expected to see some critical information displayed as part of the resource ARN, but the third-party team expected us to call an additional API to describe the resource. However, we couldn’t make the additional call due to performance concerns (we had to display thousands of resources in the list), which caused a chain of escalations involving UX design, our service backend team, the external service team, the manager chain, and the frontend lead (me). This problem could have been avoided if we had a design document reviewed beforehand.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reason #2: Distributing the work
&lt;/h4&gt;

&lt;p&gt;When leading a front-end development project with a team, it’s crucial to share your vision of the high-level design with the team. This includes breaking down the work into manageable chunks of functionality that can be created in parallel by sub-teams, each with its own set of milestones and even its own design document.&lt;/p&gt;

&lt;p&gt;Splitting the work into workstreams can help ensure that progress is made efficiently and that the project stays on track. Workstreams can be thought of as separate phases, where a team completes a large piece of work, reorients, and then kicks off the next stage of the project. These streams may depend on each other at key junctures, and some streams may not start until others are completely finished.&lt;/p&gt;

&lt;p&gt;By creating a design document that outlines the high-level design and the different workstreams involved, you can distribute the work more effectively and ensure that each sub-team has a clear understanding of its responsibilities. This can help prevent confusion and reduce the risk of delays or miscommunications that could lead to wasted time and resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reason #3: Reduce the risk of “bus factor”
&lt;/h4&gt;

&lt;p&gt;By distributing the work through workstreams and documenting it in a design document, you can also reduce the risk of a ‘bus factor’. In other words, if something happens to the project lead or other team members, the next person to step into their role would not have to start from scratch. They can refer to the design document and pick up where the previous team left off, ensuring that progress continues and the project stays on track.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reason #4: Build a history of the project
&lt;/h4&gt;

&lt;p&gt;Another benefit of having design documents is to build a history of the project. Like a file version in Git, a design document serves as a point-in-time snapshot of the project’s architecture, which can be extremely useful for understanding why certain decisions were made in the past. For failed projects, design documents can serve as a basis for new beginnings or a good source of information to learn from mistakes in the future. By keeping a record of the design decisions, it’s easier to understand how the project evolved over time and what factors contributed to its success or failure.&lt;/p&gt;

&lt;p&gt;By having a detailed design document, new team members can quickly get up to speed on the project’s architecture, design decisions, and implementation details. This can reduce the time it takes for new team members to become productive and ensure that the project’s quality and consistency are maintained over time.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reason #5: Personal interests
&lt;/h4&gt;

&lt;p&gt;Another reason why design documents are crucial is for personal interests. If you’re aiming for a promotion or looking to showcase your contributions to the company, having a well-documented design document can be extremely beneficial. While a list of submitted PRs can demonstrate your technical skills, it doesn’t provide evidence of your ability to manage a project, lead a team, shape the work, or approach complex problems. A design document can serve as a record of your contributions, including your thought process and decision-making, which can be valuable evidence of your leadership abilities. Even if you didn’t contribute to the project by coding, being the lead who kick-started a big and complex project is already 50% of its success.&lt;/p&gt;

&lt;h3&gt;
  
  
  Types of design documents
&lt;/h3&gt;

&lt;p&gt;Design documents can be categorized based on their level of detail and purpose. The level of detail can range from high-level design documents, which provide a broad overview of the system architecture, to low-level design documents, which delve into the specifics of how individual components will be built. The purpose of a design document can also vary, with some documents focused on outlining the technical vision of the project, while others may focus on the technical strategy for achieving specific goals. Each type of design document serves a specific purpose and is created at a different stage in the project.&lt;/p&gt;

&lt;h4&gt;
  
  
  High-level design (HLD)
&lt;/h4&gt;

&lt;p&gt;High-level design documents provide a broad overview of the project’s architecture, defining the main components and their interactions. This type of document should answer questions such as what the key features of the application are, how they are connected, and what technologies will be used to develop them. High-level design documents are typically created during the planning phase of a project, and they serve as a roadmap for the development team.&lt;/p&gt;

&lt;p&gt;For example, during a brainstorming session with various stakeholders, a front-end lead might work with the project manager, back-end team lead, and UX designer to create a high-level design document. At this stage you don’t need fancy mockups from the UX team: the entire system UI can literally be drawn on a whiteboard. The high-level design document for this feature would outline the project’s goals, high-level features, business flows, and third-party integrations. While the document wouldn’t necessarily cover all the implementation details, it would provide a bird’s-eye view of the functional units that need to be built, how they are connected, and how they can be integrated into the existing system.&lt;/p&gt;

&lt;p&gt;In this document, you might be interested in highlighting the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;project milestones (e.g., “Build JSON policy editor widget”, “Build cross-region copy flow”, etc)&lt;/li&gt;
&lt;li&gt;for each milestone, define a high-level component tree with state-related concerns (if you are using a state management system like Redux, it might be important to highlight what data goes to the global state vs. what data is isolated in the component’s local state).&lt;/li&gt;
&lt;li&gt;If you are using distributed frontend architecture (micro-frontends), you may apply &lt;a href="https://thesametech.com/domain-driven-design-in-micro-frontend-architecture/"&gt;Domain-Driven Design&lt;/a&gt; methodology to draw new boundaries in the architecture. In other words, whether this new feature fits into existing subdomain/bounded context or requires a brand new micro-frontend app to be developed and deployed. If the latter, then how it will communicate with other apps, and what events will represent these communication concerns?&lt;/li&gt;
&lt;li&gt;API integration and permission model: what permissions are required to call a specific API, and what the UX behaviour should be when permissions are missing (e.g., redirect to page B)&lt;/li&gt;
&lt;li&gt;Known limitations and uncertainties (e.g. “We won’t be able to use the &lt;em&gt;ModalA&lt;/em&gt; component because it requires admin account permissions to build the view. We will have to wait for the Auth team to release the delegated admin type of account” or “If we decide to use the same advanced JSON editor as we do on PageA, we need to investigate how we can manage shared third-party dependencies in the future”). These kinds of considerations can be really useful for the engineer who will be working closely on the low-level design for this milestone.&lt;/li&gt;
&lt;li&gt;Define business/operational metrics to track&lt;/li&gt;
&lt;li&gt;Rough estimates for each milestone (preferably, build a workstream diagram that highlights dependencies between milestones, order of execution, and level of parallelization)&lt;/li&gt;
&lt;/ul&gt;
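&lt;p&gt;The global-vs-local state split mentioned above can be sketched in a few lines. The shapes and action names here are hypothetical, and the Redux-style reducer is written without the library so the example stays self-contained:&lt;/p&gt;

```javascript
// Illustrative split between global (shared) and component-local state.
// Global state holds data that several widgets read; transient UI details
// stay inside the owning component. Names below are made up for the example.
const initialGlobalState = {
  policies: [],        // fetched once, read by several widgets
  selectedRegion: null // drives both the editor and the copy flow
};

function globalReducer(state = initialGlobalState, action) {
  switch (action.type) {
    case 'policies/loaded':
      return { ...state, policies: action.payload };
    case 'region/selected':
      return { ...state, selectedRegion: action.payload };
    default:
      return state;
  }
}

// By contrast, things like `isDropdownOpen` or unsaved editor text would
// live in component-local state, since nothing else needs them.
let state = globalReducer(undefined, { type: '@@init' });
state = globalReducer(state, { type: 'region/selected', payload: 'eu-west-1' });
console.log(state.selectedRegion); // 'eu-west-1'
```

&lt;p&gt;Writing the intended state shape down at this level in the HLD is usually enough; the exact selectors and actions belong in the low-level design.&lt;/p&gt;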

&lt;p&gt;High-level design documents are often reviewed multiple times to ensure that all high-level features are feasible to implement within the given timeline. During the review process, stakeholders may identify which features can be cut and which ones should be prioritized for launch in the project’s next phase. This next phase could be represented by different milestones, such as “GA Preview -&amp;gt; Full GA” or “MVP -&amp;gt; v1 -&amp;gt; v2”. While it may not be necessary to know the exact timelines of these phases, it is important to have a clear understanding of the business value of each phase and how it aligns with the overall project goals.&lt;/p&gt;

&lt;p&gt;High-level design documents are typically more useful for complex features or projects that involve multiple teams or systems. For smaller features or projects with fewer dependencies, it may be more efficient to start with a more detailed design approach. The level of detail required for a design document can vary depending on the project’s size, scope, and complexity. It’s important to strike a balance between providing enough information for the development team to understand the requirements and leaving room for flexibility and iteration as the project progresses.&lt;/p&gt;

&lt;h4&gt;
  
  
  Low-level (detailed) design
&lt;/h4&gt;

&lt;p&gt;Detailed design documents provide more in-depth information about the project’s architecture and implementation. These documents include technical specifications, diagrams, and code snippets to guide the development team. They are usually created after the high-level design documents have been reviewed and agreed upon.&lt;/p&gt;

&lt;p&gt;Usually high-level and low-level designs are maintained by different authors. But if your front-end team is small and you don’t have a lot of opportunities to delegate, it may not be necessary to create a separate low-level design document. In such cases, you may include more detailed information in the high-level design document or create a hybrid document that includes both high-level and low-level details. The level of detail in the design documents should be appropriate to the scope and complexity of the project, and should provide enough guidance to the development team without being overly prescriptive.&lt;/p&gt;

&lt;p&gt;What details you can include in your design document:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enhanced component tree with more fine-grained components and well-defined state shapes and component contracts&lt;/li&gt;
&lt;li&gt;If your app requires complex API orchestration, you may benefit from having a detailed flow diagram to highlight the order of API execution and when to update components with new state&lt;/li&gt;
&lt;li&gt;Error handling for special use cases&lt;/li&gt;
&lt;li&gt;Performance concerns, such as caching and network optimization&lt;/li&gt;
&lt;li&gt;Security concerns, such as permission schemas to be publicly shared, storing sensitive information in local storage/IndexedDB, CSRF protection, CSP headers&lt;/li&gt;
&lt;li&gt;Alternative design considerations&lt;/li&gt;
&lt;li&gt;Technologies and tools to be used&lt;/li&gt;
&lt;li&gt;Test scenarios (usually speaking about E2E tests here)&lt;/li&gt;
&lt;li&gt;Accurate estimates of work and level of parallelization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The low-level design document is primarily intended for the development team and their technical lead, as it includes more technical details and implementation specifics. It may also be shared with other stakeholders such as the product manager or UX designer if they need to review and provide feedback on specific implementation details. However, the main audience for this type of document is the development team itself.&lt;/p&gt;

&lt;h4&gt;
  
  
  Technical vision
&lt;/h4&gt;

&lt;p&gt;Technical vision documents outline the long-term goals for the project’s technology stack and infrastructure. This document describes the future as you’d like it to be once the objectives have been achieved and the biggest problems are solved. Describing how everything will be after the work is done makes it easier for everyone to imagine that world without getting hung up on the details of getting there. You can write a technical vision at any scope, from a grand picture of the whole engineering organization down to a single team’s work. Your vision may inherit from documents at larger scopes, and it may influence smaller ones.&lt;/p&gt;

&lt;p&gt;A technical vision doesn’t set out to make all of the decisions, but it should remove sources of conflict or ambiguity and empower everyone to choose their own path while being confident that they’ll end up at the right place.&lt;/p&gt;

&lt;p&gt;There’s no particular standard for what a vision looks like. It can include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High-level architecture of the project&lt;/li&gt;
&lt;li&gt;Set of guidelines and principles for the team or entire organization (e.g., testing best practices, Redux shape vision, applying feature-sliced design, etc)&lt;/li&gt;
&lt;li&gt;Summary of decisions being made&lt;/li&gt;
&lt;li&gt;Definition of the processes (e.g., on-call rotations, addressing technical debt, etc)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, the main goal of this document is to describe an &lt;em&gt;opinionated&lt;/em&gt; (it is fine to take a firm stance rather than accommodate every viewpoint) and &lt;em&gt;realistic&lt;/em&gt; (no unicorns) optimistic future that meets the needs of your team/organization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Technical strategy
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;Technical strategy&lt;/em&gt; documents provide guidance on how to achieve the goals outlined in the &lt;em&gt;technical vision&lt;/em&gt; documents. It should include a plan of action describing the step-by-step process of achieving your vision. While a &lt;em&gt;technical vision&lt;/em&gt; is more focused on the end goal and ideal state, a &lt;em&gt;technical strategy&lt;/em&gt; details the concrete steps and challenges required to get there.&lt;/p&gt;

&lt;p&gt;For instance, if you’re planning to migrate your project to a new architecture and have a vision for how it should be structured, your technical strategy document might include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A migration approach, such as whether to migrate all at once or use a strangler pattern&lt;/li&gt;
&lt;li&gt;A phased plan for the migration, including guiding policies for each phase&lt;/li&gt;
&lt;li&gt;Identification of the challenges that will arise in each phase and proposals for overcoming them, such as success criteria or a definition of done&lt;/li&gt;
&lt;li&gt;A rollout plan for releasing the migrated project to users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Frontend design document template
&lt;/h3&gt;

&lt;p&gt;Design documents often vary depending on the specific project and the team creating them. As such, there is no universal template or one-size-fits-all approach to structuring these documents. However, I can share some design document templates that I personally use when creating features.&lt;/p&gt;

&lt;h4&gt;
  
  
  Project metadata
&lt;/h4&gt;

&lt;p&gt;Project metadata is important information about a document that provides context and helps with version control. This includes the name of the author, the date the document was created, the date of the last modification, and the current status of the document (such as Draft, Reviewed, or Discarded). If the document is a low-level design, it can also include a link to the high-level design document for reference.&lt;/p&gt;

&lt;h4&gt;
  
  
  Motivation
&lt;/h4&gt;

&lt;p&gt;The motivation section of the design document should provide a brief overview of the problem that the feature or project is aiming to solve. It should focus on the “what” rather than the “how”, and can be just a few sentences in length. The purpose of this section is to provide context and set the stage for the rest of the document. Do not include implementation details or design/technical decisions here; they will be addressed later in the document.&lt;/p&gt;

&lt;h4&gt;
  
  
  Terminology
&lt;/h4&gt;

&lt;p&gt;The terminology section of the design document should include any special terms or concepts that are specific to the project and may not be familiar to everyone on the team. This can include industry-specific jargon or acronyms, as well as any schema definitions or technical terms used in architecture diagrams. The purpose of this section is to ensure that everyone working on the project is using the same language and understanding the same concepts.&lt;/p&gt;

&lt;h4&gt;
  
  
  Design proposal
&lt;/h4&gt;

&lt;p&gt;The design section of the document outlines the approach and methodology for achieving the goals mentioned in the motivation section. It is important to include enough information in this section for the readers to evaluate the feasibility of the proposed solution. Visual diagrams are helpful in conveying complex ideas and should be included whenever possible. The text should highlight the key points and emphasize what is important. In some cases, it may be necessary to describe the order of execution in the diagram by assigning numbers to connecting arrows and providing additional information below the diagram. Avoid repeating the same information in both the text and diagrams.&lt;/p&gt;

&lt;p&gt;As mentioned earlier, visual diagrams are an effective way to convey complex ideas. For instance, in my recent &lt;a href="https://thesametech.com/infrastructure-from-a-diagram/"&gt;article&lt;/a&gt;, I demonstrate how you can create an infrastructure diagram using AWS Application Composer in just a few clicks. This tool can be extremely useful in creating clear and concise architecture diagrams that effectively communicate your design. Additionally, it can help you create a Proof of Concept for your architecture that your colleagues can interact with.&lt;/p&gt;

&lt;h4&gt;
  
  
  Alternative designs
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;The “alternatives considered” section is where you demonstrate (to yourself and others!) that you’re here to solve the problem and you aren’t just excited about the solution. If you find yourself omitting this section because you&lt;/em&gt; didn’t &lt;em&gt;consider any alternatives, that’s a signal that you may not have thought the problem through.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learning.oreilly.com/library/view/the-staff-engineers/9781098118723/"&gt;&lt;em&gt;The Staff Engineer’s Path&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When presenting alternative designs, it is important to highlight the tradeoffs of each approach and explain why you ultimately did not recommend it as your proposed design. This will help your readers understand your thought process and reasoning for choosing a particular solution.&lt;/p&gt;

&lt;h4&gt;
  
  
  Risk considerations
&lt;/h4&gt;

&lt;p&gt;The risk considerations section should address any potential risks associated with the proposed design. This includes having a contingency plan in case the original design does not work, as well as identifying any potential side effects that could impact the existing architecture (e.g. performance, usability concerns). It is important to be transparent about these concerns and worst-case scenarios, as it can help to identify and address them before they become actual problems. Providing this information can also help to build trust with your readers and demonstrate that you have thought through the potential risks and have a plan in place to mitigate them. Sometimes, your readers might even provide you with useful ideas on how to address them!&lt;/p&gt;

&lt;h4&gt;
  
  
  Security considerations
&lt;/h4&gt;

&lt;p&gt;The security considerations section of the design document should address how the proposed solution will handle sensitive data, permissions policies, and protection against both insider and external threats. It is important to be transparent about any potential security risks and provide a plan for mitigating those risks. If your organization has a well-defined process for addressing security concerns, you should include a security threat model and review it with your security engineers.&lt;/p&gt;

&lt;h4&gt;
  
  
  Operations
&lt;/h4&gt;

&lt;p&gt;The operations section of the design document should address how the system will be operated once it is deployed. This includes business metrics, such as who will use the feature and how it will be used, as well as system health monitoring, such as ensuring that the page loads successfully, tracking the latency of user actions, and monitoring web vital metrics. It is important to include details on how the system will be monitored and maintained, and any tools that will be used to ensure that the system is operating correctly.&lt;/p&gt;

&lt;h4&gt;
  
  
  Testing scenario
&lt;/h4&gt;

&lt;p&gt;In the testing scenario section of the design document, it is important to define the happy and sad paths for each business flow in the new project. This information can be used as a source of truth for engineers writing end-to-end tests and for QA engineers conducting manual tests. The testing scenarios should cover all aspects of the business flow, including input validation, error handling, and edge cases. It is also important to document any known issues or limitations of the system and to include steps to reproduce them. By defining these testing scenarios, you can ensure that the project meets the requirements and functions as expected.&lt;/p&gt;

&lt;h4&gt;
  
  
  Appendices
&lt;/h4&gt;

&lt;p&gt;Appendices are optional sections in the design document and are typically used to provide additional information that may not be directly relevant to the core design. Examples of information that can be included in the appendix section are non-essential architecture diagrams, UX mockups, code snippets (can be even pseudo-code), links to tools documentation, and other related design documents. This information can be useful for readers who want to explore the design further or who want more detailed information about specific aspects of the design. However, it is important to ensure that the core design is fully explained in the main body of the document and that the appendices are not relied upon to convey critical information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, a well-written design document is a crucial component of any software development project. It provides a roadmap for how the project will be executed and helps to align all stakeholders on the goals and objectives. A good design document should include clear and concise information on the problem being addressed, the proposed solution, and any tradeoffs that were considered. Visual diagrams can be especially helpful in communicating complex ideas.&lt;/p&gt;

&lt;p&gt;It’s also important to consider risk and security considerations, as well as operational requirements and testing scenarios. By including all of these components in the design document, you can ensure that everyone involved in the project is on the same page and has a clear understanding of what needs to be done.&lt;/p&gt;

&lt;p&gt;Overall, taking the time to create a thoughtful and comprehensive design document can save a lot of time and effort in the long run, and can help to ensure the success of the project.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/writing-efficient-frontend-design-documents/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on April 30, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>design</category>
      <category>frontend</category>
      <category>designdocumentation</category>
    </item>
    <item>
      <title>Testing Redux with RTL</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Wed, 19 Apr 2023 00:00:06 +0000</pubDate>
      <link>https://dev.to/srshifu/testing-redux-with-rtl-2no3</link>
      <guid>https://dev.to/srshifu/testing-redux-with-rtl-2no3</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FV08JzCv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AnpboWY2x4Bh6BrPc.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FV08JzCv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AnpboWY2x4Bh6BrPc.jpeg" alt="" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;React Testing Library (RTL) is a testing tool that’s designed to test your application’s UI from the user’s perspective (you can find more about why you should use it in my previous &lt;a href="https://thesametech.com/migrate-away-from-enzyme/"&gt;blog post&lt;/a&gt;). Since Redux is tightly integrated with React components, it’s typically tested as part of the components’ tests. Therefore, it’s important to have a solid understanding of how to write effective component tests with RTL before diving into testing Redux with RTL.&lt;/p&gt;

&lt;p&gt;Once you have a good grasp of testing React components with RTL, you can start to think about how to test your Redux store and the interactions between your components and the store. This is where testing with RTL can become particularly useful.&lt;/p&gt;

&lt;p&gt;In this blog post, we’ll explore whether or not it’s necessary to test Redux with RTL and provide guidance on when and how to do so. We’ll also cover best practices for testing Redux with RTL, including how to mock your Redux store and write effective integration tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit tests versus integration tests
&lt;/h3&gt;

&lt;p&gt;There has been a longstanding debate in the testing community about the boundaries between unit tests and integration tests and what constitutes a true “unit” of functionality.&lt;/p&gt;

&lt;p&gt;Traditionally, unit tests were defined as tests that isolated a single unit of code (e.g., a function or a class) and mocked all of its dependencies. The goal was to test the unit in isolation to ensure that it behaved as expected under various inputs and conditions.&lt;/p&gt;

&lt;p&gt;However, in recent years, a new philosophy has emerged that challenges this strict definition of unit tests. This philosophy, often referred to as “sociable unit testing,” encourages developers to write tests that exercise multiple units of code together in order to test their interactions and integration points.&lt;/p&gt;

&lt;p&gt;When it comes to testing Redux with RTL, it’s important to consider where the boundaries between units of functionality lie. On one hand, you could argue that a single Redux reducer or action creator is a “unit” that should be tested in isolation. On the other hand, you could argue that the Redux store itself is a unit that should be tested as a whole, along with its interactions with React components.&lt;/p&gt;
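&lt;p&gt;To make these boundaries concrete, here is a minimal sketch (all names are hypothetical) of the two smallest candidate “units”, an action creator and a reducer:&lt;/p&gt;

```javascript
// Hypothetical names, for illustration only.
// Candidate "unit" #1: an action creator.
const addTodo = (text) => ({ type: "todos/add", payload: { text } });

// Candidate "unit" #2: a reducer.
const todosReducer = (state = [], action) => {
  switch (action.type) {
    case "todos/add":
      return [...state, { text: action.payload.text, done: false }];
    default:
      return state;
  }
};

// An isolated unit test calls the reducer directly, like this:
const next = todosReducer([], addTodo("write tests"));
```

&lt;p&gt;An isolated test exercises &lt;code&gt;todosReducer&lt;/code&gt; directly, as above; a sociable test would only ever reach it through a rendered component dispatching to a real store.&lt;/p&gt;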

&lt;p&gt;Ultimately, the choice of whether to write “true” unit tests or “sociable” unit tests with RTL depends on the goals of your testing strategy and the specific requirements of your application. Let’s dive deeper.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to write sociable tests?
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;By far the most important way to ensure this is to write tests that invoke the system being tested in the same way its users would; that is, make calls against its public API&lt;/em&gt; &lt;a href="https://oreil.ly/ijat0"&gt;&lt;em&gt;rather than its implementation details&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. If tests work the same way as the system’s users, by definition, change that breaks a test might also break a user. As an additional bonus, such tests can serve as useful examples and documentation for users.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://learning.oreilly.com/library/view/software-engineering-at/9781492082781/"&gt;&lt;em&gt;Software Engineering at Google&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;blockquote&gt;
&lt;p&gt;The more your tests resemble the way your software is used, the more confidence they can give you.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Kent C. Dodds 🌌 (&lt;a class="mentioned-user" href="https://dev.to/kentcdodds"&gt;@kentcdodds&lt;/a&gt;) &lt;a href="https://twitter.com/kentcdodds/status/977018512689455106?ref_src=twsrc%5Etfw"&gt;March 23, 2018&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;




&lt;p&gt;Firstly, sociable tests can help to identify integration issues between your Redux store and your React components. Since Redux is tightly integrated with React, it’s important to test how they work together in order to ensure that your application is functioning correctly. For example, imagine a situation where the contract of a Redux action creator changes (one attribute is renamed); you dutifully update your unit tests to cover the change, but a React component is still using the old contract. Boom, you’ve introduced a bug to production (assuming you don’t have end-to-end canaries covering this use case).&lt;/p&gt;

&lt;p&gt;Secondly, sociable tests with RTL can help to reduce the amount of duplicated testing and provide more comprehensive coverage of your codebase by testing the interactions between different units of code. If you have already written component tests with RTL, you can leverage those tests to also test the interactions between the components and the Redux store. This can save time and effort in writing additional tests from scratch. Forget about writing separate tests for reducers, actions, thunks, middlewares, and so on — creating unit tests for the component alone will cover all these Redux actors.&lt;/p&gt;

&lt;p&gt;Thirdly, sociable tests can help to improve the maintainability of your test suite by reducing the number of mock objects that you need to maintain. If you’re writing isolated unit tests, you may need to create many mock objects for each unit, which can be time-consuming and error-prone. By contrast, sociable tests with RTL can reduce the need for mock objects and simplify your test code.&lt;/p&gt;

&lt;p&gt;And last but not least: by writing sociable tests, you can future-proof your codebase and ensure that your tests will continue to work even if you make changes to your architecture — generally speaking, you make your tests framework-agnostic. One day you may decide to replace action thunks with stream-oriented rxjs middleware, upgrade your Redux architecture to use Redux Toolkit, replace Redux with a different state management solution, or get rid of the Redux layer entirely (welcome to heaven!) — your unit tests should still pass!&lt;/p&gt;
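&lt;p&gt;The idea behind the sociable style can be sketched without any framework at all. In the illustration below (all names invented), a hand-rolled store stands in for Redux and a plain &lt;code&gt;render&lt;/code&gt; function stands in for a React component; the test drives the feature only through its public surface:&lt;/p&gt;

```javascript
// Dependency-free sketch: a hand-rolled store stands in for Redux,
// and render() stands in for a React component reading the store.
const createStore = (reducer) => {
  let state = reducer(undefined, { type: "@@init" });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); },
  };
};

const counter = (state = 0, action) =>
  action.type === "increment" ? state + 1 : state;

const render = (store) => `Count: ${store.getState()}`;

// Sociable test: exercise the feature through dispatch + rendered output,
// never asserting on the reducer or the store internals directly.
const store = createStore(counter);
store.dispatch({ type: "increment" });
store.dispatch({ type: "increment" });
const output = render(store); // "Count: 2"
```

&lt;p&gt;Note that swapping the reducer’s internals, or even the store implementation itself, would leave this test untouched as long as the rendered output stays the same.&lt;/p&gt;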

&lt;p&gt;Of course, there are also some potential drawbacks to sociable testing, such as increased complexity, a higher chance of introducing brittle tests (due to the async nature of UI interactions), and longer test runtimes. However, the benefits can outweigh the drawbacks in many cases, particularly for applications with complex interactions between Redux and React.&lt;/p&gt;

&lt;h3&gt;
  
  
  When to write isolated unit tests for Redux?
&lt;/h3&gt;

&lt;p&gt;Writing unit tests for Redux can still be important, even if you’re also writing sociable tests with RTL. Here are a few situations where you might want to write unit tests specifically for Redux:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When testing complex logic in reducers: While it’s possible to test Redux reducers through component tests with RTL, it can sometimes be more convenient to write isolated unit tests for complex reducer logic. This can help to ensure that the reducer behaviour is correct, independent of any particular component interactions.&lt;/li&gt;
&lt;li&gt;When testing side effects in Redux middleware: If you have custom middleware in your Redux store, you may want to write isolated unit tests to ensure that the middleware is working correctly. Let’s say you have some action triggered when a user navigates to specific routes. This logic usually exists outside of the component and can’t be triggered via UI interactions.&lt;/li&gt;
&lt;li&gt;Differences in the number or order of calls to a function would cause undesired behaviour. Let’s say you have a massive thunk action that implements heterogeneous transactional save of multiple related items — it might be important to have DELETE requests going before POST requests. You can argue that the same thing can be asserted in the component tests but it will make them less state-focused and add redundant complexity (we will have to parse the entire API payload as we don’t have access to Redux actions). But this is more of a tradeoff than a hard limitation. If you feel that the same thing can be done in component tests with minimal effort and with the same level of confidence, then why not?&lt;/li&gt;
&lt;/ul&gt;
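&lt;p&gt;The ordering concern from the last bullet point can be expressed as an interaction test. The sketch below is deliberately simplified (synchronous calls, invented names): a hand-rolled mock records the order of API calls so the test can assert that deletes happen before creates:&lt;/p&gt;

```javascript
// Hypothetical thunk body, simplified to synchronous calls for the sketch:
// it must issue DELETE requests before POST requests.
const saveItems = (api, { toDelete, toCreate }) => {
  toDelete.forEach((id) => api("DELETE", `/items/${id}`));
  toCreate.forEach((item) => api("POST", "/items", item));
};

// Hand-rolled mock recording the order of calls (interaction testing).
const calls = [];
const fakeApi = (method, url, body) => calls.push(method);

saveItems(fakeApi, { toDelete: [1, 2], toCreate: [{ name: "x" }] });
// calls is now ["DELETE", "DELETE", "POST"]
```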

&lt;p&gt;&lt;em&gt;NOTE:&lt;/em&gt; an approach where unit tests validate how a function is called, without actually calling the function’s implementation, is called &lt;em&gt;interaction testing&lt;/em&gt;. The opposite strategy is state testing. With state testing, you render the component under test and validate that either the correct value (or element) was rendered or that some other state in the component under test changed as expected. Always prefer state testing! But in some cases, it’s important to check that the right API endpoint was invoked with the correct payload after some UI interaction triggered by the RTL test. So it’s OK to mix both approaches in one test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;expect(await screen.findByRole('input')).toHaveValue(expectedValue); 
expect(fetchMock.mock.calls).toMatchSnapshot();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Always make sure you mock network calls. In my &lt;a href="https://thesametech.com/optimizing-unit-testing-javascript-tips-tools-best-practices/"&gt;post&lt;/a&gt; about best practices for unit tests, I already mentioned the jest-offline library, which can help to validate this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, whether or not you should write Redux tests with RTL depends on the specific needs and context of your project. While sociable tests with RTL can be a powerful tool for testing the interactions between Redux and React, there are still situations where you may want to write isolated unit tests for specific aspects of your Redux architecture.&lt;/p&gt;

&lt;p&gt;Ultimately, the key is to strike a balance between the different testing approaches and to choose the most appropriate approach for each particular scenario. By combining sociable tests with RTL and isolated unit tests for Redux, you can create a robust and comprehensive test suite that helps to ensure the quality and reliability of your application.&lt;/p&gt;

&lt;p&gt;Remember, the goal of testing is not to achieve 100% coverage, but to catch as many bugs and issues as possible before they make it to production. By using the right testing approaches and tools, you can improve the quality of your code and reduce the risk of bugs and errors in your application.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/testing-redux-with-rtl/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on April 18, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>testing</category>
      <category>reacttestinglibrary</category>
      <category>unittesting</category>
    </item>
    <item>
      <title>Can you trust AI?</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Sat, 01 Apr 2023 05:47:26 +0000</pubDate>
      <link>https://dev.to/srshifu/can-you-trust-ai-36ae</link>
      <guid>https://dev.to/srshifu/can-you-trust-ai-36ae</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NNNs6sNn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AB1roqmpvTWKSJFb8.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NNNs6sNn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AB1roqmpvTWKSJFb8.jpeg" alt="" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;While most of us are comfortable with using a microwave without understanding how it works, many don’t feel the same way about AI yet, especially if that AI makes important decisions about their lives.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Designing Machine Learning Systems By Chip Huyen&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the most important qualities of Machine Learning systems is their interpretability (or explainability), which is unfortunately ignored by many companies working with AI. Interpretability refers to the ability to understand and explain how an AI system works, how it arrives at its decisions, and why it makes those decisions. Interpretability is essential not only for building trust but also for identifying and addressing any biases or errors in the system. For example, would you trust an AI system’s decision to reject your loan application or to diagnose you with a serious illness?&lt;/p&gt;

&lt;p&gt;There are many different techniques for achieving interpretability in AI systems, ranging from simple methods such as visualizing decision trees or feature importance to more complex methods such as generating explanations using natural language processing (NLP) or neural networks. Each technique has its strengths and weaknesses, and the choice of method will depend on the specific application and the level of interpretability required.&lt;/p&gt;

&lt;p&gt;Ultimately, the goal of interpretability is to create AI systems that people can trust and understand. By building transparency and accountability into AI systems, we can ensure that they are used ethically and responsibly and benefit society as a whole.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Importance of Interpretability Across Industries
&lt;/h3&gt;

&lt;p&gt;Interpretability is crucial in various industries, not just finance or healthcare. In the criminal justice system, AI systems are utilized for risk assessment and predictive policing, making interpretability critical. Without understanding how an AI system arrives at a decision or prediction, it is challenging to ensure that the decision is fair and unbiased. A lack of interpretability could lead to discrimination against certain groups, particularly minorities who have historically faced biases in the justice system.&lt;/p&gt;

&lt;p&gt;Another example is the automotive industry, where AI is used for self-driving cars. If an AI system makes a mistake, it’s essential to understand why it happened to prevent it from happening again in the future. Moreover, interpretability can help engineers enhance the performance and safety of self-driving cars by identifying areas of improvement in the system.&lt;/p&gt;

&lt;p&gt;Furthermore, interpretability is important in the education industry, where AI systems are utilized to predict student performance and recommend personalized learning strategies. If an AI system recommends the wrong learning strategy, students could struggle to achieve their academic goals. Thus, it is crucial to understand how the system makes decisions to ensure that students receive the best possible education.&lt;/p&gt;

&lt;p&gt;For most ML use cases in the industry, interpretability isn’t just optional; it’s a requirement. Obviously, interpretability matters less if you build a system to recognize cats in pictures. The higher the price of a mistake, the more self-explanatory your system needs to be.&lt;/p&gt;

&lt;h3&gt;
  
  
  Snapshot of AI state
&lt;/h3&gt;

&lt;p&gt;According to &lt;a href="https://hai.stanford.edu/ai-index-2019"&gt;AI Index 2019 Report&lt;/a&gt;, interpretability and explainability are identified as the most frequently mentioned ethical challenges across 59 Ethical AI principle documents. However, the report also highlights a concerning trend: only 19% of large companies are actively working to improve the explainability of their algorithms, while just 13% are taking steps to mitigate risks to equity and fairness, such as algorithmic bias and discrimination. This suggests that there is still much work to be done in promoting ethical and responsible AI practices across different industries and sectors.&lt;/p&gt;

&lt;p&gt;Furthermore, the report notes that there are significant disparities in AI adoption and development between countries, with the United States and China leading in terms of research output and investment. This raises important questions about the global implications of AI and the need for international collaboration and cooperation in shaping its development and governance.&lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://hai.stanford.edu/ai-index-2021"&gt;AI Index Report 2021&lt;/a&gt;, interpretability and explainability are still considered major challenges in the development and deployment of AI systems. However, the report notes some progress has been made in the last few years in improving the interpretability and explainability of AI systems.&lt;/p&gt;

&lt;p&gt;For instance, the report notes that the number of papers on explainability and interpretability in AI research has increased significantly in the last few years. Specifically, in 2020 there were 23 accepted papers focused specifically on this topic, while in 2021 that number almost doubled (41 papers).&lt;/p&gt;

&lt;p&gt;Overall, while there has been some progress in improving the interpretability and explainability of AI systems since the 2019 AI Index Report, there is still much work to be done to ensure that AI systems are transparent, accountable, and trustworthy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Advancements in Interpretability research
&lt;/h3&gt;

&lt;p&gt;Recent advancements in interpretability research have made significant strides towards improving the transparency and accountability of AI systems. Here are some notable developments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Counterfactual explanations: Counterfactual explanations involve generating explanations for a model’s decision by generating a “what-if” scenario where the input data is modified. These explanations can help users understand why a particular decision was made and how it could have been different under different circumstances. Here is a practical &lt;a href="https://stocktwits.com/c/crypto/2023/03/17/heres-how-chatgpt-diversified-1000-in-crypto/"&gt;example&lt;/a&gt; of how to use this technique with ChatGPT.&lt;/li&gt;
&lt;li&gt;Attention-based methods: Attention-based methods aim to provide insight into the inner workings of neural networks by identifying which parts of the input data the network is focusing on during decision-making. These methods can be particularly useful for natural language processing tasks, as they can identify which words or phrases were most important in influencing the model’s output. One example of a system that uses attention-based methods is Google Translate. One of the key features of the Google Translate system is its ability to generate attention maps. These attention maps provide a visual representation of the parts of the input that the neural network is paying the most attention to when making its translations. This allows users to better understand how the system is arriving at its translations and can help to build trust in the system.&lt;/li&gt;
&lt;li&gt;Model distillation: Model distillation involves training a simpler, more interpretable model to mimic the behaviour of a more complex, less interpretable model. By doing so, the simpler model can provide more transparent explanations for the decisions made by the original model. Example: DistilBERT, a simplified version of Google’s BERT deep learning model.&lt;/li&gt;
&lt;li&gt;Human-in-the-loop methods: Human-in-the-loop methods involve incorporating human feedback into the interpretability process. For example, a user may be asked to provide feedback on a system’s output, which can then be used to refine and improve the system’s interpretability. The most popular example you use every day is an email spam filter. Many email services use machine learning algorithms to filter out spam emails, but they also rely on user feedback to improve the accuracy of the filters. Users can mark emails as spam or not spam, which the algorithm uses to learn and improve. Recently, OpenAI also added a feedback loop to their ChatGPT language model: whenever you re-generate the answer, the model prompts you for feedback.&lt;/li&gt;
&lt;/ol&gt;
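&lt;p&gt;To make the first technique concrete, here is a toy sketch (the model, weights, and numbers are invented for illustration): given a simple linear credit-scoring model, a counterfactual search looks for the smallest change to one input that flips the decision:&lt;/p&gt;

```javascript
// Toy linear scoring model (weights invented for illustration).
const approve = ({ income, debt }) => 0.5 * income - 0.8 * debt >= 40;

// Counterfactual search: the smallest income increase that flips a rejection.
const counterfactualIncome = (applicant, step = 1, maxSteps = 1000) => {
  let income = applicant.income;
  let budget = maxSteps;
  while (budget >= 0) {
    if (approve({ ...applicant, income })) return income;
    income += step;
    budget -= 1;
  }
  return null; // no flip found within the search budget
};

const applicant = { income: 70, debt: 10 }; // score 27, below the 40 threshold
const needed = counterfactualIncome(applicant); // 96
```

&lt;p&gt;The resulting explanation reads naturally: “your application would have been approved had your income been 96 instead of 70”.&lt;/p&gt;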

&lt;p&gt;These advancements represent exciting progress towards making AI systems more transparent and accountable. As the field continues to evolve, it’s likely that new techniques and methods will emerge to further improve interpretability and enable users to make more informed decisions based on the output of AI systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Interpretability can be harmful
&lt;/h3&gt;

&lt;p&gt;While interpretability can be a valuable tool for understanding AI models and improving trust in their decisions, there are also cases where interpretability can be harmful. In some scenarios, an overly simplified or transparent model can actually compromise privacy and security. For example, in a healthcare context, it may not be desirable for all medical staff to have access to the full details of a patient’s diagnosis and treatment recommendations. In these cases, more opaque models that provide limited access to sensitive information may be preferable.&lt;/p&gt;

&lt;p&gt;Another important point to consider is that interpretability can also have negative consequences. For instance, providing too much information about how a system works can make it easier for attackers to identify weaknesses and find workarounds. This can be particularly problematic in sensitive applications, such as financial fraud detection or cybersecurity, where malicious actors could use this knowledge to their advantage.&lt;/p&gt;

&lt;p&gt;To illustrate this point, consider the case of Google AdSense, where too many clicks from the same user can trigger account suspension. While this rule was put in place to prevent fraudulent activities, it can also have unintended consequences. For example, if attackers know the exact threshold of clicks that will trigger the suspension, they can exploit this vulnerability and use it to generate fraudulent ad revenue. In the end, this could lead to millions of dollars in charges to Google’s customers. Thus, there is a delicate balance between providing enough information to ensure trust in AI systems and not providing so much information that it can be exploited by malicious actors.&lt;/p&gt;

&lt;p&gt;Therefore, while interpretability is a valuable tool in many cases, it’s important to recognize that there are situations where it may not be appropriate or beneficial.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, interpretability remains a critical challenge for AI systems. As AI continues to become more prevalent in our daily lives, it is important that we understand how these systems arrive at their decisions and predictions. The development of interpretable AI will not only enhance transparency and accountability but also enable better decision-making, reduce biases, and improve overall trustworthiness. The recent advancements in interpretability research have shown significant progress towards achieving these goals, but there is still much work to be done.&lt;/p&gt;

&lt;p&gt;As we move forward, it is crucial that researchers, policymakers, and industry professionals work together to ensure that AI systems are transparent and explainable. While there is no one-size-fits-all solution, the continued development and application of human-in-the-loop methods, attention-based models, model distillation, and counterfactual explanations offer promising avenues for future research.&lt;/p&gt;

&lt;p&gt;Ultimately, the success of AI in the future will depend on its ability to be trusted and understood by end-users. By prioritizing interpretability, we can ensure that AI systems are not only effective but also ethical and accountable.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/can-you-trust-ai/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on April 1, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>interpretability</category>
      <category>explainability</category>
      <category>chatgpt</category>
    </item>
    <item>
      <title>Automated Blog Promotion with ChatGPT, Twitter and AWS</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Fri, 03 Mar 2023 08:01:12 +0000</pubDate>
      <link>https://dev.to/srshifu/automated-blog-promotion-with-chatgpt-twitter-and-aws-57oo</link>
      <guid>https://dev.to/srshifu/automated-blog-promotion-with-chatgpt-twitter-and-aws-57oo</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S965H9aZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AP7kuahBFPdZXnJA4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S965H9aZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AP7kuahBFPdZXnJA4.jpg" alt="" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In today’s digital landscape, creating a blog is just the first step in establishing an online presence. To attract readers and build a following, bloggers need to promote their content effectively. In this blog post, we’ll explore the topic of automated blog promotion and how to use ChatGPT, Twitter API, and AWS to create a blog promotion toolkit that can help bloggers promote their content more efficiently. The toolkit is designed to automate the promotion process using cutting-edge technologies, making it easier for bloggers to reach their target audience.&lt;/p&gt;

&lt;p&gt;By following the steps outlined in this post, you’ll be able to create your own automated blog promotion toolkit and take your blog promotion efforts to the next level. Let’s dive in and see how ChatGPT, Twitter API, and AWS can help you promote your blog like a pro!&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Schedule automation&lt;/strong&gt;. The toolkit must run automatically on a schedule, without requiring manual intervention, to save users time and effort in promoting their content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extendibility&lt;/strong&gt;. The toolkit should be easy to extend, allowing users to add new promotion tools as needed. The MVP version of the toolkit will contain only one tool designed to engage new readers by replying to tweets using OpenAI APIs and ChatGPT, with AI text generation to ensure that the Twitter bot responds using relevant context. In the future, we may want to add functionality (publishing to relevant communities) or even expand to new platforms (e.g. LinkedIn).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Politeness&lt;/strong&gt;. The tool must be polite and respectful to Twitter accounts, avoiding multiple replies to the same user within a short period to prevent appearing annoying or spammy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid runtime.&lt;/strong&gt; The toolkit should be able to run both in the AWS cloud and locally, providing users with the flexibility to choose the deployment method that works best for them.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;Creating high-quality content for a blog is a challenging task, but promoting that content to reach a wider audience can be even more difficult. Social media platforms like Twitter offer a powerful way to share blog posts and engage with readers, but manually managing a Twitter account and replying to tweets can be time-consuming and repetitive. Furthermore, engaging with Twitter users in a way that is polite and respectful can be challenging, particularly when trying to reach a large number of users. These challenges can make it difficult for bloggers to effectively promote their content on Twitter and reach new readers. In the next section, we will explore how our blog promotion toolkit can help address these challenges and enable bloggers to reach their promotion goals.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution
&lt;/h3&gt;

&lt;p&gt;Our blog promotion tool aims to simplify the process of promoting blog content on Twitter by automating the process of finding relevant tweets and engaging with Twitter users. The tool takes a list of blog posts, each with its own set of hashtags and metadata. These hashtags and metadata are configured manually by the author of the blog (usually with the help of the same ChatGPT 😀) and extracted directly from the HTML.&lt;/p&gt;

&lt;p&gt;Once the hashtags have been extracted, the toolkit passes them to the Twitter SearchTweets API, which returns a list of relevant tweets from oldest to newest. The toolkit then checks each tweet for various requirements, such as politeness and the author’s audience size. If a tweet qualifies, the toolkit feeds the tweet URL and blog post URL to the OpenAI API, which generates a relevant reply using the ChatGPT language model. Finally, the toolkit sends the generated reply using the Twitter Manage Tweets API.&lt;/p&gt;

&lt;p&gt;We can visualize this flow as a pipeline with the following key components:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--noTl2ycP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ANk2NYNsIRVhSvqqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--noTl2ycP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ANk2NYNsIRVhSvqqu.png" alt="" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By automating the process of finding relevant tweets and generating polite and engaging responses, our blog promotion toolkit can help bloggers save time and effort in promoting their content on Twitter. Additionally, the ability to extend the toolkit with new promotion tools in the future can provide even more ways to engage with readers and promote blog content.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tools and libraries
&lt;/h3&gt;

&lt;p&gt;Our blog promotion toolkit is built using a variety of tools and libraries, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt; : a popular programming language that is widely used in data analysis, machine learning, and web development. This is my first time using this language to build something meaningful so please don’t judge me harshly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tweepy&lt;/strong&gt; : a Python library that provides easy access to the Twitter API, allowing developers to build applications that interact with Twitter. I will be using V2 &lt;a href="https://docs.tweepy.org/en/stable/client.html"&gt;Client&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI library&lt;/strong&gt; : a Python library that provides access to OpenAI’s powerful language models, including GPT-3.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BeautifulSoup&lt;/strong&gt; : a Python library for web scraping and parsing HTML documents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS services&lt;/strong&gt; : &lt;strong&gt;DynamoDB&lt;/strong&gt; (for storage), &lt;strong&gt;AWS Lambda&lt;/strong&gt; (for computing), &lt;strong&gt;AWS CloudWatch&lt;/strong&gt; (for event scheduling to trigger Lambda), and of course my favourite &lt;strong&gt;AWS App Composer&lt;/strong&gt; to integrate all of these services together into a working SAM template and deploy it via &lt;strong&gt;AWS CloudFormation&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;h3&gt;
  
  
  TLDR; Link to the code
&lt;/h3&gt;

&lt;p&gt;GitHub repo: &lt;a href="https://github.com/sr-shifu/blog-promo-toolkit"&gt;https://github.com/sr-shifu/blog-promo-toolkit&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Building infrastructure
&lt;/h3&gt;

&lt;p&gt;As mentioned earlier, I will use AWS App Composer UI to build my SAM template. I will not go deep into the details (you can find more information about how to work with it in one of my previous &lt;a href="https://thesametech.com/infra-as-diagram/"&gt;posts&lt;/a&gt;) — I will just leave my final diagram here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IlMckyS3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A_MHzTEce25WloPxp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IlMckyS3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2A_MHzTEce25WloPxp.png" alt="" width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The infrastructure is pretty simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EngageTweets&lt;/strong&gt; lambda function. I decided to set its timeout to the maximum (15 minutes) to give it as much time as possible: the OpenAI API can be slow, and the Twitter search API might have pretty long cooldown periods after being throttled.&lt;/li&gt;
&lt;li&gt;Event rule (AWS::Events::Rule) that will trigger &lt;strong&gt;EngageTweets&lt;/strong&gt; lambda function using this cron expression (unfortunately UI does not support this configuration, so you need to write it by hand): cron(0 18 * * ? *). This expression can be decoded to human language as &lt;em&gt;"at 18:00 UTC every day"&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IntegrationTokens&lt;/strong&gt; Secrets Manager instance to store tokens to access Twitter and OpenAI APIs. &lt;strong&gt;EngageTweets&lt;/strong&gt; lambda will consume them via environment variables (again, need to configure manually in the template).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RepliedTweets&lt;/strong&gt; table. It will be used to store information about tweets that have already been replied to — it is consumed by both the &lt;em&gt;SearchTweets&lt;/em&gt; and &lt;em&gt;TweetFilter&lt;/em&gt; components. It uses userId as the partition key and tweetId as the range (sort) key. There are also two additional attributes worth mentioning:

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;TTL&lt;/em&gt;: expiration key (UNIX time at which the record should be deleted from the table). The idea is to make storage more efficient: Twitter’s SearchTweets &lt;a href="https://developer.twitter.com/en/docs/twitter-api/tweets/search/api-reference/get-tweets-search-recent"&gt;API&lt;/a&gt; can only search tweets from the last 7 days, so there is no need to store tweet reply data forever. We can reply to a tweet, set TTL to current_time + 7d, and be sure we will neither disturb the author again during the next 7 days nor reply to the same tweet twice.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;searchKey&lt;/em&gt;: the query string that was used to find this tweet (more on this later). The idea here is to build a Global Secondary Index (GSI) with searchKey as the primary key and tweetId as the range key. Using this index, we can find the latest tweet that was replied to in case a previous function execution was interrupted (for instance, by exceeding the Lambda timeout). You may ask why not store the reply time in the table and use it as the sort key instead. The answer is simple: Twitter uses &lt;a href="https://blog.twitter.com/engineering/en_us/a/2010/announcing-snowflake"&gt;Snowflake&lt;/a&gt; IDs, which guarantee that all tweets follow a rough chronological order (&lt;em&gt;roughly sortable&lt;/em&gt;). The Search API also supports a since_id request parameter that returns all tweets sent after the last replied tweet ID stored in our table. If the tool didn't run during the last 7 days and all data in the table was purged, that's not a problem at all: remember, the API only returns the last 7 days anyway.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
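&lt;p&gt;As a sketch, the record written after a reply could be built like this. The attribute names mirror the description above, but the exact item shape is my assumption, not the toolkit's real schema:&lt;/p&gt;

```python
import time

SEVEN_DAYS = 7 * 24 * 60 * 60  # matches Twitter's 7-day search window

def build_replied_tweet_item(user_id, tweet_id, search_key):
    # TTL is a UNIX timestamp: DynamoDB purges the record once the Search API
    # can no longer return the tweet anyway.
    return {
        "userId": user_id,        # partition key
        "tweetId": tweet_id,      # sort key (Snowflake IDs are roughly time-ordered)
        "searchKey": search_key,  # GSI primary key, used to resume via since_id
        "TTL": int(time.time()) + SEVEN_DAYS,
    }
```

&lt;p&gt;In the real toolkit this dictionary would be passed to a DynamoDB put_item call.&lt;/p&gt;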

&lt;h3&gt;
  
  
  Deeper into the code
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Initialize Tweepy client:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;client = tweepy.Client(
    bearer_token=bearer_token,
    consumer_key = consumer_key,
    consumer_secret = consumer_secret,
    access_token = access_token,
    access_token_secret = access_token_secret,
    wait_on_rate_limit = True
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;wait_on_rate_limit = True&lt;/code&gt; the client will swallow all throttling exceptions and wait for the API to cool down.&lt;/p&gt;
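&lt;p&gt;Conceptually, this is equivalent to wrapping each call in a retry loop that sleeps until the rate-limit window resets. This is a simplified sketch of the idea, not Tweepy's actual implementation:&lt;/p&gt;

```python
import time

class RateLimitError(Exception):
    # Stand-in for tweepy's TooManyRequests; carries the cooldown in seconds.
    def __init__(self, reset_in_seconds):
        self.reset_in_seconds = reset_in_seconds

def call_with_rate_limit_wait(api_call, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return api_call()
        except RateLimitError as err:
            # Swallow the throttling error and wait for the API to cool down.
            time.sleep(err.reset_in_seconds)
    raise RuntimeError("still rate-limited after %d attempts" % max_attempts)
```

&lt;p&gt;The upside of letting Tweepy do this for you is that the Lambda simply blocks during cooldowns, which is why the 15-minute timeout matters.&lt;/p&gt;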

&lt;p&gt;&lt;strong&gt;Generate a reply using OpenAI:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def generate_tweet_reply_message(tweet_url, post_url, lang = 'en'):
    prompt = f"Reply to tweet {tweet_url}. Reply must include link to article {post_url} and engage to follow @TheSameTech{' using ' + lang + ' language' if lang != 'en' else ''}. Don't exceed {str(MAX_TWITTER_MESSAGE_LENGTH)} chars"
    completions = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=300,
        n=1,
        stop=None,
        temperature=0.7,
    )
    author_id = re.search(r"twitter\.com/([^/]+)/status", tweet_url).group(1)
    # one of the weird things I noticed: sometimes ChatGPT tags the author using their ID instead of their account name
    return completions.choices[0].text.replace("\n\n", "").replace(f" @{author_id}", "")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;engine&lt;/code&gt;: This parameter specifies the ID of the OpenAI language model to use for generating text. In this example, the text-davinci-003 model (GPT-3) is used, which is one of OpenAI's most advanced models.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prompt&lt;/code&gt;: This parameter specifies the text prompt used as input to the language model. In the blog promotion toolkit, the prompt is the instruction (including the tweet and blog post URLs) from which the model generates its response.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;max_tokens&lt;/code&gt;: This parameter specifies the maximum number of tokens (words or punctuation marks) that the language model should generate in its response.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;n&lt;/code&gt;: This parameter specifies the number of responses to generate. In this case, only one response will be generated.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;stop&lt;/code&gt;: This parameter specifies a sequence of tokens that should be used as a stopping point for the language model's response. In this case, no stopping sequence is specified.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;temperature&lt;/code&gt;: This parameter controls the randomness of the language model's responses. A higher temperature value will produce more creative and varied responses, while a lower temperature value will produce more predictable and conservative responses. In this example, a temperature of 0.7 is used, which should produce responses that are creative but not too unpredictable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;UPDATE&lt;/strong&gt; : OpenAI released the new gpt-3.5-turbo language model a few days after I wrote this post (&lt;em&gt;Mar 7, 2023&lt;/em&gt;). It's priced at $0.002 per 1K tokens, which is 10x cheaper than the existing GPT-3.5 models.&lt;/p&gt;
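&lt;p&gt;Note that gpt-3.5-turbo uses the chat completions endpoint, so the single prompt string becomes a list of role-tagged messages. A rough sketch of what that migration could look like (the system-message wording is just an example, and the actual openai.ChatCompletion.create call is only indicated in a comment):&lt;/p&gt;

```python
MAX_TWITTER_MESSAGE_LENGTH = 280

def build_chat_messages(tweet_url, post_url, lang="en"):
    lang_hint = "" if lang == "en" else f" using {lang} language"
    prompt = (
        f"Reply to tweet {tweet_url}. Reply must include link to article {post_url} "
        f"and engage to follow @TheSameTech{lang_hint}. "
        f"Don't exceed {MAX_TWITTER_MESSAGE_LENGTH} chars"
    )
    # With the chat API the call becomes roughly:
    #   openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages, ...)
    return [
        {"role": "system", "content": "You are a polite assistant promoting a tech blog."},
        {"role": "user", "content": prompt},
    ]
```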

&lt;p&gt;&lt;strong&gt;Search tweets:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;twitter_metadata = extract_twitter_metadata(post_url, tagsSelector='.post-tags')
        keywords, hash_tags, description, *rest = twitter_metadata
        combos = list(combinations(hash_tags, 2))
        for combo in(combos):
            hash_tags_string = " ".join(combo)
            latest_tweet_id = get_latest_activity(hash_tags_string)
            if latest_tweet_id is None and search_days_ago is not None:
                start_time=(datetime.datetime.now() - datetime.timedelta(days=search_days_ago)).strftime("%Y-%m-%dT%H:%M:%SZ")
            tweets = search_recent_tweets_with_pagination(query=hash_tags_string, max_results = 100, start_time=start_time, latest_tweet_id=latest_tweet_id, tweet_fields=['id', 'author_id', 'created_at', 'in_reply_to_user_id', 'lang'])
            # do other stuff (filtering, generating reply, promoting)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the Twitter metadata is extracted from the HTML, I use hashtags as search criteria to find relevant tweets. Usually, every blog post has about 4–5 tags associated with it. If I pass all of them down together, the Twitter API will most likely return nothing — that’s why I combine them into pairs and use each pair as a search key (so every blog post generally generates 6 to 10 search queries).&lt;/p&gt;
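&lt;p&gt;The pair arithmetic is just n-choose-2. For a typical post (the tag values below are made up for illustration):&lt;/p&gt;

```python
from itertools import combinations

hash_tags = ["#aws", "#python", "#chatgpt", "#serverless", "#automation"]
# Each unordered pair of tags becomes one search query string.
pairs = [" ".join(combo) for combo in combinations(hash_tags, 2)]
# 5 tags yield 10 queries; 4 tags would yield 6.
print(len(pairs))
print(pairs[0])
```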

&lt;h3&gt;
  
  
  Running locally
&lt;/h3&gt;

&lt;p&gt;With SAM, you don’t need to do a lot. Only execute 3 commands and you are good to go. If you want to run the local version of DynamoDB, please follow the README instructions in my GitHub repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam build
sam local start-lambda
aws lambda invoke --function-name "EngageTweets" --endpoint-url "http://127.0.0.1:3001" --no-verify-ssl out.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s just as simple using the Python executable directly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd src/engage-tweets-lambda
pip install -r requirements.txt
source env/bin/activate   
TABLE_NAME=PromotedTweets python engage_tweets.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just make sure you have all tokens stored in your local &lt;code&gt;.env&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying to AWS
&lt;/h3&gt;

&lt;p&gt;One command you need to know:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Limitations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Twitter API rate limit: the Twitter API caps the number of API calls that can be made per user per 15-minute window. If a user exceeds the rate limit, further API calls fail until the window resets (but we already have this covered — see the previous section).&lt;/li&gt;
&lt;li&gt;Twitter monthly limit to search tweets: Twitter limits the number of tweets that can be searched in a given month. The current limit is 500,000 tweets per month per developer account, which is also subject to change.&lt;/li&gt;
&lt;li&gt;OpenAI API is not free: The OpenAI API charges based on the number of requests and responses (tokens) sent and received. While we have obtained a free trial package of $18, continued use of the API will require payment based on usage.&lt;/li&gt;
&lt;li&gt;Recent Twitter API changes: Twitter has recently made changes to its API that impact the availability of certain features and data. For example, as of February 2022, Twitter has suspended access to the user profile and follower count endpoints, making it difficult to determine the size of a tweet author’s audience. This may impact the effectiveness of our blog promotion toolkit.&lt;/li&gt;
&lt;li&gt;Twitter’s recent announcement regarding automation: Twitter has announced new rules regarding automation on the platform, aimed at preventing spam and abusive behaviour. While the full extent of these changes is not yet clear, they may impact the functionality of our blog promotion toolkit. We will continue to monitor developments and adjust our approach as needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These limitations highlight the challenges involved in building and maintaining a blog promotion toolkit that relies on external APIs and platforms. Despite these limitations, I believe that our toolkit can still be effective in promoting our content and engaging with our audience. By staying informed and adapting to changes as they arise, we can continue to use these tools to achieve our goals.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final words
&lt;/h3&gt;

&lt;p&gt;In conclusion, our blog promotion toolkit is designed to help content creators reach new audiences by engaging with relevant Twitter users. By leveraging the power of AI language models and Twitter’s API, we are able to generate personalized replies to tweets that mention our blog post’s relevant content in the context of the target tweet message. However, it’s important to note that there are limitations to this approach, including Twitter API rate limits and monthly search limits, as well as the cost of using OpenAI’s API for language processing.&lt;/p&gt;

&lt;p&gt;Overall, I believe that this toolkit can be a valuable asset for bloggers and content creators looking to promote their work on social media. I look forward to continuing to refine and improve this toolkit, and we welcome any feedback or suggestions from the community. Thank you for considering my toolkit, and I hope it can help you achieve your content promotion goals!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/automated-blog-promotion-with-chatgpt-twitter-and-aws/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on March 3, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>openai</category>
      <category>aws</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Snapshot testing in React Testing Library</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Sat, 04 Feb 2023 00:35:06 +0000</pubDate>
      <link>https://dev.to/srshifu/snapshot-testing-in-react-testing-library-4gkc</link>
      <guid>https://dev.to/srshifu/snapshot-testing-in-react-testing-library-4gkc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllnewwxerrn93z0saljm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllnewwxerrn93z0saljm.jpeg" alt="Snapshot testing RTL" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Snapshot testing is a powerful technique for ensuring that your React components behave as expected over time. With React Testing Library, you can easily implement snapshot testing to verify that your components are rendering correctly. In this article, we will explore the process of snapshot testing with React Testing Library, including why it’s a useful tool and how you can use it effectively in your own projects.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why should you migrate to RTL?
&lt;/h3&gt;

&lt;p&gt;In my previous &lt;a href="https://thesametech.com/migrate-away-from-enzyme/" rel="noopener noreferrer"&gt;post&lt;/a&gt;, I explained why you should migrate away from Enzyme and what tools you can build to support this migration. React Testing Library (RTL) is a great choice for migrating from Enzyme because it encourages testing the behaviour of the application from the user’s perspective, rather than testing the internal implementation details. This makes the tests more robust and less prone to breaking when the implementation changes because they are focused on the outcome of an action, rather than testing the details of the implementation. This leads to tests that are more readable, easier to maintain and less brittle. But RTL can become a stumbling block if your project heavily relies on the JSX snapshots, or in other words, you are too lazy to write explicit assertions for your React components and simply do &lt;code&gt;expect(component).toMatchSnapshot()&lt;/code&gt; in all of your tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is RTL not good for snapshots?
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;When your snapshot is more than a few dozen lines it’s going to suffer major maintenance issues and slow you and your team down.&lt;/em&gt; &lt;strong&gt;Remember that tests are all about giving you confidence that you won't ship things that are broken&lt;/strong&gt; &lt;em&gt;and you’re not going to be able to ensure that very well if you have huge snapshots that nobody will review carefully. I’ve personally experienced this with a snapshot that’s over 640 lines long. Nobody reviews it, the only care anyone puts into it is to nuke it and retake it whenever there’s a change…&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Kent C. Dodds (creator of RTL)&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;React Testing Library is not necessarily “bad” for snapshot testing, but it is not designed to be used primarily for that purpose. Snapshot testing is often used to check that a component’s output remains the same, even if its implementation changes. While with Enzyme you could simply use the shallow API to produce a compact render tree to match against the snapshot, with React Testing Library you have to operate on the real DOM. Using Material UI as an example, if you snapshot a component that composes the DataGrid component from this library, you get the following outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// with Enzyme shallow 
&amp;lt;DataGrid
  rows={[/*Rows data goes here*/]}
  columns={[/*Columns goes here*/]}
  pageSize={5}
  rowsPerPageOptions={[5]}
  checkboxSelection
/&amp;gt;

// with RTL
&amp;lt;div
  class="MuiDataGrid-root MuiDataGrid-root--densityStandard MuiDataGrid-withBorderColor css-1wgqnsa"
  role="grid"
  aria-colcount="6"
  aria-rowcount="2"
  aria-multiselectable="false"
&amp;gt;
  &amp;lt;div&amp;gt;&amp;lt;/div&amp;gt;
  &amp;lt;div class="MuiDataGrid-main css-opb0c2"&amp;gt;
    &amp;lt;div
      class="MuiDataGrid-columnHeaders MuiDataGrid-withBorderColor css-1pzw8os"
      style="min-height: 56px; max-height: 56px; line-height: 56px;"
    &amp;gt;
      &amp;lt;div
        class="MuiDataGrid-columnHeadersInner css-vcjdx3"
        role="rowgroup"
        style="transform: translate3d(0px, 0px, 0px);"
      &amp;gt;
        &amp;lt;div role="row" aria-rowindex="1" class="css-k008qs"&amp;gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I intentionally did not put the entire HTML output here in the example, but you can already see the difference. You can also notice that the HTML output contains hashed class names — this means that any time you consume a new version of the DataGrid component, you get a new hash (this methodology is called “scoped class names” and is out of scope for this article).&lt;/p&gt;

&lt;h3&gt;
  
  
  What to do?
&lt;/h3&gt;

&lt;p&gt;A piece of advice from Kent C. Dodds: “ &lt;strong&gt;avoid huge snapshots and take smaller, more focused ones. While you’re at it, see if you can actually change it from a snapshot to a more explicit assertion (because you probably can 😉)&lt;/strong&gt;”. There is also a lint rule in eslint-plugin-jest called &lt;a href="https://github.com/jest-community/eslint-plugin-jest/blob/master/docs/rules/no-large-snapshots.md" rel="noopener noreferrer"&gt;“no-large-snapshots”&lt;/a&gt; that could be pretty helpful in discouraging large snapshots.&lt;/p&gt;

&lt;p&gt;However, I have another solution for you that is future-proof (compatible with the latest versions of React and RTL) and would not require you to re-write all the snapshot tests in your codebase. Moreover, it would be almost identical to the shallow snapshot produced by Enzyme! Word of caution — it works only with Jest, you may need to write another solution if you want to integrate it with something else.&lt;/p&gt;

&lt;p&gt;The solution is based on &lt;a href="https://www.npmjs.com/package/react-shallow-renderer" rel="noopener noreferrer"&gt;react-shallow-renderer&lt;/a&gt; library that is part of React ecosystem (see React &lt;a href="https://reactjs.org/docs/shallow-renderer.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;). You can be sure that it will work with the latest version of React (at the moment of writing it is 18.2.0) — here you can find the &lt;a href="https://github.com/enzymejs/react-shallow-renderer/commit/bb9f556223bc517750114974814025275a0098c4" rel="noopener noreferrer"&gt;commit&lt;/a&gt; from Dan Abramov.&lt;/p&gt;

&lt;p&gt;Without further ado, let me introduce you to a solution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React from 'react';
import ShallowRenderer from 'react-test-renderer/shallow';
import { isFragment, isLazy, isPortal, isMemo, isSuspense, isForwardRef } from 'react-is';

class ReactShallowRenderer {
  shallowRenderer = null;
  shallowWrapper = null;
  constructor(children, { Wrapper = null } = {}) {
    this.shallowRenderer = new ShallowRenderer();
    this.shallowWrapper = Wrapper
      ? this.shallowRenderer.render(&amp;lt;Wrapper&amp;gt;{children}&amp;lt;/Wrapper&amp;gt;)
      : this.shallowRenderer.render(children);
  }

  getRenderOutput() {
    if (!this.shallowWrapper) return undefined;
    const getNodeName = node =&amp;gt; {
      return node.displayName || node.name || '';
    };

    const getWrappedName = (outerNode, innerNode, wrapperName) =&amp;gt; {
      const functionName = getNodeName(innerNode);
      return outerNode.type.displayName || (functionName !== '' ? `${wrapperName}(${functionName})` : wrapperName);
    };

    const extractType = node =&amp;gt; {
      if (typeof node === 'string') return node;
      const name = getNodeName(node.type) || node.type || 'Component';
      if (isLazy(node)) {
        return 'Lazy';
      }

      if (isMemo(node)) {
        return `Memo(${name || extractType(node.type)})`;
      }

      if (isSuspense(node)) {
        return 'Suspense';
      }

      if (isPortal(node)) {
        return 'Portal';
      }

      if (isFragment(node)) {
        return 'Fragment';
      }
      if (isForwardRef(node)) {
        return getWrappedName(node, node.type.render, 'ForwardRef');
      }
      return name;
    };

    const transformNode = node =&amp;gt; {
      const extractProps = ({ children, ...props }, key) =&amp;gt; {
        const childrenArray = Array.isArray(children) ? children : [children];
        return {
          children: childrenArray.filter(Boolean).flatMap(transformNode),
          props: {
            ...props,
            ...(key ? { key } : {}),
          },
        };
      };
      if (Array.isArray(node)) {
        return node.map(transformNode);
      }
      if (typeof node !== 'object') {
        return node;
      }
      return {
        // this symbol is used by Jest to prettify serialized React test objects: https://github.com/facebook/jest/blob/e0b33b74b5afd738edc183858b5c34053cfc26dd/packages/pretty-format/src/plugins/ReactTestComponent.ts
        $$typeof: Symbol.for('react.test.json'),
        type: extractType(node),
        ...extractProps(node.props, node.key),
      };
    };

    return transformNode(this.shallowWrapper);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The critical part here is how you output the tree. Jest has special requirements to print the React Test object in a prettified fashion:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test object needs to have &lt;code&gt;$$typeof: Symbol.for('react.test.json')&lt;/code&gt; in order to get integrated with the Jest serializer&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;type&lt;/code&gt; refers to the name of the node that will be displayed (&lt;code&gt;&amp;lt;ComponentName propA="1234" /&amp;gt;&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;children&lt;/code&gt; is an optional property that takes all children nodes that need to follow the same format, and &lt;code&gt;props&lt;/code&gt; - component properties that will be displayed as node attributes in the output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can wrap it into a utility function and use it in your tests. The simplest example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const shallowRender = Component =&amp;gt; new ReactShallowRenderer(Component).getRenderOutput();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are using Redux, you can enhance the example from Redux &lt;a href="https://redux.js.org/usage/writing-tests" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interface ExtendedRenderOptions extends Omit&amp;lt;RenderOptions, 'queries'&amp;gt; {
  preloadedState?: PreloadedState&amp;lt;RootState&amp;gt;
  store?: AppStore
  mode?: 'deep' | 'shallow'
}

export function renderWithProviders(
  ui: React.ReactElement,
  {
    preloadedState = {},
    // Automatically create a store instance if no store was passed in
    store = configureStore({ reducer: { user: userReducer }, preloadedState }),
    mode = 'deep',
    ...renderOptions
  }: ExtendedRenderOptions = {}
) {
  function Wrapper({ children }: PropsWithChildren&amp;lt;{}&amp;gt;): JSX.Element {
    return &amp;lt;Provider store={store}&amp;gt;{children}&amp;lt;/Provider&amp;gt;
  }

  let renderOutput;
  if (mode === 'shallow') {
    renderOutput = {
      container: new ReactShallowRenderer(ui, { Wrapper })
        .getRenderOutput(),
    };
  } else {
    renderOutput = render(ui, { wrapper: Wrapper, ...renderOptions });
  }
  // Return an object with the store and all of RTL's query functions
  return { store, ...renderOutput }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is an example of how you can use it in your tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;it('should match snapshot', () =&amp;gt; {
   expect(shallowRender(&amp;lt;MyComponent /&amp;gt;)).toMatchSnapshot();
});

it('should match snapshot with Redux', () =&amp;gt; {
   const {container} = renderWithProviders(&amp;lt;MyComponent /&amp;gt;, {mode: 'shallow'});
   expect(container).toMatchSnapshot();
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, migrating away from Enzyme to React Testing Library can bring numerous benefits to your React application’s testing process. React Testing Library focuses on testing the behaviour of a component from the user’s perspective, making the tests more robust, readable, and easier to maintain. However, it’s important to keep in mind that React Testing Library operates with the real DOM, and can produce large snapshots, which should be kept small and focused.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/snapshot-testing-in-rtl/" rel="noopener noreferrer"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on February 4, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89" rel="noopener noreferrer"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/" rel="noopener noreferrer"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>react</category>
      <category>snapshottesting</category>
      <category>javascript</category>
      <category>testing</category>
    </item>
    <item>
      <title>Optimizing Unit Testing in JavaScript: Tips, Tools, and Best Practices</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Tue, 24 Jan 2023 08:50:32 +0000</pubDate>
      <link>https://dev.to/srshifu/optimizing-unit-testing-in-javascript-tips-tools-and-best-practices-mo0</link>
      <guid>https://dev.to/srshifu/optimizing-unit-testing-in-javascript-tips-tools-and-best-practices-mo0</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70xj6e9l5r7qibdn0fu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70xj6e9l5r7qibdn0fu6.png" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, we will focus on optimizing unit testing in JavaScript. We will discuss some of the best tools for unit testing such as Jest, jest-circus, and React Testing Library (RTL) and show you how to configure these tools for optimal performance. We’ll also provide tips and best practices for improving your testing workflow, so you can ensure that your code is reliable and working as expected. Whether you’re a seasoned developer or just getting started with unit testing, this guide will help you improve your testing process and produce high-quality code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit tests
&lt;/h3&gt;

&lt;p&gt;I prefer to use the following tools for unit testing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://jestjs.io/" rel="noopener noreferrer"&gt;Jest&lt;/a&gt;. It’s a JavaScript testing framework that can be easily integrated with any other testing and development tool.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;jest-circus&lt;/code&gt;, a next-generation test runner built on top of Jest. Starting with Jest v27, it comes as the default test runner in the Jest configuration.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://testing-library.com/docs/react-testing-library/intro/" rel="noopener noreferrer"&gt;React Testing Library&lt;/a&gt;. It’s a go-to tool for testing React components. I used to have Enzyme on this list a few years ago but unfortunately, it’s not maintained anymore and I highly don’t recommend using it in your project. You can read about the strategy and tools I use to migrate projects from Enzyme to RTL in my &lt;a href="https://thesametech.com/migrate-away-from-enzyme/" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuration
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;React Testing Library setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;JSDOM’s implementation of &lt;code&gt;getComputedStyle()&lt;/code&gt; is super slow. As I do not usually do any style-specific assertions and validations in my unit tests, I prefer to mock this API to speed up RTL tests by up to 2x.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// somewhere in your setup file
global.getComputedStyle = () =&amp;gt; {
  return {
    getPropertyValue: () =&amp;gt; {
      return undefined;
    },
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Alternatively, you can switch from the JSDOM implementation to LightDOM. You can also try combining both approaches.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No network calls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Network calls in unit tests might significantly slow down test performance. There is an easy solution for this: the &lt;code&gt;jest-offline&lt;/code&gt; library. It will fast-fail all tests that attempt to access the network. If you are using the &lt;code&gt;fetch&lt;/code&gt; API in your project, then &lt;code&gt;jest-fetch-mock&lt;/code&gt; will automatically mock all API requests for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { enableFetchMocks } from 'jest-fetch-mock'; 

enableFetchMocks();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But some third-party libraries might use different mechanisms to integrate with the network API (good old XHR requests, for instance). Therefore, I prefer to use these two tools together — they complement each other perfectly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transpiler&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you are using &lt;code&gt;babel-jest&lt;/code&gt; or &lt;code&gt;ts-jest&lt;/code&gt; in your Jest configuration to transpile JavaScript/TypeScript files, then I have bad news for you - they will slow down your tests too! Nowadays, cool kids use the &lt;code&gt;@swc/jest&lt;/code&gt; or &lt;code&gt;esbuild&lt;/code&gt; transpilers, written in Rust and Go respectively, which are much faster. It's easy to switch - I highly recommend you do it right away. It's a real low-hanging fruit for improving your test performance.&lt;/p&gt;
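&lt;p&gt;As a sketch of what the switch looks like, the Jest config fragment below hands transpilation to swc. The transform pattern and the &lt;code&gt;@swc/jest&lt;/code&gt; package name follow swc’s own documentation rather than this article, so double-check them against the version you install.&lt;/p&gt;

```javascript
// Hypothetical jest.config.js fragment: replace babel-jest/ts-jest with @swc/jest.
// The transform regex and package name are assumptions based on swc's docs.
const config = {
  transform: {
    // Hand TS/TSX/JS/JSX files to the Rust-based swc compiler
    "^.+\\.(t|j)sx?$": "@swc/jest",
  },
};

module.exports = config;
```

&lt;p&gt;No other test code needs to change — only the transform entry in the config.&lt;/p&gt;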

&lt;h3&gt;
  
  
  Tools
&lt;/h3&gt;

&lt;p&gt;Use &lt;code&gt;jest-slow-test-reporter&lt;/code&gt; to identify which tests are the slowest! Usually, you will get a significant performance boost by addressing just a small portion of your tests. My rule of thumb: a unit test’s execution time should not exceed 300ms.&lt;/p&gt;
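&lt;p&gt;Wiring the reporter in is a one-line config change. The option names below (&lt;code&gt;numTests&lt;/code&gt;, &lt;code&gt;warnOnSlowerThan&lt;/code&gt;, &lt;code&gt;color&lt;/code&gt;) come from the reporter’s README rather than this article, so verify them against the version you install.&lt;/p&gt;

```javascript
// Hypothetical jest.config.js fragment adding jest-slow-test-reporter.
// Option names are taken from the reporter's README -- verify for your version.
const config = {
  reporters: [
    "default",
    // Print the 8 slowest tests and warn about anything over the 300ms budget
    ["jest-slow-test-reporter", { numTests: 8, warnOnSlowerThan: 300, color: true }],
  ],
};

module.exports = config;
```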

&lt;h3&gt;
  
  
  Local development
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Commands:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   "test": "TZ=America/Los_Angeles LANG=en_US.UTF-8 jest --config jest.config.json --passWithNoTests --maxWorkers=50%",
   "test:bail": "npm test -- --bail",
   "test:diff": "npm test -- -o --watch",
   "test:clean": "jest --clearCache"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Hints:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;TZ=America/Los_Angeles LANG=en_US.UTF-8&lt;/code&gt; environment variables are needed to lock dates and timezones - crucial when you are on Node v14+ and use &lt;code&gt;date-fns-tz&lt;/code&gt; in your code.&lt;/p&gt;
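&lt;p&gt;To see why an unpinned timezone makes date assertions flaky, the standalone sketch below formats the same UTC instant in two zones. It uses an explicit &lt;code&gt;timeZone&lt;/code&gt; option so the output is deterministic regardless of the machine it runs on; the sample date is illustrative.&lt;/p&gt;

```javascript
// The same UTC instant yields a different wall-clock hour per timezone,
// so a test asserting on a formatted hour breaks when TZ differs between
// a developer's laptop and CI -- hence locking TZ in the test command.
const instant = new Date(Date.UTC(2023, 0, 24, 8, 50, 32)); // 08:50 UTC

const hourIn = (timeZone) =>
  new Intl.DateTimeFormat("en-US", { timeZone, hour: "2-digit", hourCycle: "h23" })
    .format(instant);

console.log(hourIn("UTC"));                 // "08"
console.log(hourIn("America/Los_Angeles")); // "00" (UTC-8 in January)
```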

&lt;p&gt;&lt;code&gt;--maxWorkers=50%&lt;/code&gt; usually helps improve test performance by around 20%, but requires careful benchmarking. Inspired by this &lt;a href="https://dev.to/vantanev/make-your-jest-tests-up-to-20-faster-by-changing-a-single-setting-i36"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I prefer to use a JSON file (&lt;code&gt;jest.config.json&lt;/code&gt;) for configuration rather than &lt;code&gt;.js&lt;/code&gt; or &lt;code&gt;.ts&lt;/code&gt;. Reason: &lt;code&gt;jest&lt;/code&gt; pulls in &lt;code&gt;ts-node&lt;/code&gt; to transpile &lt;code&gt;jest.config.ts&lt;/code&gt;, which adds startup overhead. Moving from a TypeScript config to JSON usually gives a 2x startup improvement; for JS configs, performance is pretty much the same as JSON.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;--bail&lt;/code&gt; flag is useful when you need a quick sanity check that all tests still pass - it runs tests until the first failure. If a test fails, switch to a more suitable command for continuous troubleshooting and fixing the tests.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;test:diff&lt;/code&gt; will run only the tests related to files changed in your local Git workspace, and will do it in watch mode. This is the most-used command in my arsenal; I usually run it while refactoring existing code (for example, rewriting a React class component to a functional component). With good coverage and well-written test code, it's super easy to catch regressions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Submitting code review
&lt;/h3&gt;

&lt;p&gt;If you still don’t use &lt;code&gt;husky&lt;/code&gt; and &lt;code&gt;lint-staged&lt;/code&gt; as part of Git pre-commit hook integration, then you should. Here is the configuration I usually have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  "lint-staged": {
    "*.{js,jsx}": [
      "eslint -c ./.eslintrc.js --fix",
      "git add",
      "npm test -- --findRelatedTests"
    ]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;npm test -- --findRelatedTests&lt;/code&gt; will run only tests related to files that have changed. You can read more about how it works in one of my previous &lt;a href="https://thesametech.com/under-the-hood-jest-related-tests/" rel="noopener noreferrer"&gt;articles&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Storybook
&lt;/h3&gt;

&lt;p&gt;If your project uses Storybook to run your components in isolation (let’s say you have an internal UI library), then you can also include &lt;code&gt;start-storybook --smoke-test&lt;/code&gt; in your pre-commit configuration - it will execute a dry run of the start command and fail immediately if something is wrong (maybe you have a breaking change in a component contract?).&lt;/p&gt;

&lt;p&gt;Also, you can use the StoryShots add-on to automatically cover all components in your UI library with snapshot tests.&lt;/p&gt;

&lt;h3&gt;
  
  
  Continuous integration (CI)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Command&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"test:ci": "npm run test -- --runInBand"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;According to this GitHub &lt;a href="https://github.com/facebook/jest/issues/8202" rel="noopener noreferrer"&gt;issue&lt;/a&gt;, &lt;code&gt;--runInBand&lt;/code&gt; performs better in CI environments than in local ones due to the resource constraints of such environments. Simply put, with &lt;code&gt;runInBand&lt;/code&gt; we tell Jest to run tests serially rather than orchestrate a thread pool of test workers. Please benchmark it if you can - in my experience, "it depends". I guess I was lucky enough to have powerful CI machines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, unit testing is a crucial part of software development that ensures the quality and reliability of your code. By choosing the right tools and configuring them properly, you can improve the performance of your tests and streamline your testing workflow.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/optimizing-unit-testing-javascript-tips-tools-best-practices/" rel="noopener noreferrer"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on January 24, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89" rel="noopener noreferrer"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/" rel="noopener noreferrer"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>unittest</category>
      <category>jest</category>
      <category>testing</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>Migrate away from Enzyme</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Mon, 16 Jan 2023 06:37:11 +0000</pubDate>
      <link>https://dev.to/srshifu/migrate-away-from-enzyme-6e4</link>
      <guid>https://dev.to/srshifu/migrate-away-from-enzyme-6e4</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d3i2tkc6jlzxuqhkeho.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0d3i2tkc6jlzxuqhkeho.jpg" alt="upgrade jump" width="576" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What will you learn from this article?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Applying an incremental (piecemeal) approach to migrate away from Enzyme;&lt;/li&gt;
&lt;li&gt;How to create a custom ESLint plugin with rules and formatter;&lt;/li&gt;
&lt;li&gt;How to use the new plugin in order to prevent pushing new code using Enzyme’s APIs as well as tracking the progress of the migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What is wrong with Enzyme?
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Enzyme&lt;/em&gt; is a popular testing library for React applications, but it has limitations. One of the main criticisms of Enzyme is that it encourages a testing style focused on implementation details rather than behaviour. This can lead to brittle tests that break easily when the implementation of a component changes. Another limitation of Enzyme is that it is tightly coupled to the React library. This makes it challenging to use with other libraries or frameworks and can make it harder to write tests that are truly isolated from the implementation of the component.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If your project is using React+Enzyme and you dream about upgrading React to version 18 (and enjoying cool features like server-side and concurrent rendering), then I have bad news for you — Enzyme is dead and there will be no official adapters compatible with future versions of React (you can still find some unofficial libraries, but I would not trust them).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;MISSION&lt;/strong&gt; of this article is not only to explain the approach you may take to shift your test code to a new library but also to propose an idea of how you can automate the monitoring of the migration progress.&lt;/p&gt;

&lt;h3&gt;
  
  
  Time to migrate!
&lt;/h3&gt;

&lt;p&gt;Migrating from Enzyme to a different testing library can be a daunting task, especially if you’re working on a large codebase. However, by taking an incremental approach, you can make the process much more manageable.&lt;/p&gt;

&lt;p&gt;An incremental migration involves gradually transitioning your tests from Enzyme to the new library, rather than trying to make the switch all at once. This approach has several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It allows you to test the new library in a smaller, more controlled environment before committing to a full-scale migration.&lt;/li&gt;
&lt;li&gt;It minimizes the risk of introducing regressions or breaking existing tests.&lt;/li&gt;
&lt;li&gt;It allows you to learn and become familiar with the new library as you go, rather than trying to absorb everything at once.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s an example of how you might approach an incremental migration from Enzyme to a new library like React Testing Library (RTL):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start by identifying the tests in your codebase that are using Enzyme. These are typically the tests that import Enzyme or Enzyme-specific methods (such as &lt;code&gt;shallow&lt;/code&gt; or &lt;code&gt;mount&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Begin by migrating a small subset of these tests to RTL. This might be a component or set of components that you’re already familiar with, or a section of the codebase that doesn’t have a lot of dependencies.&lt;/li&gt;
&lt;li&gt;As you migrate each test, pay attention to how the test is structured and how it interacts with the component being tested. Take note of any differences in how the new library handles things like querying for elements or simulating events.&lt;/li&gt;
&lt;li&gt;As you migrate more tests, you’ll start to get a better sense of how RTL differs from Enzyme. Take advantage of this learning opportunity to refactor your tests and improve their overall structure and readability.&lt;/li&gt;
&lt;li&gt;Repeat steps 2–4 until all of your tests are using the new library.&lt;/li&gt;
&lt;li&gt;Finally, after you’ve completed the migration, run your whole test suite to confirm everything works as expected.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, an incremental migration is a great way to move away from Enzyme and transition to a new testing library. By taking a step-by-step approach, you can minimize the risk of breaking things, learn the new library as you go, and make the overall migration process much more manageable. However, it is hard to know whether this approach is succeeding without metrics that describe success. As the tech lead of the team, how can you know that the team is following the strategy you came up with, and how far away the finish line (&lt;em&gt;step #6&lt;/em&gt;) is?&lt;/p&gt;

&lt;h3&gt;
  
  
  TLDR; Link to the code
&lt;/h3&gt;

&lt;p&gt;GitHub repo: &lt;a href="https://github.com/sr-shifu/eslint-plugin-enzyme-deprecation" rel="noopener noreferrer"&gt;https://github.com/sr-shifu/eslint-plugin-enzyme-deprecation&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Write a plugin to track the progress!
&lt;/h3&gt;

&lt;p&gt;What tool do you usually use to enforce the coding style and find potential errors in your project? You probably got it right — &lt;a href="https://eslint.org/" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt;! (If you answered TSLint, you’re a little behind the times!). ESLint is highly customizable: you can set your own rules, use your own formatter, or combine both inside one plugin! It can also be easily integrated into a development workflow, such as a continuous integration pipeline, to automatically report any issues before they are committed.&lt;/p&gt;

&lt;p&gt;Writing a custom ESLint plugin is not as hard as you may think, but you might need to spend some time learning more about the Abstract Syntax Tree (a tree representation of the abstract syntactic structure of source code written in a programming language) and ESLint &lt;a href="https://eslint.org/docs/latest/extend/selectors" rel="noopener noreferrer"&gt;selectors&lt;/a&gt; used to traverse this tree. Without further ado, let me introduce my solution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const noShallowRule = require("./rules/no-shallow");
const noMountRule = require("./rules/no-mount");

const rules = {
  "no-shallow": noShallowRule,
  "no-mount": noMountRule,
};
module.exports = {
  rules,
  configs: {
    recommended: {
      rules,
    },
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;THE END!&lt;/p&gt;

&lt;p&gt;.&lt;/p&gt;

&lt;p&gt;.&lt;/p&gt;

&lt;p&gt;.&lt;/p&gt;

&lt;p&gt;.&lt;/p&gt;

&lt;p&gt;Just kidding! Now, will come the most interesting part!&lt;/p&gt;

&lt;h3&gt;
  
  
  Rules are rules
&lt;/h3&gt;

&lt;p&gt;Let’s go deeper into the rule’s code. Both &lt;code&gt;no-shallow&lt;/code&gt; and &lt;code&gt;no-mount&lt;/code&gt; rules use the same logic (the idea of breaking them apart is simply to give users more flexibility in what they want to deprecate), so let's dive deep into one of them (I picked &lt;code&gt;shallow&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const schema = require("./schema");
const astUtils = require("ast-utils");

const resolveEnzymeIdentifierInScope = (scope, name) =&amp;gt; {
  if (!scope) {
    return false;
  }
  const node = scope.set.get(name);
  if (node != null) {
    const nodeDef = node.defs[0];
    if (
      nodeDef.type === "ImportBinding" &amp;amp;&amp;amp;
      nodeDef.parent.source.value === "enzyme"
    ) {
      return true;
    }

    if (
      astUtils.isStaticRequire(nodeDef.node.init) &amp;amp;&amp;amp;
      astUtils.getRequireSource(nodeDef.node.init) === "enzyme"
    ) {
      return true;
    }
  }

  return false;
};

module.exports = {
  meta: {
    messages: {
      noShallowCall: "Enzyme is deprecated: do not use shallow API.",
    },
    docs: {
      description: "Disallow Enzyme shallow rendering",
      category: "Tests",
      recommended: true,
    },
    schema,
    fixable: null,
  },

  create(context) {
    const [options = {}] = context.options || [];
    return {
      "CallExpression"(node) {
        if (
          node.callee.name !== "shallow" &amp;amp;&amp;amp;
          node.callee.property?.name !== "shallow"
        ) {
          return;
        }
        let targetDeclarationName = "shallow";
        if (node.callee.property?.name === "shallow") {
          targetDeclarationName = node.callee.object.name;
        }
        const resolved = context
          .getScope()
          .references.find(
            ({ identifier }) =&amp;gt; identifier.name === targetDeclarationName
          ).resolved;
        const isEnzyme = resolveEnzymeIdentifierInScope(
          resolved?.scope,
          targetDeclarationName
        );
        if (isEnzyme || options.implicitlyGlobal) {
          context.report({ node, messageId: "noShallowCall" });
        }
      },
    };
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;CallExpression&lt;/code&gt; is an ESLint selector that tells ESLint that we are interested only in function calls. These selectors are pretty similar to CSS ones; you can learn more about them &lt;a href="https://eslint.org/docs/latest/extend/selectors" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;node.callee.name&lt;/code&gt; refers to the name of the called function (in our case, &lt;code&gt;shallow&lt;/code&gt;), while &lt;code&gt;node.callee.property?.name&lt;/code&gt; checks whether this function was called as a property of a higher-order object (for instance, &lt;code&gt;const enzymeApi = require('enzyme'); enzymeApi.shallow(&amp;lt;Component /&amp;gt;)&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;context.getScope()&lt;/code&gt; gives a reference to the scope where the target function (&lt;code&gt;shallow&lt;/code&gt;) was called, and has a reference to the object that owns this &lt;code&gt;shallow&lt;/code&gt; method. Basically, what we need to check here is that the &lt;code&gt;shallow&lt;/code&gt; method comes from the enzyme package - usually enzyme is imported into the test module, or required if you are using CommonJS (if you are curious how you can write a library that produces builds for both EcmaScript Modules and CommonJS targets, go to this &lt;a href="https://thesametech.com/how-to-build-typescript-project/" rel="noopener noreferrer"&gt;article&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;options.implicitlyGlobal&lt;/code&gt; is an option that can be provided by the consumer of the rule, for instance in &lt;code&gt;.eslintrc.js&lt;/code&gt; config file. In this specific example, it allows users to tell the rule that they are not interested in the source that shallow comes from &lt;em&gt;(maybe, you have assigned it to the global scope somewhere in your test setup flow - bad idea IMHO).&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;
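&lt;p&gt;The callee check from the first bullet points can be exercised in isolation. The sketch below is a hand-rolled simplification of that logic, run against hand-built objects shaped like ESLint AST nodes — no parser involved, and the node shapes are stripped down for illustration.&lt;/p&gt;

```javascript
// Simplified version of the rule's callee check: does this CallExpression
// invoke something named "shallow", either directly or as a member call?
const isShallowCall = (node) =>
  node.callee.name === "shallow" ||         // shallow(...) imported directly
  node.callee.property?.name === "shallow"; // enzymeApi.shallow(...)

// Hand-built, AST-shaped sample nodes (simplified for illustration)
const directCall = { callee: { type: "Identifier", name: "shallow" } };
const memberCall = {
  callee: {
    type: "MemberExpression",
    object: { name: "enzymeApi" },
    property: { name: "shallow" },
  },
};
const otherCall = { callee: { type: "Identifier", name: "render" } };

console.log(isShallowCall(directCall)); // true
console.log(isShallowCall(memberCall)); // true
console.log(isShallowCall(otherCall));  // false
```

&lt;p&gt;The real rule then goes further, resolving the identifier in scope to confirm it actually comes from the &lt;code&gt;enzyme&lt;/code&gt; package.&lt;/p&gt;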

&lt;h3&gt;
  
  
  Report your progress
&lt;/h3&gt;

&lt;p&gt;For brave souls who made it to this part, thank you, and let’s continue! Now we have rules that we can use to prevent pushing deprecated APIs as part of new and changed code in your PRs (hopefully, you are using Jest’s &lt;a href="https://thesametech.com/under-the-hood-jest-related-tests/" rel="noopener noreferrer"&gt;findRelatedTests&lt;/a&gt; API as part of your Git pre-commit flow).&lt;/p&gt;

&lt;p&gt;But we still don’t know how things look from a bird’s-eye view — maybe the team is mostly working on new parts of the project, completely forgetting about some legacy systems &lt;em&gt;(or bypassing the pre-commit hook — yikes!)&lt;/em&gt;. For this case, we need to write a custom formatter to output the statistics.&lt;/p&gt;

&lt;p&gt;I don’t want to throw huge blocks of code here — you may find it in my GitHub &lt;a href="https://github.com/sr-shifu/eslint-plugin-enzyme-deprecation/blob/main/lib/formatter/dataFormatter.js" rel="noopener noreferrer"&gt;repo&lt;/a&gt; — but I will briefly explain how it works. After running the rules and collecting errors for each test file, the ESLint plugin passes this metadata to the formatter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ 
    filePath: string; 
    messages: Array&amp;lt;{ruleId: string;}&amp;gt; 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the formatter, we group this data by file path and by violated rule id (e.g., &lt;code&gt;enzyme-deprecation/no-shallow&lt;/code&gt;), and pass this processed data to the visualizer that might output this data in different formats. Just a few ideas about what these formats can be:&lt;/p&gt;
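&lt;p&gt;The grouping step can be sketched in a few lines of plain JavaScript. The sample results and the &lt;code&gt;groupByRule&lt;/code&gt; helper below are illustrative, not the plugin’s actual formatter code; only the &lt;code&gt;{ filePath, messages }&lt;/code&gt; shape comes from the metadata described above.&lt;/p&gt;

```javascript
// Illustrative sketch: fold ESLint results ({ filePath, messages: [{ ruleId }] })
// into per-rule violation counts, as the formatter does before visualization.
const results = [
  { filePath: "src/A.test.jsx", messages: [{ ruleId: "enzyme-deprecation/no-shallow" }] },
  {
    filePath: "src/B.test.jsx",
    messages: [
      { ruleId: "enzyme-deprecation/no-shallow" },
      { ruleId: "enzyme-deprecation/no-mount" },
    ],
  },
  { filePath: "src/C.test.jsx", messages: [] }, // already migrated - zero violations
];

const groupByRule = (lintResults) =>
  lintResults.reduce((acc, { messages }) => {
    for (const { ruleId } of messages) {
      acc[ruleId] = (acc[ruleId] || 0) + 1;
    }
    return acc;
  }, {});

console.log(groupByRule(results));
// → { 'enzyme-deprecation/no-shallow': 2, 'enzyme-deprecation/no-mount': 1 }
```

&lt;p&gt;Tracking these counts over time (per commit or per sprint) gives you the migration burndown for free.&lt;/p&gt;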

&lt;ul&gt;
&lt;li&gt;ASCII printed charts in the terminal (for running in local/dev environment)&lt;/li&gt;
&lt;li&gt;Markdown-based file written to the filesystem and pushed to the Git repo (for reviewing progress after each individual PR)&lt;/li&gt;
&lt;li&gt;HTML page using beautiful chart libraries (like D3.js) written to &lt;code&gt;coverage&lt;/code&gt; folder (assuming you might already have some integrations with this folder in your code review tool)&lt;/li&gt;
&lt;li&gt;Plain string message passed to some webhook URL (e.g. Slack notification to code review channel)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How to use it in your project?
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Option 1: Define separate ESLint config for migration
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;.eslintrc.migration.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module.exports = {
  parser: '&amp;lt;your-parser&amp;gt;',
  extends: ['plugin:enzyme-deprecation/recommended'],
  env: {
    browser: true,
  },
  rules: {
    'enzyme-deprecation/no-shallow': 2
  }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And in your &lt;code&gt;package.json&lt;/code&gt; file define command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"track:migration": "NODE_ENV=development eslint --no-eslintrc --config .eslintrc.migration.js -f node_modules/eslint-plugin-enzyme-deprecation/lib/formatter --ext .test.jsx src/"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option 2: Using Node.js API
&lt;/h3&gt;

&lt;p&gt;You can find an example &lt;a href="https://github.com/sr-shifu/eslint-plugin-enzyme-deprecation/blob/main/examples/run.js" rel="noopener noreferrer"&gt;here&lt;/a&gt; &lt;em&gt;(run &lt;code&gt;npm run demo&lt;/code&gt; command in the root directory)&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final words
&lt;/h3&gt;

&lt;p&gt;In summary, an incremental migration approach, combined with automation of monitoring, can help you to migrate your codebase to a new testing library in a controlled and efficient manner. This will help you to write more consistent, error-free code and catch issues early in the development process.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/migrate-away-from-enzyme/" rel="noopener noreferrer"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on January 16, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89" rel="noopener noreferrer"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/" rel="noopener noreferrer"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>migration</category>
      <category>enzyme</category>
      <category>react</category>
      <category>eslint</category>
    </item>
    <item>
      <title>Build your infrastructure from a diagram</title>
      <dc:creator>Ildar Sharafeev</dc:creator>
      <pubDate>Wed, 04 Jan 2023 08:09:35 +0000</pubDate>
      <link>https://dev.to/srshifu/build-your-infrastructure-from-a-diagram-53l4</link>
      <guid>https://dev.to/srshifu/build-your-infrastructure-from-a-diagram-53l4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UrzaI0bf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AhoJU5VOTgtC6pdIj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UrzaI0bf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AhoJU5VOTgtC6pdIj.jpg" alt="cover image" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many of you probably have already heard about the Infrastructure as Code (IaC) process that employs a declarative approach to manage and provision system infrastructure via machine-readable definition files. There are a lot of tools/frameworks/Cloud services nowadays ( &lt;a href="https://www.terraform.io/"&gt;Terraform by HashiCorp&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt;, &lt;a href="https://aws.amazon.com/serverless/sam/"&gt;AWS Serverless Application Model (SAM)&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt;, &lt;a href="https://cloud.google.com/deployment-manager/"&gt;Google Deployment Manager&lt;/a&gt;) that offer this out of the box. But what if I tell you that you can achieve this by simply dragging and dropping blocks on a diagram without writing a line of configuration and/or code?&lt;/p&gt;

&lt;p&gt;At re:Invent 2022, AWS announced the &lt;a href="https://aws.amazon.com/application-composer/"&gt;Application Composer&lt;/a&gt; service (currently in preview) that helps to streamline and accelerate the architecture, configuration, and build of serverless applications.&lt;/p&gt;

&lt;p&gt;This hands-on tutorial will include the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how to create infrastructure via AWS Application Composer Console&lt;/li&gt;
&lt;li&gt;how to test generated infrastructure template locally using SAM and DynamoDB Local image&lt;/li&gt;
&lt;li&gt;creating simple Node.js applications to read/persist data to DynamoDB&lt;/li&gt;
&lt;li&gt;deploying infrastructure and code into your AWS account using SAM&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Development Environment prerequisites
&lt;/h3&gt;

&lt;p&gt;Things you need to do before starting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create your AWS account to access AWS Console&lt;/li&gt;
&lt;li&gt;Install AWS SAM: &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html#install-sam-cli-instructions"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html#install-sam-cli-instructions&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install AWS CLI: &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Install Docker: &lt;a href="https://docs.docker.com/get-docker/"&gt;https://docs.docker.com/get-docker/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Drawing infrastructure diagram
&lt;/h3&gt;

&lt;p&gt;Let’s imagine we need to build the infrastructure for a simple TODO application where a user can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;read all todo items&lt;/li&gt;
&lt;li&gt;create new todo&lt;/li&gt;
&lt;li&gt;update todo by either changing its title or completing it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go to the Application Composer Console and click the “Create project” button. Ensure you are creating a project in Connected mode — this way Application Composer will automatically sync changes between the browser and your local filesystem. This cool functionality is available thanks to the browser-native &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/File_System_Access_API"&gt;File System Access API&lt;/a&gt;. Note that if you are using Chrome/Edge/Opera, the browser will request special permission to grant read-write access to the folder you specified.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c0vPwj-k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ADVGp4QTPBA616ou4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c0vPwj-k--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2ADVGp4QTPBA616ou4.png" alt="" width="800" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you are using Firefox or Safari, this mode will not be available — this is because the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/FileSystemHandle/requestPermission"&gt;FileSystemHandle.requestPermission&lt;/a&gt; and &lt;a href="https://developer.mozilla.org/en-US/docs/Web/API/FileSystemHandle/queryPermission"&gt;FileSystemHandle.queryPermission&lt;/a&gt; APIs are still experimental. However, this only means that you won’t be able to enjoy the automatic sync of infrastructure template changes and will have to download the template manually every time you apply changes to your diagram.&lt;/p&gt;

&lt;p&gt;Now let’s go to the most interesting part — dragging and dropping 🙂 I drew the following diagram in my project:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d1UsE7OB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AUxAKGIqfZIEwPAaG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d1UsE7OB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/0%2AUxAKGIqfZIEwPAaG.png" alt="" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When designing my infrastructure, I decided to anticipate the read-write pattern of my app and assumed it would be read-heavy — hence, it might require different scaling approaches for read and write operations. So I created two Lambda functions that use API Gateway as an event source. That’s basically it!&lt;/p&gt;

&lt;p&gt;Now you can go to your project folder and find the generated SAM template. Application Composer even defined a &lt;code&gt;DynamoDBCrudPolicy&lt;/code&gt; execution policy for my Lambda functions (narrowed down to the ToDosTable resource - the least-privilege principle in action, how cool is that?). However, I still recommend manually changing it to &lt;code&gt;DynamoDBReadPolicy&lt;/code&gt; for our &lt;code&gt;GetToDos&lt;/code&gt; function.&lt;/p&gt;
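&lt;p&gt;For reference, the relevant fragment of the SAM template after that manual change might look roughly like this (the logical IDs &lt;code&gt;GetToDos&lt;/code&gt; and &lt;code&gt;ToDosTable&lt;/code&gt; and the handler path are assumptions; Application Composer generates its own):&lt;/p&gt;

```yaml
# Hypothetical excerpt of the generated SAM template, with the GetToDos
# function switched from DynamoDBCrudPolicy to the read-only policy.
GetToDos:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/get-todos/index.handler   # assumed path
    Policies:
      - DynamoDBReadPolicy:                # SAM policy template, scoped to one table
          TableName: !Ref ToDosTable
    Environment:
      Variables:
        TABLE_NAME: !Ref ToDosTable        # consumed by the DynamoDB mapper
```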

&lt;h3&gt;
  
  
  Implementation
&lt;/h3&gt;

&lt;p&gt;Now let’s add some code before debugging our application locally.&lt;/p&gt;

&lt;p&gt;Let’s define our DynamoDB mapper first (I used &lt;code&gt;@aws/dynamodb-data-mapper&lt;/code&gt; library for this):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const {
  DynamoDbSchema,
  DynamoDbTable,
  DataMapper,
} = require("@aws/dynamodb-data-mapper");

class ToDoItem {
  get [DynamoDbTable]() {
    return process.env.TABLE_NAME; // Table name will be passed via environment variables
  }
  get [DynamoDbSchema]() {
    return {
      Id: {
        type: "String",
        keyType: "HASH",
      },
      Title: { type: "String" },
      CreatedAt: {
        type: "Number",
      },
      ModifiedAt: {
        type: "Number",
      },
      CompletedAt: {
        type: "Number",
      },
    };
  }
}

class ToDoItemMapper {
  constructor(client) {
    this.mapper = new DataMapper({
      client, // the SDK client used to execute operations
    });
  }
  scan() {
    return this.mapper.scan(ToDoItem);
  }
  getById(id) {
    const item = new ToDoItem();
    item.Id = id;
    return this.mapper.get(item);
  }
  put(item) {
    return this.mapper.put(item);
  }
  update(item) {
    return this.mapper.update(item);
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our list handler will look as simple as this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mapper = new ToDoItemMapper(ddbClient);

exports.handler = async () =&amp;gt; {
  const iterator = mapper.scan();
  const todoItems = [];
  for await (const record of iterator) {
    todoItems.push(transformToModel(record));
  }
  return {
    statusCode: 200,
    body: JSON.stringify(todoItems),
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CreateOrUpdate handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const mapper = new ToDoItemMapper(ddbClient);

const createToDo = async ({ title }) =&amp;gt; {
  if (!title) {
    throw new Error(
      "InvalidParameterException: title attribute is required"
    );
  }
  const item = new ToDoItem();
  const now = Date.now();
  item.Id = uuid.v4();
  item.Title = title;
  item.CreatedAt = now;
  item.ModifiedAt = now;

  const persisted = await mapper.put(item);
  return transformToModel(persisted);
};

const updateToDo = async (item) =&amp;gt; {
  if (!item.id) {
    throw new Error("InvalidParameterException: id attribute is required");
  }
  const itemToUpdate = await mapper.getById(item.id);
  itemToUpdate.ModifiedAt = Date.now();
  itemToUpdate.Title = item.title;
  itemToUpdate.CompletedAt = item.isCompleted === true ? Date.now() : undefined;

  const persisted = await mapper.put(itemToUpdate);
  return transformToModel(persisted);
};

exports.handler = async (event) =&amp;gt; {
  if (event.requestContext.httpMethod === "POST") {
    const newItem = await createToDo(JSON.parse(event.body));
    return {
      statusCode: 200,
      body: JSON.stringify(newItem),
    };
  }

  if (event.requestContext.httpMethod === "PUT") {
    const id = event.pathParameters.id;
    const requestPayload = JSON.parse(event.body);
    const updatedItem = await updateToDo({ ...requestPayload, id });
    return {
      statusCode: 200,
      body: JSON.stringify(updatedItem),
    };
  }
  return {
    statusCode: 405,
    body: "Method not supported",
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
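&lt;p&gt;The create/update handler relies on a &lt;code&gt;transformToModel&lt;/code&gt; helper that maps a DynamoDB record to the API response shape. It is not shown here, but a plausible sketch (the field mapping is an assumption inferred from the curl responses later on) could be:&lt;/p&gt;

```javascript
// Hypothetical sketch of the transformToModel helper used by the handlers.
// The exact mapping is an assumption: the API model exposes lowercase keys
// and a derived isCompleted flag instead of the raw CompletedAt timestamp.
function transformToModel(item) {
  return {
    id: item.Id,
    title: item.Title,
    isCompleted: item.CompletedAt !== undefined && item.CompletedAt !== null,
  };
}

console.log(transformToModel({ Id: "123", Title: "test ToDo" }));
```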



&lt;p&gt;You can find a full example in my GitHub &lt;a href="https://github.com/sr-shifu/app-composer-demo"&gt;&lt;em&gt;repo&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running locally
&lt;/h3&gt;

&lt;p&gt;Now let’s try to run our application locally before deploying it to production. SAM comes with a &lt;code&gt;start-api&lt;/code&gt; command that starts a local API Gateway instance routing requests to local Lambda runtimes. However, we need to persist our data somewhere. The simplest solution would be to connect our local Lambdas to DynamoDB running in the cloud (for instance, if you have a staging environment that replicates production). But for our example, let's assume we don't have any environment set up yet and run an in-memory DynamoDB instance locally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8000:8000 amazon/dynamodb-local
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here comes the first challenge: SAM also uses Docker to run the local API Gateway and Lambda functions, and the DynamoDB Local process is &lt;strong&gt;NOT&lt;/strong&gt; running &lt;strong&gt;INSIDE&lt;/strong&gt; that container (&lt;em&gt;localhost&lt;/em&gt; there refers to the Lambda container itself) — any request to &lt;code&gt;http://localhost:8000&lt;/code&gt; from within the Lambda function will fail.&lt;/p&gt;

&lt;p&gt;The fix is simple — create a Docker network and explicitly specify it for both SAM and DynamoDB containers!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker network create sam-demo-net
docker run -p 8000:8000 --network sam-demo-net --name ddblocal amazon/dynamodb-local
sam local start-api --env-vars json/env.json --docker-network sam-demo-net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can use Docker’s service discovery feature and access the DynamoDB Local endpoint using the container name (&lt;code&gt;ddblocal&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const ddbClient = new DynamoDb({
  ...(process.env.AWS_SAM_LOCAL === "true"
    ? { endpoint: "http://ddblocal:8000" }
    : {}),
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See full launch instructions in the &lt;a href="https://github.com/sr-shifu/app-composer-demo/blob/main/README.md"&gt;README file&lt;/a&gt;.&lt;/p&gt;
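&lt;p&gt;One thing to keep in mind: DynamoDB Local starts empty, and &lt;code&gt;sam local start-api&lt;/code&gt; does not create the tables defined in your template, so the table has to be created against the local endpoint first. A minimal sketch of the &lt;code&gt;CreateTable&lt;/code&gt; input matching the &lt;code&gt;ToDoItem&lt;/code&gt; schema above (the table name is an assumption; use whatever &lt;code&gt;TABLE_NAME&lt;/code&gt; is set to in &lt;code&gt;env.json&lt;/code&gt;):&lt;/p&gt;

```javascript
// Sketch: CreateTable input for DynamoDB Local, mirroring the ToDoItem
// schema (Id as the HASH key). "ToDosTable" is an assumed table name.
function buildCreateTableParams(tableName) {
  return {
    TableName: tableName,
    AttributeDefinitions: [{ AttributeName: "Id", AttributeType: "S" }],
    KeySchema: [{ AttributeName: "Id", KeyType: "HASH" }],
    BillingMode: "PAY_PER_REQUEST", // no capacity planning needed locally
  };
}

// Usage against the local endpoint, e.g.:
//   ddbClient.createTable(buildCreateTableParams("ToDosTable"));
console.log(buildCreateTableParams("ToDosTable").TableName);
```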

&lt;p&gt;Now it’s time to test it!&lt;/p&gt;

&lt;p&gt;Insert a todo item into the table by executing the following curl command at the prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST -d '{"title":"test ToDo"}' http://127.0.0.1:3000/todos

{"id":"25962e09-7f16-4ab9-ac88-64f8c4a20710","title":"test ToDo","isCompleted":false}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s retrieve the todo items from the local DynamoDB instance by executing the following curl command at the prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://127.0.0.1:3000/todos 

[{"id":"25962e09-7f16-4ab9-ac88-64f8c4a20710","title":"test ToDo","isCompleted":false}]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, let’s complete our todo item:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X PUT -d '{"title":"test ToDo (completed)", "isCompleted": true}' http://127.0.0.1:3000/todos/25962e09-7f16-4ab9-ac88-64f8c4a20710

{"id":"25962e09-7f16-4ab9-ac88-64f8c4a20710","title":"test ToDo (completed)","isCompleted":true}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All works — great! Let’s deploy!&lt;/p&gt;

&lt;h3&gt;
  
  
  Time to launch!
&lt;/h3&gt;

&lt;p&gt;With AWS SAM, deployment is as simple as running one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--guided&lt;/code&gt; flag launches a wizard that helps you configure deployment options (AWS CloudFormation stack name, AWS region, and so on). Once you complete the wizard for the first time, you will be offered the option to save this deployment configuration and reuse it in upcoming deployments.&lt;/p&gt;
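&lt;p&gt;The saved configuration lands in a &lt;code&gt;samconfig.toml&lt;/code&gt; file next to your template, roughly like this (the stack name and region below are placeholder assumptions):&lt;/p&gt;

```toml
# Hypothetical samconfig.toml written by `sam deploy --guided`;
# stack_name and region are placeholders.
version = 0.1

[default.deploy.parameters]
stack_name = "app-composer-demo"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
resolve_s3 = true
```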

&lt;p&gt;As a result of this operation, SAM will transform its template into a CloudFormation-compatible format and create a stack based on it in the account you specified in your AWS SDK config.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final thoughts
&lt;/h3&gt;

&lt;p&gt;It would be exciting to see how this service evolves in the future. Will it introduce a new approach that extends Infrastructure as Code (IaC) into Infrastructure as Diagram (IaD)? At the very least, it would be nice to see in-depth integration with SAM APIs (deploy, package, pipeline bootstrap) that would give users a “native” way to provision AWS infrastructure directly from the AWS Application Composer Console.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://thesametech.com/infra-as-diagram/"&gt;&lt;em&gt;https://thesametech.com&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on January 4, 2023.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can also&lt;/em&gt; &lt;a href="https://twitter.com/mastershifu89"&gt;&lt;em&gt;follow me on Twitter&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://t.me/the_same_tech"&gt;&lt;em&gt;subscribe to my Telegram channel&lt;/em&gt;&lt;/a&gt;, &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/isharafeev/"&gt;&lt;em&gt;connect on LinkedIn&lt;/em&gt;&lt;/a&gt; &lt;em&gt;to get notifications about new posts!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>node</category>
      <category>infrastructure</category>
      <category>serverless</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
