<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Caio Borghi</title>
    <description>The latest articles on DEV Community by Caio Borghi (@ocodista).</description>
    <link>https://dev.to/ocodista</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1086283%2F36b1ce5d-7dd2-4a1c-abd4-a107f42b9ac3.jpeg</url>
      <title>DEV Community: Caio Borghi</title>
      <link>https://dev.to/ocodista</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ocodista"/>
    <language>en</language>
    <item>
      <title>React Query - why does it matter?</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Mon, 01 Sep 2025 17:10:06 +0000</pubDate>
      <link>https://dev.to/ocodista/react-query-why-does-it-matter-43j4</link>
      <guid>https://dev.to/ocodista/react-query-why-does-it-matter-43j4</guid>
      <description>&lt;p&gt;It avoids useEffect hell and handles: request state management, caching, refetching, retrying, &lt;em&gt;"suspending"&lt;/em&gt; and error treatment; out of the box.&lt;/p&gt;

&lt;p&gt;It helps with &lt;em&gt;Asynchronous State&lt;/em&gt; management.&lt;/p&gt;

&lt;h2&gt;
  
  
  useEffect hell
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://react.dev/learn/you-might-not-need-an-effect" rel="noopener noreferrer"&gt;You probably don't need useEffect&lt;/a&gt;, specially for handling requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code difference
&lt;/h2&gt;

&lt;h3&gt;
  
  
  bad
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;UniverseList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;universes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setUniverses&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;controller&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AbortController&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;loadUniverses&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/universes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;signal&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Error: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; - &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="nf"&gt;setUniverses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;jsonResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AbortError&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to load universes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nf"&gt;loadUniverses&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;controller&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;abort&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="na"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;))}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;UniverseList&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  good
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Suspense&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useSuspenseQuery&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@tanstack/react-query&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ErrorBoundary&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-error-boundary&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;FIVE_MINUTES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TEN_MINUTES&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fetchUniverses&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/universes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ok&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Error: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; - &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jsonResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;jsonResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;UniverseListContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;universes&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSuspenseQuery&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;queryKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;universes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;queryFn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fetchUniverses&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;staleTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FIVE_MINUTES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;gcTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TEN_MINUTES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;retryDelay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;attemptIndex&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;attemptIndex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;))}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ErrorFallback&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;resetErrorBoundary&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;role&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;alert&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Something&lt;/span&gt; &lt;span class="nx"&gt;went&lt;/span&gt; &lt;span class="nx"&gt;wrong&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/p&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;pre&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/pre&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;button&lt;/span&gt; &lt;span class="nx"&gt;onClick&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;resetErrorBoundary&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Try&lt;/span&gt; &lt;span class="nx"&gt;again&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/button&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;UniverseListWithSuspense&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ErrorBoundary&lt;/span&gt;
      &lt;span class="nx"&gt;FallbackComponent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;ErrorFallback&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;onReset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reload&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Suspense&lt;/span&gt; &lt;span class="nx"&gt;fallback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt; &lt;span class="nx"&gt;universes&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;}&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UniverseListContent&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Suspense&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ErrorBoundary&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nx"&gt;UniverseListWithSuspense&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why bad?
&lt;/h3&gt;

&lt;p&gt;The repeated code required to manage async state will spread like garden weeds as the project scales.&lt;/p&gt;

&lt;p&gt;If you're not willing to use React Query, at least create your own decoupled hooks and make sure to test them properly.&lt;/p&gt;
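&lt;p&gt;One way to keep such a hand-rolled hook testable is to pull the request-state transitions out into a pure function. This is a hypothetical framework-free sketch (not React Query code); the names are made up, and the point is only that decoupled logic can be unit-tested without rendering a component:&lt;/p&gt;

```javascript
// Pure state machine for async request state (illustrative sketch).
// Because it has no React or network dependency, every transition
// can be asserted directly in a unit test.
const initialState = { status: 'loading', data: null, error: null };

function asyncReducer(state, action) {
  switch (action.type) {
    case 'fetch':   // a (re)fetch started: keep stale data, clear error
      return { status: 'loading', data: state.data, error: null };
    case 'resolve': // the request succeeded
      return { status: 'success', data: action.data, error: null };
    case 'reject':  // the request failed
      return { status: 'error', data: null, error: action.error };
    default:
      return state;
  }
}
```

&lt;p&gt;A custom hook would then only wire this reducer to an effect, keeping the interesting logic out of the component tree.&lt;/p&gt;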

&lt;h2&gt;
  
  
  caching
&lt;/h2&gt;

&lt;p&gt;React Query can cache your endpoints' results and expire them after a configurable expiration time.&lt;/p&gt;

&lt;p&gt;It also allows you to tie &lt;em&gt;tags&lt;/em&gt; to &lt;em&gt;queries&lt;/em&gt; and invalidate them on &lt;em&gt;mutations&lt;/em&gt;.&lt;/p&gt;
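&lt;p&gt;In React Query itself this is typically done by calling &lt;code&gt;queryClient.invalidateQueries&lt;/code&gt; with a query key from a mutation's success handler. As a toy model of the idea (a hypothetical sketch, not React Query's internals): queries are cached under a key, and invalidating the key forces the next read to refetch:&lt;/p&gt;

```javascript
// Toy key-based query cache with invalidation (illustrative only).
const cache = new Map();

async function query(key, fetcher) {
  if (cache.has(key)) return cache.get(key); // cache hit: no network call
  const data = await fetcher();
  cache.set(key, data);
  return data;
}

function invalidate(key) {
  cache.delete(key); // the next query(key) will refetch
}
```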

&lt;h2&gt;
  
  
  refetching
&lt;/h2&gt;

&lt;p&gt;As easy as calling a function, as it should be.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;UniverseListContent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;universes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;refetch&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSuspenseQuery&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;queryKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;universes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;queryFn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;fetchUniverses&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;staleTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;FIVE_MINUTES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;gcTime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TEN_MINUTES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;retryDelay&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;attemptIndex&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="nx"&gt;attemptIndex&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;button&lt;/span&gt; &lt;span class="nx"&gt;onClick&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;refetch&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Reload&lt;/span&gt; &lt;span class="nx"&gt;All&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/button&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;universe&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;))}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>react</category>
      <category>javascript</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Go 1.24 uses Swiss Tables, what are they?</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Thu, 20 Feb 2025 01:13:28 +0000</pubDate>
      <link>https://dev.to/ocodista/go-124-uses-swiss-table-what-are-they-3c2l</link>
      <guid>https://dev.to/ocodista/go-124-uses-swiss-table-what-are-they-3c2l</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;
The Old Map

&lt;ul&gt;
&lt;li&gt;
Chaining

&lt;ul&gt;
&lt;li&gt;Practical Example&lt;/li&gt;
&lt;li&gt;Problem&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

Swiss Table

&lt;ul&gt;
&lt;li&gt;Linear Probing&lt;/li&gt;
&lt;li&gt;Steroids (SSE3)&lt;/li&gt;
&lt;li&gt;Metadata&lt;/li&gt;
&lt;li&gt;Problem&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Elastic Hashing

&lt;ul&gt;
&lt;li&gt;What is it?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;li&gt;References&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In v1.24, Go replaced its &lt;em&gt;Map&lt;/em&gt; implementation with a new kind of hash table called a &lt;em&gt;Swiss Table&lt;/em&gt; (also known as &lt;em&gt;flat_hash_map&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Swiss Table?&lt;/strong&gt;&lt;br&gt;
A Swiss Table is a hash map with a cache-friendly, more memory-efficient layout that makes comparisons and insertions faster.&lt;/p&gt;

&lt;p&gt;It also uses a different strategy for resolving collisions: &lt;em&gt;linear probing on steroids&lt;/em&gt; instead of the previous strategy, &lt;em&gt;chaining&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When working with hash-tables, one thing is certain: &lt;strong&gt;there will be conflict&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8e1fmhjwr0zffytumc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8e1fmhjwr0zffytumc7.png" alt="bytes conflict image" width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The Old Map
&lt;/h2&gt;

&lt;p&gt;The previous implementation was highly tuned for memory efficiency and performance.&lt;/p&gt;

&lt;p&gt;Go team is awesome.&lt;/p&gt;
&lt;h3&gt;
  
  
  Chaining
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;How does it work?&lt;/strong&gt; It pre-allocates memory in the form of &lt;em&gt;buckets&lt;/em&gt;, where each bucket can have up to 8 &lt;em&gt;key-value&lt;/em&gt; pairs. When a bucket is full (or half-full), the algorithm allocates a new &lt;em&gt;overflow bucket&lt;/em&gt; as a linked list, in a process known as &lt;em&gt;chaining&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;As the table approaches a high &lt;em&gt;load factor&lt;/em&gt;*, the runtime moves all entries to a new memory block (usually twice as big as the previous one); this is known as &lt;em&gt;rehashing&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;* Load Factor = used positions / total capacity&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;In traditional chaining, whenever there is a conflict, the runtime allocates memory for a new &lt;em&gt;node&lt;/em&gt;/&lt;em&gt;key-value pair&lt;/em&gt; and stores it as a linked-list.&lt;/p&gt;

&lt;p&gt;I'll focus on how &lt;em&gt;chaining&lt;/em&gt; works for nodes and leave the &lt;em&gt;buckets&lt;/em&gt; strategy explanation for the &lt;a href="https://github.com/golang/go/blob/master/src/runtime/map_noswiss.go#L365" rel="noopener noreferrer"&gt;Go 1.23 Map Implementation&lt;/a&gt;.&lt;/p&gt;
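&lt;p&gt;To make &lt;em&gt;chaining&lt;/em&gt; concrete, here is a minimal Go sketch (a toy, not the runtime's real code) using the article's &lt;code&gt;len(key)&lt;/code&gt; hash, where each slot holds a chain that collisions simply grow:&lt;/p&gt;

```go
package main

import "fmt"

type entry struct {
	key   string
	value int
}

// hash mimics the article's toy function: the index is just len(key).
func hash(key string, size int) int {
	return len(key) % size
}

// insert appends the pair to the chain at the hashed slot; collisions
// simply grow the chain, which in a real runtime means chasing a pointer
// to a node allocated far away in memory.
func insert(table [][]entry, key string, value int) {
	i := hash(key, len(table))
	table[i] = append(table[i], entry{key, value})
}

func main() {
	table := make([][]entry, 8)
	insert(table, "C", 1)
	insert(table, "JS", 2)
	insert(table, "PHP", 3)
	insert(table, "Go", 4) // len("Go") == 2, collides with "JS"
	fmt.Println(table[2])  // [{JS 2} {Go 4}]
}
```

&lt;p&gt;Running it shows "JS" and "Go" sharing slot 2 in a single chain, just like the table above.&lt;/p&gt;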
&lt;h4&gt;
  
  
  Practical Example
&lt;/h4&gt;

&lt;p&gt;Imagine a hash function that, based on a string, returns an &lt;strong&gt;index&lt;/strong&gt; (offset value) to a fixed memory address:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;hash(x) = len(x)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Usually, these functions try to produce an &lt;a href="https://www.geeksforgeeks.org/avalanche-effect-in-cryptography/" rel="noopener noreferrer"&gt;&lt;em&gt;avalanche effect&lt;/em&gt;&lt;/a&gt; to distribute the results "evenly" over the map memory interval.&lt;/p&gt;

&lt;p&gt;Our hash function is highly vulnerable to conflicts, as it only uses the string's length to define the address.&lt;/p&gt;

&lt;p&gt;Now, let's assume we add the following keys:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;key&lt;/th&gt;
&lt;th&gt;index&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JS&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PHP&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;And this table:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;index&lt;/th&gt;
&lt;th&gt;key&lt;/th&gt;
&lt;th&gt;memory&lt;/th&gt;
&lt;th&gt;next&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;JS&lt;/td&gt;
&lt;td&gt;1008&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;PHP&lt;/td&gt;
&lt;td&gt;1016&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;PERL&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;When we add a new "Go" key, we cause a conflict.&lt;/p&gt;

&lt;p&gt;As the index of "Go" is 2 and there is already "JS" at position 2, the strategy will allocate a new &lt;em&gt;node&lt;/em&gt; in the available memory and point the &lt;em&gt;next&lt;/em&gt; prop of "JS" to "Go".&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;index&lt;/th&gt;
&lt;th&gt;key&lt;/th&gt;
&lt;th&gt;memory&lt;/th&gt;
&lt;th&gt;next&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;JS&lt;/td&gt;
&lt;td&gt;1008&lt;/td&gt;
&lt;td&gt;2064&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;PHP&lt;/td&gt;
&lt;td&gt;1016&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;PERL&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;__&lt;/td&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;2064&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So we have keys that are used as inputs to a &lt;em&gt;hash function&lt;/em&gt;, which generates an &lt;em&gt;index&lt;/em&gt; (&lt;em&gt;an offset from a fixed memory address, the start of the list&lt;/em&gt;), and the &lt;em&gt;keys&lt;/em&gt;/&lt;em&gt;nodes&lt;/em&gt;/&lt;em&gt;buckets&lt;/em&gt; are inserted as linked lists.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouytnpiflwk79o58jdo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fouytnpiflwk79o58jdo2.png" alt="Chaining table representation" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Problem
&lt;/h4&gt;

&lt;p&gt;As you can see, &lt;strong&gt;each new node is placed far from its conflicting key in memory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This means that, when searching for a key that has conflicts (&lt;em&gt;Go&lt;/em&gt;, for example), the processor will fetch sparse addresses in memory.&lt;/p&gt;

&lt;p&gt;Even though the previous Go implementation used many performance techniques (such as 8-slot buckets instead of single nodes, and partial key comparison: 7 bits instead of 64), &lt;strong&gt;this chaining approach is not cache-friendly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You can check the specific details &lt;a href="https://github.com/golang/go/blob/master/src/runtime/map_noswiss.go#L365" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Swiss Table
&lt;/h2&gt;

&lt;p&gt;It's a clever implementation that uses 1 byte of metadata per slot and &lt;em&gt;linear probing&lt;/em&gt; on steroids.&lt;/p&gt;
&lt;h3&gt;
  
  
  Linear Probing
&lt;/h3&gt;

&lt;p&gt;No more dynamic memory allocation for &lt;em&gt;buckets&lt;/em&gt; with &lt;em&gt;linked lists&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Consider this table:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;index&lt;/th&gt;
&lt;th&gt;key&lt;/th&gt;
&lt;th&gt;memory&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;1008&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;JS&lt;/td&gt;
&lt;td&gt;1016&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;PHP&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;PERL&lt;/td&gt;
&lt;td&gt;1032&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;td&gt;1040&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;td&gt;...&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;N&lt;/td&gt;
&lt;td&gt;nil&lt;/td&gt;
&lt;td&gt;N_ADDRESS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A better visualization in this case is to look at the hash table as a, well, &lt;em&gt;flat_hash_map&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff0nyxu11bc99qd9c9ll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fff0nyxu11bc99qd9c9ll.png" alt="Flat Hash Map visualization" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a conflict occurs, the algorithm will &lt;em&gt;linearly&lt;/em&gt; search for the next position, one by one.&lt;/p&gt;

&lt;p&gt;For adding "Go", we have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;hash("Go") = 2&lt;/li&gt;
&lt;li&gt;Check slot 1016 (index 2): filled.&lt;/li&gt;
&lt;li&gt;Check slot 1024 (index 3): filled.&lt;/li&gt;
&lt;li&gt;Check slot 1032 (index 4): filled.&lt;/li&gt;
&lt;li&gt;Slot 1040 (index 5) is open, use it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's only at slot 1040 that we find an open spot; that's where the "Go" key goes.&lt;/p&gt;

&lt;p&gt;The important thing to notice is that this time, collisions are located &lt;strong&gt;near each other&lt;/strong&gt;, making them &lt;strong&gt;cache friendly&lt;/strong&gt;, which speeds things up a bit.&lt;/p&gt;
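&lt;p&gt;The probing walk above can be sketched in a few lines of Go (a toy again, using &lt;code&gt;len(key)&lt;/code&gt; as the hash):&lt;/p&gt;

```go
package main

import "fmt"

const size = 10

// hash is the article's toy function: the slot is len(key).
func hash(key string) int { return len(key) % size }

// insert probes linearly from the hashed slot until it finds an empty one
// and returns the slot where the key landed.
func insert(table []string, key string) int {
	i := hash(key)
	for table[i] != "" {
		i = (i + 1) % size // collision: try the adjacent slot
	}
	table[i] = key
	return i
}

func main() {
	table := make([]string, size)
	for _, k := range []string{"C", "JS", "PHP", "PERL"} {
		insert(table, k)
	}
	// len("Go") == 2 collides with "JS", then "PHP", then "PERL",
	// landing on the first free slot after the cluster.
	fmt.Println(insert(table, "Go")) // 5
}
```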
&lt;h3&gt;
  
  
  Steroids (SSE3)
&lt;/h3&gt;

&lt;p&gt;Streaming &lt;a href="https://en.wikipedia.org/wiki/Single_instruction,_multiple_data" rel="noopener noreferrer"&gt;SIMD&lt;/a&gt; Extensions 3 is a set of processor instructions that, combined with software techniques such as &lt;em&gt;bit manipulation&lt;/em&gt;, provides a way to read multiple memory positions in parallel with a single instruction.&lt;/p&gt;

&lt;p&gt;This means that, when probing for collisions, a Swiss Table can perform up to 16 checks at once (&lt;em&gt;although Go seems to use groups of 8&lt;/em&gt;), which is way better than checking one by one!&lt;/p&gt;
&lt;h3&gt;
  
  
  Metadata
&lt;/h3&gt;

&lt;p&gt;The Swiss Table uses a metadata byte to partially store the hashed key, enabling quick comparisons (7 bits instead of 64).&lt;/p&gt;

&lt;p&gt;With &lt;em&gt;linear probing&lt;/em&gt; (also known as &lt;em&gt;open addressing&lt;/em&gt;) + &lt;em&gt;metadata chunking&lt;/em&gt; + &lt;em&gt;SSE3&lt;/em&gt;, &lt;em&gt;comparisons&lt;/em&gt; and &lt;em&gt;inserts&lt;/em&gt; are faster and consume less memory than the previous implementation.&lt;/p&gt;
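&lt;p&gt;Here is a scalar sketch of the metadata idea (the real table scans these bytes in SIMD groups; the 7-bit fragment below is computed with FNV-1a purely for illustration):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// h2 extracts a 7-bit fragment of the full hash. A Swiss Table keeps one
// such metadata byte per slot, so most probes are a cheap byte compare
// instead of a full key compare. (Sketch only: the real layout packs these
// bytes into SIMD-scannable groups.)
func h2(key string) byte {
	h := fnv.New32a()
	h.Write([]byte(key))
	return byte(h.Sum32() % 128)
}

// lookup scans the metadata first and only compares the (potentially long)
// key itself when the 7-bit fragment matches. Returns -1 when absent.
func lookup(keys []string, meta []byte, key string) int {
	want := h2(key)
	for i, m := range meta {
		if m == want {
			if keys[i] == key {
				return i
			}
		}
	}
	return -1
}

func main() {
	keys := []string{"C", "JS", "PHP", "PERL", "Go"}
	meta := make([]byte, len(keys))
	for i, k := range keys {
		meta[i] = h2(k)
	}
	fmt.Println(lookup(keys, meta, "Go")) // 4
}
```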
&lt;h2&gt;
  
  
  1.23.4 vs 1.24
&lt;/h2&gt;

&lt;p&gt;I ran &lt;a href="https://www.bytesizego.com/blog/go-124-swiss-table-maps" rel="noopener noreferrer"&gt;the benchmark script from this article&lt;/a&gt; on my laptop.&lt;/p&gt;

&lt;p&gt;The test was:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create and populate a map with &lt;code&gt;1_000_000&lt;/code&gt; items.&lt;/li&gt;
&lt;li&gt;Perform &lt;code&gt;10_000_000&lt;/code&gt; lookups using mod indexing.&lt;/li&gt;
&lt;li&gt;Insert &lt;code&gt;1_000_000&lt;/code&gt; new entries into the map.&lt;/li&gt;
&lt;li&gt;Remove the first &lt;code&gt;1_000_000&lt;/code&gt; entries from the map.&lt;/li&gt;
&lt;/ol&gt;
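&lt;p&gt;The four steps look roughly like this in Go (a simplified sketch, not the linked script itself):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const n = 1_000_000

	// 1. Create and populate the map.
	m := make(map[int]int, n)
	for i := 0; i != n; i++ {
		m[i] = i
	}

	// 2. Lookups using mod indexing.
	start := time.Now()
	sum := 0
	for i := 0; i != 10_000_000; i++ {
		sum += m[i%n]
	}
	fmt.Println("lookups:", time.Since(start))

	// 3. Insert n new entries.
	start = time.Now()
	for i := n; i != 2*n; i++ {
		m[i] = i
	}
	fmt.Println("inserts:", time.Since(start))

	// 4. Delete the first n entries.
	start = time.Now()
	for i := 0; i != n; i++ {
		delete(m, i)
	}
	fmt.Println("deletes:", time.Since(start))
	_ = sum
}
```

&lt;p&gt;Building it once with Go 1.23 and once with Go 1.24 is enough to reproduce the comparison below.&lt;/p&gt;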
&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Operation&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Go 1.23 (ms)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Go 1.24 (ms)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Improvement (%)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lookup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;287.340375&lt;/td&gt;
&lt;td&gt;184.787167&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;35.67% faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Insertion&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;118.564333&lt;/td&gt;
&lt;td&gt;66.4095&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;43.99% faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deletion&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;39.893875&lt;/td&gt;
&lt;td&gt;61.364875&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;53.85% slower&lt;/strong&gt;&lt;br&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In &lt;a href="https://medium.com/@lordmoma/go-1-24s-swiss-tables-the-silver-bullet-or-just-a-shiny-new-gadget-8e5f7f37c2a8" rel="noopener noreferrer"&gt;Go 1.24’s Swiss Tables: The Silver Bullet or Just a Shiny New Gadget?&lt;/a&gt; article, we can also see the efficiency of Swiss Tables in Memory Usage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhquhnbo3go9jafhdp37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhquhnbo3go9jafhdp37.png" alt="Go 1.24’s Swiss Tables: The Silver Bullet or Just a Shiny New Gadget? memory comparison chart" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Problem
&lt;/h3&gt;

&lt;p&gt;Even though the Go Swiss Table uses buckets and a well-dispersed hash function (for the avalanche effect) to distribute keys evenly over the pre-allocated memory block, as the table gets full, conflicts will generate &lt;em&gt;clustered&lt;/em&gt; sections that increase search time.&lt;/p&gt;

&lt;p&gt;It's easy to see this happening with a 10-position table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Table: [nil, C, JS, PHP, PERL, nil, nil, nil, nil, nil]

Adding a new "B" key would cause:
- len(B) = 1
- position 1 is occupied by "C", look next
- position 2 is occupied by "JS", look next
- position 3 is occupied by "PHP", look next
- position 4 is occupied by "PERL", look next
- position 5 is free, add "B"

Table: [nil, C, JS, PHP, PERL, B, nil, nil, nil, nil]
Index:   0,  1,  2,  3,   4,   5,   6,   7,   8,   9  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, any &lt;em&gt;hash function&lt;/em&gt; call that returned 1, 2, 3, 4 or 5 would require probing through the cluster, admittedly a piece of cake for &lt;em&gt;SSE3&lt;/em&gt;, but what if there was a &lt;strong&gt;better way?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Elastic Hashing
&lt;/h2&gt;

&lt;p&gt;In January 2025, a new paper on &lt;em&gt;open addressing&lt;/em&gt; was published suggesting a new approach, theoretically better than &lt;em&gt;linear probing&lt;/em&gt;, called &lt;strong&gt;Elastic Hashing&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Feel free to read the &lt;a href="https://www.quantamagazine.org/undergraduate-upends-a-40-year-old-data-science-conjecture-20250210/" rel="noopener noreferrer"&gt;news article&lt;/a&gt; or &lt;a href="https://arxiv.org/abs/2501.02305" rel="noopener noreferrer"&gt;the paper&lt;/a&gt; to get the full context.&lt;/p&gt;

&lt;p&gt;It claims to beat a 40-year-old conjecture (&lt;a href="https://en.wikipedia.org/wiki/Yau%27s_conjecture" rel="noopener noreferrer"&gt;Yao's conjecture&lt;/a&gt;), which held linear probing to be a simple, near-optimally efficient strategy that doesn't degrade catastrophically as load increases.&lt;/p&gt;

&lt;p&gt;Krapivin discovered the new strategy while being unaware of Yao's conjecture, which indicates we should challenge &lt;em&gt;known constraints&lt;/em&gt; more often.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is it?
&lt;/h3&gt;

&lt;p&gt;Basically, instead of checking positions one by one, or 16 by 16 (as our beloved &lt;em&gt;Swiss Tables&lt;/em&gt; do), the paper introduces a new two-dimensional strategy to calculate the insert address using &lt;em&gt;virtual overflow buckets&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The idea is that there is a new function, &lt;strong&gt;φ(i, j)&lt;/strong&gt;, that returns the position of a node, where:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;i = Primary Bucket (Hash Result)&lt;/li&gt;
&lt;li&gt;j = Overflow Count&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, using our previous hash function, both keys "JS" and "Go" would return 2 as their "Primary Bucket".&lt;/p&gt;

&lt;p&gt;The insertion order would determine if the key would be placed at position φ(2, 1) = 2 or maybe φ(2, 2) = 7.&lt;/p&gt;
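&lt;p&gt;As a toy illustration only (the paper's real &lt;strong&gt;φ&lt;/strong&gt; is considerably more involved), a hypothetical probe function could jump quadratically with the overflow count instead of stepping one slot at a time:&lt;/p&gt;

```go
package main

import "fmt"

const size = 10

// phi is a HYPOTHETICAL probe-sequence function for illustration only;
// it is not the elastic-hashing φ from the paper. The j-th probe for
// primary bucket i jumps quadratically instead of stepping slot by slot.
func phi(i, j int) int {
	return (i + j*j) % size
}

func main() {
	// Both "JS" and "Go" hash to primary bucket 2 (len == 2).
	// Their final slots depend on insertion order via the overflow count j.
	fmt.Println(phi(2, 0)) // first key at bucket 2 stays there: 2
	fmt.Println(phi(2, 1)) // second key jumps to 3
	fmt.Println(phi(2, 2)) // third key jumps to 6
}
```

&lt;p&gt;The point is the shape of the idea: the sequence of probe positions is a function of the primary bucket &lt;em&gt;and&lt;/em&gt; the overflow count, not a fixed linear walk.&lt;/p&gt;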

&lt;h3&gt;
  
  
  Magic
&lt;/h3&gt;

&lt;p&gt;The magic lies in the &lt;strong&gt;φ&lt;/strong&gt; function, which is able to &lt;em&gt;virtually create overflow buckets&lt;/em&gt; for collisions. It outperforms &lt;em&gt;linear probing&lt;/em&gt;'s complexity in the worst and average cases: with these &lt;em&gt;jumps&lt;/em&gt; or &lt;em&gt;wormholes&lt;/em&gt;, it probes fewer addresses, leading to better &lt;em&gt;theoretical&lt;/em&gt; &lt;em&gt;insert&lt;/em&gt; performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymgt0e322w2o4ahur9fr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymgt0e322w2o4ahur9fr.png" alt="Jumps" width="800" height="794"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Will &lt;strong&gt;Elastic Hashing&lt;/strong&gt; be applied to Swiss Tables? &lt;br&gt;
Maybe. I hope so. Time will tell.&lt;/p&gt;

&lt;p&gt;I still have open questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Will elastic hashing prove itself faster than linear probing + SSE3?&lt;/li&gt;
&lt;li&gt;Can Elastic Hashing benefit from the parallel reads of SIMD?&lt;/li&gt;
&lt;li&gt;Which language will be the first to implement this new algorithm?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you happen to come across any of these answers out there, leave a comment!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It's nice to experience these latest improvements to Hash Table efficiency as they happen, in real time.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;one-month-old&lt;/em&gt; paper improving the complexity of one aspect of a core data structure is great news! &lt;/p&gt;

&lt;p&gt;It shows that there are no boundaries for performance improvements, not even decades-old rules are safe; no matter where you are, &lt;strong&gt;there is always room for improvement&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Cheers.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.bytesizego.com/blog/go-124-swiss-table-maps" rel="noopener noreferrer"&gt;Maps are faster in Go 1.24 - Matt Boyle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/@lordmoma/go-1-24s-swiss-tables-the-silver-bullet-or-just-a-shiny-new-gadget-8e5f7f37c2a8" rel="noopener noreferrer"&gt;# Go 1.24’s Swiss Tables: The Silver Bullet or Just a Shiny New Gadget? - David Lee&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.quantamagazine.org/undergraduate-upends-a-40-year-old-data-science-conjecture-20250210/" rel="noopener noreferrer"&gt;Undergraduate upends a 40 year old data science conjecture - Quanta Magazine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://arxiv.org/abs/2501.02305" rel="noopener noreferrer"&gt;Optimal Bounds for Open Addressing Without Reordering - Martin Farach-Colton, Andrew Kapivin, William Kuszmaul&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>go</category>
      <category>softwareengineering</category>
      <category>programming</category>
      <category>news</category>
    </item>
    <item>
      <title>Benchmarking DeepSeek R1 on a Developer’s MacBook</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Tue, 04 Feb 2025 20:00:42 +0000</pubDate>
      <link>https://dev.to/ocodista/deepseek-r1-7bs-performance-on-a-developers-macbook-3mg2</link>
      <guid>https://dev.to/ocodista/deepseek-r1-7bs-performance-on-a-developers-macbook-3mg2</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Introduction

&lt;ul&gt;
&lt;li&gt;What's the point?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

The Prompt

&lt;ul&gt;
&lt;li&gt;What is Time?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The Answer&lt;/li&gt;

&lt;li&gt;

How were the metrics gathered?

&lt;ul&gt;
&lt;li&gt;Sequence Diagram&lt;/li&gt;
&lt;li&gt;Process Monitor&lt;/li&gt;
&lt;li&gt;GPU Monitor&lt;/li&gt;
&lt;li&gt;Requests Metrics&lt;/li&gt;
&lt;li&gt;Hardware Specification&lt;/li&gt;
&lt;li&gt;Tools&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Benchmark Results

&lt;ul&gt;
&lt;li&gt;Waiting Time (TTFB)&lt;/li&gt;
&lt;li&gt;
Velocity

&lt;ul&gt;
&lt;li&gt;Comparing different token/s speed&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Acceptable Thresholds&lt;/li&gt;

&lt;li&gt;Throughput + Wait Time&lt;/li&gt;

&lt;li&gt;Duration&lt;/li&gt;

&lt;li&gt;Combined Metrics (Tokens/s x Duration x Wait Time)&lt;/li&gt;

&lt;li&gt;GPU Usage&lt;/li&gt;

&lt;li&gt;RAM/CPU/Threads Usage&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

Summarized Results

&lt;ul&gt;
&lt;li&gt;How many tokens/s can I get running DeepSeek R1 Qwen 7B locally with ollama?&lt;/li&gt;
&lt;li&gt;How many parallel requests can I serve with reasonable throughput?&lt;/li&gt;
&lt;li&gt;What is a reasonable throughput?&lt;/li&gt;
&lt;li&gt;How does the number of concurrent requests impact the performance?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hey there,&lt;/p&gt;

&lt;p&gt;With the new hype of AI (Jan 2025) over DeepSeek's high-quality open-source models, an urge to explore self-hosted LLM models infected my mind. &lt;/p&gt;

&lt;p&gt;Therefore, I decided to build a &lt;em&gt;stress test&lt;/em&gt; benchmarking tool with Go (Channels 💙), fire it against Ollama &amp;amp; DeepSeek, monitor a bunch of metrics, and share the results with you.&lt;/p&gt;

&lt;p&gt;This post analyzes the throughput capacity of the &lt;a href="https://huggingface.co/Qwen/Qwen2.5-Math-7B" rel="noopener noreferrer"&gt;DeepSeek-R1-Distill-Qwen-7B&lt;/a&gt; model running on &lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;, on my personal dev MacBook, an M2 Pro with 16GB RAM and a 19-core GPU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9m3zhk37vyqqa5bh9ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9m3zhk37vyqqa5bh9ic.png" alt="A MacBook M2 Pro 16GB Ram full of stickers on a wood table" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Oh, the project is open source and can be found at &lt;a href="https://github.com/ocodista/benchmark-deepseek-r1-7b/tree/main" rel="noopener noreferrer"&gt;ocodista/benchmark-deepseek-r1-7b&lt;/a&gt; on GitHub (drop a ⭐ if you think this type of content is useful 😁✌️).&lt;/p&gt;

&lt;h3&gt;
  
  
  What's the point?
&lt;/h3&gt;

&lt;p&gt;I wanted to see how many parallel requests my M2 could handle at a decent &lt;em&gt;velocity&lt;/em&gt; and experiment with Go + Cursor + Claude Sonnet 3.5. &lt;/p&gt;

&lt;p&gt;It was a great experience and although the majority of the &lt;strong&gt;code&lt;/strong&gt; was written by AI, none of the docs (or this text) was. &lt;/p&gt;

&lt;p&gt;You can expect this experiment to answer the following questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many tokens/s can I get running DeepSeek R1 Qwen 7B locally with &lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;ollama&lt;/a&gt;?&lt;/li&gt;
&lt;li&gt;How many parallel requests can I serve with reasonable throughput? 

&lt;ul&gt;
&lt;li&gt;What is a reasonable throughput?&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;How does the number of concurrent requests impact the throughput?&lt;/li&gt;

&lt;li&gt;How much power did my GPU use while running this study?&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Now let's talk about the test.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Prompt
&lt;/h2&gt;

&lt;p&gt;Inspired by a recent book I've read (&lt;em&gt;A Universe From Nothing&lt;/em&gt;), I selected the following question as the prompt for each request:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What is the philosophical definition of time?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a profound question that requires some reasoning. It's useful for analyzing the &lt;em&gt;Chain of Thought&lt;/em&gt; process of DeepSeek R1, as a core feature of the model is returning the answer in 2 steps: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;think&amp;gt;{THINK}&amp;lt;/think&amp;gt;&lt;/code&gt; and &lt;code&gt;{RESPONSE}&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Time?
&lt;/h3&gt;

&lt;p&gt;Time has &lt;em&gt;abstract (the final end, the first beginning)&lt;/em&gt; and &lt;em&gt;structured (seconds, minutes, hours)&lt;/em&gt;  definitions.&lt;/p&gt;

&lt;p&gt;It can be used to express a relation between unrelated events, to represent something we feel (the passage of time) when we access our memories, and to wonder about the big mysteries of the universe: Where do we come from? Where are we going?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyi8agnviv8wyueamvc12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyi8agnviv8wyueamvc12.png" alt="A clock similar to Raul Seixas wondering about time limits" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I am the &lt;em&gt;beginning&lt;/em&gt;, the &lt;em&gt;end&lt;/em&gt; and &lt;em&gt;the middle&lt;/em&gt;.&lt;br&gt;
— Raul Seixas &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The selected structured representation of time was &lt;em&gt;min:seconds&lt;/em&gt; and we'll analyze &lt;em&gt;Waiting Time&lt;/em&gt; and &lt;em&gt;Duration Time&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Answer
&lt;/h2&gt;

&lt;p&gt;&lt;a href="//./what-is-the-philosophical-definition-of-time.md"&gt;Here&lt;/a&gt; you can see one of the responses generated by the 7B model during one of the tests.&lt;/p&gt;

&lt;p&gt;My opinion (as a Software Engineer, not a philosopher nor a physicist) is that it's pretty good.&lt;/p&gt;

&lt;p&gt;It's marvelous to read the &lt;em&gt;&amp;lt;think&amp;gt;&lt;/em&gt; section and observe how the model groups multiple subjects related to the question before providing a final answer.&lt;/p&gt;

&lt;p&gt;I'm not a Data Science expert, so I can't explain the under-the-hood workings, but it &lt;strong&gt;appears&lt;/strong&gt; to reuse this first exploration prompt to re-prompt the model.&lt;/p&gt;

&lt;p&gt;This is revolutionary, as it's the first time I've seen this two-step answer strategy built inside the model.&lt;/p&gt;

&lt;p&gt;The strategy itself isn't necessarily new, as I've manually used it before with a Custom GPT called &lt;a href="https://chatgpt.com/g/g-4uKamI8cT-prompt-optimizer"&gt;Prompt Optimizer&lt;/a&gt;, a kind of pre-prompting to get better final prompts. It's especially helpful when generating images from text.&lt;/p&gt;

&lt;p&gt;Anyway, this is pretty cool! &lt;/p&gt;

&lt;p&gt;The difference in quality between DeepSeek R1 (full model) and ChatGPT for small prompts is noticeable. &lt;/p&gt;

&lt;p&gt;This automatic &lt;em&gt;context-universe-expansion&lt;/em&gt; also eliminates the growing need to be good at &lt;em&gt;Prompt Engineering&lt;/em&gt;. It now comes for free, inside the model.&lt;/p&gt;

&lt;p&gt;So, returning to the test 😁&lt;/p&gt;

&lt;p&gt;If you don't care about how the data was collected, you can time-travel to the Benchmark Results and watch some good-looking charts.&lt;/p&gt;

&lt;h2&gt;
  
  
  How were the metrics gathered?
&lt;/h2&gt;

&lt;p&gt;The idea was to execute multiple rounds of parallel HTTP requests to the Ollama Web Server endpoint (1, 2, 4, 8, 16, 19, 32, 38, 57, 64, 76, 95, 128 and 256).&lt;/p&gt;

&lt;p&gt;Since my GPU has 19 cores, I selected &lt;strong&gt;19&lt;/strong&gt; as one of the rounds (and a few other multiples) to ensure each GPU Core is busy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjdmxkfhyq2ilkuhlxj3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwjdmxkfhyq2ilkuhlxj3.png" alt="Mean Gopher" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Sequence Diagram
&lt;/h3&gt;

&lt;p&gt;The test goes in cycles, each cycle containing a different set of concurrent requests. &lt;/p&gt;

&lt;p&gt;Each cycle waits 10s after finishing before the next one starts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2hmia5da78oji7ua165.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2hmia5da78oji7ua165.png" alt="Sequence Diagram showcasing how the test works from Client, Ollama and metrics gathering" width="800" height="918"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Process Monitor
&lt;/h3&gt;

&lt;p&gt;It uses &lt;code&gt;pgrep ollama&lt;/code&gt; to find all PIDs involved in serving the model requests, then monitors, stores and displays:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Thread Count&lt;/li&gt;
&lt;li&gt;File Descriptors&lt;/li&gt;
&lt;li&gt;RAM Usage&lt;/li&gt;
&lt;li&gt;CPU Usage&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  GPU Monitor
&lt;/h3&gt;

&lt;p&gt;It uses the awesome tool &lt;a href="https://www.unix.com/man-page/osx/1/powermetrics/" rel="noopener noreferrer"&gt;powermetrics&lt;/a&gt; to calculate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Power (W)&lt;/li&gt;
&lt;li&gt;Frequency (MHz)&lt;/li&gt;
&lt;li&gt;Usage (%)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Over 1s intervals.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requests Metrics
&lt;/h3&gt;

&lt;p&gt;For each request in a cycle, the following properties were analyzed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Throughput (Tokens/s)&lt;/li&gt;
&lt;li&gt;TTFB (Time To First Byte)&lt;/li&gt;
&lt;li&gt;WaitingTime &lt;/li&gt;
&lt;li&gt;TokenCount&lt;/li&gt;
&lt;li&gt;ResponseDuration&lt;/li&gt;
&lt;li&gt;TotalDuration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Hardware Specification
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Specification&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Device&lt;/td&gt;
&lt;td&gt;MacBook Pro 16-inch (M2, 2023)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;12-core ARM-Based Processor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;16GB RAM&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPU&lt;/td&gt;
&lt;td&gt;Integrated M2 Series GPU (19 Cores)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OS&lt;/td&gt;
&lt;td&gt;macOS Sonoma&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Tools
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Ollama
&lt;/h4&gt;

&lt;p&gt;A local LLM inference framework CLI that's very easy to use.&lt;/p&gt;

&lt;p&gt;For this benchmark, the selected model was &lt;code&gt;deepseek-r1:7b&lt;/code&gt; but it could've been any other, as &lt;code&gt;ollama&lt;/code&gt; makes it ridiculously simple to run LLMs locally.&lt;/p&gt;

&lt;h4&gt;
  
  
  Golang
&lt;/h4&gt;

&lt;p&gt;The chosen programming language, used to create the benchmarking client and monitoring tools.&lt;/p&gt;

&lt;p&gt;Why? Well, Go is an outstanding tool for parallel computing. &lt;/p&gt;

&lt;p&gt;I'm a JS developer (not a Golang expert yet) but I can recognize a great parallel/concurrent tool when I see one. &lt;/p&gt;

&lt;p&gt;Go Scheduler is indeed awesome.&lt;/p&gt;

&lt;h4&gt;
  
  
  Python
&lt;/h4&gt;

&lt;p&gt;There is nothing better than Python to analyze a bunch of CSV files and generate beautiful charts.&lt;/p&gt;

&lt;p&gt;For instructions on how to run this benchmark, please check &lt;a href="https://github.com/ocodista/benchmark-deepseek-r1-7b/blob/main/how-to-run.md" rel="noopener noreferrer"&gt;how-to-run.md&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark Results
&lt;/h2&gt;

&lt;p&gt;As we are using HTTP requests as the traffic method of this experiment, I decided to use &lt;a href="https://web.dev/articles/ttfb" rel="noopener noreferrer"&gt;Time To First Byte&lt;/a&gt; to represent the &lt;em&gt;Waiting Time&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Waiting Time (TTFB)
&lt;/h3&gt;

&lt;p&gt;The &lt;strong&gt;waiting time (TTFB)&lt;/strong&gt; is the delay between hitting the &lt;strong&gt;Enter&lt;/strong&gt; key and seeing the first character on your screen.&lt;/p&gt;

&lt;p&gt;Since in this experiment the &lt;code&gt;client&lt;/code&gt; and the &lt;code&gt;server&lt;/code&gt; run on the same network, hardware and computer, we are actually measuring the time it takes for the &lt;strong&gt;Golang&lt;/strong&gt; process to communicate with the &lt;strong&gt;Ollama&lt;/strong&gt; process, which runs the &lt;strong&gt;DeepSeek R1&lt;/strong&gt; model, generates the responses and streams them back to the &lt;strong&gt;Golang&lt;/strong&gt; process.&lt;br&gt;
With that said, let's contemplate some colored charts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbzq4ifnr1u1nxqodx87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbzq4ifnr1u1nxqodx87.png" alt="Time To First Byte 1 - 256 Requests" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok, looks interesting: it shows that somewhere near 25 parallel requests the Wait Time spikes aggressively and then continues to grow at a stable rate, reaching an unbelievable ~53 minutes at p99 for 256 parallel requests.&lt;/p&gt;

&lt;p&gt;Let's zoom in:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvq6mqs13v1x0rf51os1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqvq6mqs13v1x0rf51os1.png" alt="Time To First Byte 1 - 32 Requests" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that from cycles 1-19 the TTFB stays close to 0, with averages ranging from &lt;code&gt;0.36s&lt;/code&gt; to &lt;code&gt;0.83s&lt;/code&gt;; that's pretty much &lt;em&gt;instant&lt;/em&gt; for human perception.&lt;/p&gt;

&lt;p&gt;It makes sense if you take the available number of GPU cores into consideration, which you should; otherwise the &lt;code&gt;OLLAMA_NUM_PARALLEL&lt;/code&gt; flag keeps its default value (&lt;strong&gt;4&lt;/strong&gt;) and your results will be poisoned (trust me, I've been there). &lt;/p&gt;
&lt;h4&gt;
  
  
  TTFB Table Data
&lt;/h4&gt;

&lt;p&gt;
  &lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parallel&lt;/th&gt;
&lt;th&gt;Avg (s)&lt;/th&gt;
&lt;th&gt;P95 (s)&lt;/th&gt;
&lt;th&gt;P99 (s)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0.83&lt;/td&gt;
&lt;td&gt;0.93&lt;/td&gt;
&lt;td&gt;0.93&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;0.36&lt;/td&gt;
&lt;td&gt;0.44&lt;/td&gt;
&lt;td&gt;0.44&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;0.38&lt;/td&gt;
&lt;td&gt;0.42&lt;/td&gt;
&lt;td&gt;0.42&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;0.47&lt;/td&gt;
&lt;td&gt;0.51&lt;/td&gt;
&lt;td&gt;0.51&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;0.64&lt;/td&gt;
&lt;td&gt;0.67&lt;/td&gt;
&lt;td&gt;0.67&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;0.62&lt;/td&gt;
&lt;td&gt;0.64&lt;/td&gt;
&lt;td&gt;0.64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;84.05&lt;/td&gt;
&lt;td&gt;227.13&lt;/td&gt;
&lt;td&gt;234.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;321.96&lt;/td&gt;
&lt;td&gt;761.89&lt;/td&gt;
&lt;td&gt;790.82&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;858.27&lt;/td&gt;
&lt;td&gt;1839.99&lt;/td&gt;
&lt;td&gt;1915.72&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;1247.02&lt;/td&gt;
&lt;td&gt;3078.62&lt;/td&gt;
&lt;td&gt;3211.85&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;/p&gt;
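&lt;p&gt;For reference, the Avg/P95/P99 columns can be computed like this. The article doesn't state which percentile convention was used, so the nearest-rank method below is an assumption:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the nearest-rank percentile (p in [0,100]) of xs.
func percentile(xs []float64, p float64) float64 {
	s := append([]float64(nil), xs...) // copy so the caller's slice stays unsorted
	sort.Float64s(s)
	rank := int(math.Ceil(p/100*float64(len(s)))) - 1
	if rank < 0 {
		rank = 0
	}
	return s[rank]
}

func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

func main() {
	// hypothetical TTFB samples in seconds
	ttfb := []float64{0.31, 0.36, 0.38, 0.40, 0.44}
	fmt.Printf("avg=%.2f p95=%.2f p99=%.2f\n", mean(ttfb), percentile(ttfb, 95), percentile(ttfb, 99))
}
```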

&lt;h3&gt;
  
  
  Velocity
&lt;/h3&gt;

&lt;p&gt;This is the most important metric of the experiment.&lt;/p&gt;

&lt;p&gt;The following chart shows that by running DeepSeek R1 Qwen 7B, self-hosted with Ollama on a MacBook Pro M2 with 16GB RAM and 19 GPU cores, we can achieve a maximum of &lt;strong&gt;55 tokens/s&lt;/strong&gt; when making a single request.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipnke6xn9x44bphkrbqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipnke6xn9x44bphkrbqx.png" alt="Throughput Average" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When utilizing the full potential of the GPU with &lt;strong&gt;19&lt;/strong&gt; parallel requests though, the average throughput dropped to mere &lt;strong&gt;9.1 tokens/s&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It kept dropping until the lowest value at &lt;strong&gt;256&lt;/strong&gt; requests with &lt;strong&gt;6.3&lt;/strong&gt; tokens/s.&lt;/p&gt;

&lt;h4&gt;
  
  
  Comparing different token/s speed
&lt;/h4&gt;

&lt;p&gt;It's hard to mentally visualize what 55, 9.1 or 6.3 tokens/s actually means, so I recorded a couple of GIFs to help:&lt;/p&gt;

&lt;h5&gt;
  
  
  55 tokens per second
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z16sbe90p7ymclo2398.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z16sbe90p7ymclo2398.gif" alt="55 tokens per second" width="861" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  30.7 tokens per second
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtzjmmpsanrfbfim04p6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtzjmmpsanrfbfim04p6.gif" alt="30.7 tokens per second" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  9.1 tokens per second
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq1y2lgru9y2gzhybzsn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq1y2lgru9y2gzhybzsn.gif" alt="9.1 tokens per second" width="590" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For me, the ideal speed of a &lt;strong&gt;fast&lt;/strong&gt; application would be around 100 tokens/s, and the slow-but-usable floor limit would be around 20 tokens/s. I mean, faster speed is never enough. It's like internet download/upload or video FPS (Frames Per Second) when rendering games, the higher the better.&lt;/p&gt;

&lt;h5&gt;
  
  
  100 tokens/s
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5nim7rmp68r4oyz2azc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5nim7rmp68r4oyz2azc.gif" alt="100 tokens per second" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Acceptable Thresholds
&lt;/h3&gt;

&lt;p&gt;What should be the acceptable thresholds for a usable real-world application built on DeepSeek + Ollama? &lt;/p&gt;

&lt;p&gt;How long does an average user wait on a loading application before quitting?&lt;/p&gt;

&lt;p&gt;What is the slowest acceptable speed to read a text without getting bored?&lt;/p&gt;

&lt;h4&gt;
  
  
  Maximum Acceptable Wait Time
&lt;/h4&gt;

&lt;p&gt;I'll choose &lt;strong&gt;10s&lt;/strong&gt; as an arbitrary value for the maximum acceptable wait time.&lt;/p&gt;

&lt;p&gt;In reality, users are more impatient and the value may be much lower.&lt;/p&gt;

&lt;p&gt;Looking at the TTFB table data, 19 is the last cycle that meets our 10s threshold, with a 0.62s average wait. At 32 parallel requests, the average Waiting Time jumps to 84s (1 minute and 24 seconds).&lt;/p&gt;

&lt;p&gt;Unless you're using DeepSeek for background tasks with no human interaction, a wait time of 1m24s is &lt;em&gt;unacceptable&lt;/em&gt;. Based on the 10s threshold, the maximum number of parallel requests for a usable app is &lt;strong&gt;19&lt;/strong&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Minimum Acceptable Response Speed
&lt;/h4&gt;

&lt;p&gt;I believe it should be ~19.9 tokens/s.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn060dw0optqrtj50t10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn060dw0optqrtj50t10.png" alt="19.9 tokens per second" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This metric is totally arbitrary, chosen based on how I felt watching the speed GIFs above.&lt;/p&gt;

&lt;p&gt;Anything less than 20 tokens/s feels slightly annoying. &lt;/p&gt;

&lt;p&gt;With this new limit, the maximum parallel requests considering acceptable response time is &lt;strong&gt;4&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Ok, let's take a look at the combined metrics now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Throughput + Wait Time
&lt;/h3&gt;

&lt;p&gt;Speed is good, but in 2025, Time To First Byte must be minimal in order for a product to be usable.&lt;/p&gt;

&lt;p&gt;No one likes to click a button and wait 20 seconds or 2 minutes for something to happen.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5fbhrcnwbdizbd2sioe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5fbhrcnwbdizbd2sioe.png" alt="Throughput + Wait Time vs Parallel Requests" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Throughput + Wait Time Table Data
&lt;/h4&gt;

&lt;p&gt;
  &lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parallel&lt;/th&gt;
&lt;th&gt;Avg t/s&lt;/th&gt;
&lt;th&gt;P95 t/s&lt;/th&gt;
&lt;th&gt;P99 t/s&lt;/th&gt;
&lt;th&gt;Wait(s)&lt;/th&gt;
&lt;th&gt;Errors%&lt;/th&gt;
&lt;th&gt;Duration&lt;/th&gt;
&lt;th&gt;P99 Duration&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;td&gt;53.1&lt;/td&gt;
&lt;td&gt;53.1&lt;/td&gt;
&lt;td&gt;53.1&lt;/td&gt;
&lt;td&gt;0.94&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;00:39.33&lt;/td&gt;
&lt;td&gt;00:39.33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2.0&lt;/td&gt;
&lt;td&gt;30.7&lt;/td&gt;
&lt;td&gt;30.9&lt;/td&gt;
&lt;td&gt;30.9&lt;/td&gt;
&lt;td&gt;0.36&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;01:12.09&lt;/td&gt;
&lt;td&gt;01:12.27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4.0&lt;/td&gt;
&lt;td&gt;19.9&lt;/td&gt;
&lt;td&gt;20.4&lt;/td&gt;
&lt;td&gt;20.5&lt;/td&gt;
&lt;td&gt;1.54&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;01:34.31&lt;/td&gt;
&lt;td&gt;01:47.17&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;19.2&lt;/td&gt;
&lt;td&gt;20.6&lt;/td&gt;
&lt;td&gt;20.7&lt;/td&gt;
&lt;td&gt;0.30&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;01:39.01&lt;/td&gt;
&lt;td&gt;01:59.26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;td&gt;12.9&lt;/td&gt;
&lt;td&gt;13.5&lt;/td&gt;
&lt;td&gt;13.7&lt;/td&gt;
&lt;td&gt;0.47&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;02:32.12&lt;/td&gt;
&lt;td&gt;02:54.62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16.0&lt;/td&gt;
&lt;td&gt;9.8&lt;/td&gt;
&lt;td&gt;10.1&lt;/td&gt;
&lt;td&gt;10.3&lt;/td&gt;
&lt;td&gt;0.64&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;03:26.81&lt;/td&gt;
&lt;td&gt;04:41.07&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19.0&lt;/td&gt;
&lt;td&gt;9.1&lt;/td&gt;
&lt;td&gt;9.5&lt;/td&gt;
&lt;td&gt;9.6&lt;/td&gt;
&lt;td&gt;0.62&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;03:43.55&lt;/td&gt;
&lt;td&gt;05:23.83&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32.0&lt;/td&gt;
&lt;td&gt;8.0&lt;/td&gt;
&lt;td&gt;9.3&lt;/td&gt;
&lt;td&gt;9.3&lt;/td&gt;
&lt;td&gt;84.05&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;04:08.64&lt;/td&gt;
&lt;td&gt;06:15.66&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64.0&lt;/td&gt;
&lt;td&gt;7.1&lt;/td&gt;
&lt;td&gt;9.0&lt;/td&gt;
&lt;td&gt;9.3&lt;/td&gt;
&lt;td&gt;321.96&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;04:39.51&lt;/td&gt;
&lt;td&gt;07:30.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128.0&lt;/td&gt;
&lt;td&gt;6.5&lt;/td&gt;
&lt;td&gt;8.9&lt;/td&gt;
&lt;td&gt;9.2&lt;/td&gt;
&lt;td&gt;858.27&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;05:06.60&lt;/td&gt;
&lt;td&gt;07:28.05&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256.0&lt;/td&gt;
&lt;td&gt;6.3&lt;/td&gt;
&lt;td&gt;8.8&lt;/td&gt;
&lt;td&gt;9.4&lt;/td&gt;
&lt;td&gt;1534.79&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;04:12.89&lt;/td&gt;
&lt;td&gt;07:52.97&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;/p&gt;

&lt;h3&gt;
  
  
  Duration
&lt;/h3&gt;

&lt;p&gt;The first cycle lasted less than a minute (39 seconds) while the last cycle took 8 min and 51 seconds to complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy7ss31xrrzhm0nt0je2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy7ss31xrrzhm0nt0je2.png" alt="Duration for the high parallel requests" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Requests Average Duration
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frujoyfsbm4dfque61xei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frujoyfsbm4dfque61xei.png" alt="Duration for the less parallel requests" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the initial cycles, the average duration of each request grows slowly but noticeably. While the single request took only 39s to complete, requests that used all available cores (cycle 19) took, on average, 3 minutes and 43 seconds to complete.&lt;/p&gt;

&lt;p&gt;
  &lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parallel&lt;/th&gt;
&lt;th&gt;Min&lt;/th&gt;
&lt;th&gt;Avg&lt;/th&gt;
&lt;th&gt;P99&lt;/th&gt;
&lt;th&gt;Max&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;00:39.33&lt;/td&gt;
&lt;td&gt;00:39.33&lt;/td&gt;
&lt;td&gt;00:39.33&lt;/td&gt;
&lt;td&gt;00:39.33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;01:11.92&lt;/td&gt;
&lt;td&gt;01:12.09&lt;/td&gt;
&lt;td&gt;01:12.27&lt;/td&gt;
&lt;td&gt;01:12.27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;01:19.14&lt;/td&gt;
&lt;td&gt;01:34.31&lt;/td&gt;
&lt;td&gt;01:47.17&lt;/td&gt;
&lt;td&gt;01:47.17&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;01:24.99&lt;/td&gt;
&lt;td&gt;01:39.01&lt;/td&gt;
&lt;td&gt;01:59.26&lt;/td&gt;
&lt;td&gt;01:59.26&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;02:06.97&lt;/td&gt;
&lt;td&gt;02:32.12&lt;/td&gt;
&lt;td&gt;02:54.62&lt;/td&gt;
&lt;td&gt;02:54.62&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;02:01.12&lt;/td&gt;
&lt;td&gt;03:26.81&lt;/td&gt;
&lt;td&gt;04:41.07&lt;/td&gt;
&lt;td&gt;04:41.07&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;02:14.66&lt;/td&gt;
&lt;td&gt;03:43.55&lt;/td&gt;
&lt;td&gt;05:23.83&lt;/td&gt;
&lt;td&gt;05:23.83&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;02:41.05&lt;/td&gt;
&lt;td&gt;04:08.64&lt;/td&gt;
&lt;td&gt;06:15.66&lt;/td&gt;
&lt;td&gt;06:15.66&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;02:17.05&lt;/td&gt;
&lt;td&gt;04:39.51&lt;/td&gt;
&lt;td&gt;07:30.13&lt;/td&gt;
&lt;td&gt;07:30.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;02:35.36&lt;/td&gt;
&lt;td&gt;05:06.60&lt;/td&gt;
&lt;td&gt;07:28.05&lt;/td&gt;
&lt;td&gt;08:38.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;00:00.00&lt;/td&gt;
&lt;td&gt;04:12.89&lt;/td&gt;
&lt;td&gt;07:52.97&lt;/td&gt;
&lt;td&gt;08:51.75&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;/p&gt;

&lt;h3&gt;
  
  
  Combined Metrics (Tokens/s x Duration x Wait Time)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnxpurwtgt8xti8tgbkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnxpurwtgt8xti8tgbkq.png" alt="Combined Metrics" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait time grows linearly after 19 parallel requests, making it unusable for interactive applications. &lt;/p&gt;

&lt;p&gt;It also shows that throughput drops sharply in the early cycles before stabilizing at a much lower level.&lt;/p&gt;

&lt;p&gt;Let's zoom in on the smaller cycles:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qw9rxd76ju4udde99ij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2qw9rxd76ju4udde99ij.png" alt="Combined Metrics 1-32" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the chart is different: the wait time is stable at &amp;lt;1s until cycle 19 and it's clear to see the connection between throughput and p99 request duration.&lt;/p&gt;

&lt;h3&gt;
  
  
  GPU Usage
&lt;/h3&gt;

&lt;p&gt;Thanks to powermetrics, it's possible to get GPU usage metrics on macOS!&lt;/p&gt;

&lt;p&gt;The MacBook M2's 19-core GPU proved to be pretty constant, with only small variations: GPU frequency stable at 1397MHz and power usage stable at ~20.3W.&lt;/p&gt;

&lt;p&gt;The concurrency level didn't seem to affect the GPU metrics.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frossby0s3ro5s8qpcyhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frossby0s3ro5s8qpcyhe.png" alt="GPU Usage" width="800" height="538"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo powermetrics --samplers gpu_power -n1 -i1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  RAM/CPU/Threads Usage
&lt;/h3&gt;

&lt;p&gt;To analyze how many system resources Ollama + DeepSeek were consuming, I tracked the ollama processes (with pgrep, lsof and ps) and monitored the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CPU Usage (%)&lt;/li&gt;
&lt;li&gt;Memory Usage (%)&lt;/li&gt;
&lt;li&gt;Resident Memory (MB)&lt;/li&gt;
&lt;li&gt;Thread Count (int)&lt;/li&gt;
&lt;li&gt;File Descriptors (int)&lt;/li&gt;
&lt;li&gt;Virtual Memory Size (MB)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ok, when you have &lt;em&gt;ollama serve&lt;/em&gt; running idle, it uses a single process.&lt;/p&gt;

&lt;p&gt;Whether there is 1 or 256 active requests, ollama uses 2 processes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw3c9f6g1hs1ika19x58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw3c9f6g1hs1ika19x58.png" alt="Process Monitor" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By analyzing the chart, we can see that the two processes behave differently from each other.&lt;/p&gt;

&lt;p&gt;While one of them has high memory/cpu usage and low thread count/open file descriptors, the other one has the opposite: low cpu/memory usage with linear growing open FDs and high thread count.&lt;/p&gt;

&lt;p&gt;If I had to guess, I would say that the green process may be responsible for the Web Server while the red one for the DeepSeek R1 LLM generations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Web Server Process
&lt;/h4&gt;

&lt;p&gt;While the open File Descriptors grow linearly as the number of concurrent requests grows, the Thread count has a steeper pattern upward.&lt;/p&gt;

&lt;p&gt;Notably, the CPU and memory usage of this process remain consistently low: 0.2-0.6% CPU and 82.6-114.1MB of RAM.&lt;/p&gt;

&lt;p&gt;
  &lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concurrency&lt;/th&gt;
&lt;th&gt;Avg CPU%&lt;/th&gt;
&lt;th&gt;Max CPU%&lt;/th&gt;
&lt;th&gt;Avg Mem%&lt;/th&gt;
&lt;th&gt;Max Mem%&lt;/th&gt;
&lt;th&gt;Avg Threads&lt;/th&gt;
&lt;th&gt;Avg FDs&lt;/th&gt;
&lt;th&gt;Avg RAM(MB)&lt;/th&gt;
&lt;th&gt;Max RAM(MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;22.8&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;17.0&lt;/td&gt;
&lt;td&gt;18.9&lt;/td&gt;
&lt;td&gt;36.4&lt;/td&gt;
&lt;td&gt;82.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;0.0&lt;/td&gt;
&lt;td&gt;0.8&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;18.0&lt;/td&gt;
&lt;td&gt;21.0&lt;/td&gt;
&lt;td&gt;83.4&lt;/td&gt;
&lt;td&gt;83.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;1.7&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;20.0&lt;/td&gt;
&lt;td&gt;31.7&lt;/td&gt;
&lt;td&gt;90.4&lt;/td&gt;
&lt;td&gt;90.9&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;1.5&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;21.0&lt;/td&gt;
&lt;td&gt;42.5&lt;/td&gt;
&lt;td&gt;79.4&lt;/td&gt;
&lt;td&gt;95.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;2.2&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;21.0&lt;/td&gt;
&lt;td&gt;44.9&lt;/td&gt;
&lt;td&gt;82.0&lt;/td&gt;
&lt;td&gt;82.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;2.2&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;0.5&lt;/td&gt;
&lt;td&gt;21.0&lt;/td&gt;
&lt;td&gt;48.9&lt;/td&gt;
&lt;td&gt;85.6&lt;/td&gt;
&lt;td&gt;86.2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;2.9&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;21.0&lt;/td&gt;
&lt;td&gt;69.1&lt;/td&gt;
&lt;td&gt;92.1&lt;/td&gt;
&lt;td&gt;94.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;2.7&lt;/td&gt;
&lt;td&gt;0.6&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;37.0&lt;/td&gt;
&lt;td&gt;102.3&lt;/td&gt;
&lt;td&gt;103.8&lt;/td&gt;
&lt;td&gt;108.4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;0.2&lt;/td&gt;
&lt;td&gt;4.2&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;0.7&lt;/td&gt;
&lt;td&gt;72.0&lt;/td&gt;
&lt;td&gt;145.4&lt;/td&gt;
&lt;td&gt;108.2&lt;/td&gt;
&lt;td&gt;114.1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;/p&gt;

&lt;h4&gt;
  
  
  DeepSeek Process
&lt;/h4&gt;

&lt;p&gt;If the open file descriptors and thread count gave away the Web Server process, the memory consumption and a maximum thread count of 18 give away the DeepSeek process.&lt;/p&gt;

&lt;p&gt;The fact that the thread count never outgrows the number of GPU cores (19), even under the highest concurrency cycles, suggests that this is the process in charge of DeepSeek inference, which runs on the 19-core GPU. &lt;/p&gt;

&lt;p&gt;The average RAM usage of this process is remarkable: from 2.2 to &lt;em&gt;2.3GB&lt;/em&gt;, representing 13.7 to 14.6% of all available RAM. CPU usage is also high for a single process, peaking at 5.7% for a single request and 13.1% for 256.&lt;/p&gt;

&lt;p&gt;
  &lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concurrency&lt;/th&gt;
&lt;th&gt;Avg CPU%&lt;/th&gt;
&lt;th&gt;Max CPU%&lt;/th&gt;
&lt;th&gt;Avg Mem%&lt;/th&gt;
&lt;th&gt;Max Mem%&lt;/th&gt;
&lt;th&gt;Avg Threads&lt;/th&gt;
&lt;th&gt;Avg FDs&lt;/th&gt;
&lt;th&gt;Avg RAM(MB)&lt;/th&gt;
&lt;th&gt;Max RAM(MB)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4.6&lt;/td&gt;
&lt;td&gt;5.7&lt;/td&gt;
&lt;td&gt;13.7&lt;/td&gt;
&lt;td&gt;13.7&lt;/td&gt;
&lt;td&gt;12.1&lt;/td&gt;
&lt;td&gt;22.0&lt;/td&gt;
&lt;td&gt;2248.1&lt;/td&gt;
&lt;td&gt;2250.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;4.2&lt;/td&gt;
&lt;td&gt;5.2&lt;/td&gt;
&lt;td&gt;13.8&lt;/td&gt;
&lt;td&gt;13.8&lt;/td&gt;
&lt;td&gt;16.0&lt;/td&gt;
&lt;td&gt;23.0&lt;/td&gt;
&lt;td&gt;2259.9&lt;/td&gt;
&lt;td&gt;2263.1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;4.9&lt;/td&gt;
&lt;td&gt;7.9&lt;/td&gt;
&lt;td&gt;14.0&lt;/td&gt;
&lt;td&gt;14.1&lt;/td&gt;
&lt;td&gt;16.0&lt;/td&gt;
&lt;td&gt;28.5&lt;/td&gt;
&lt;td&gt;2299.6&lt;/td&gt;
&lt;td&gt;2303.3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;5.7&lt;/td&gt;
&lt;td&gt;11.6&lt;/td&gt;
&lt;td&gt;14.0&lt;/td&gt;
&lt;td&gt;14.3&lt;/td&gt;
&lt;td&gt;16.4&lt;/td&gt;
&lt;td&gt;34.1&lt;/td&gt;
&lt;td&gt;2299.1&lt;/td&gt;
&lt;td&gt;2335.5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;19&lt;/td&gt;
&lt;td&gt;5.8&lt;/td&gt;
&lt;td&gt;14.3&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;17.0&lt;/td&gt;
&lt;td&gt;35.3&lt;/td&gt;
&lt;td&gt;2327.4&lt;/td&gt;
&lt;td&gt;2330.6&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;td&gt;5.0&lt;/td&gt;
&lt;td&gt;12.0&lt;/td&gt;
&lt;td&gt;14.4&lt;/td&gt;
&lt;td&gt;14.4&lt;/td&gt;
&lt;td&gt;17.0&lt;/td&gt;
&lt;td&gt;34.8&lt;/td&gt;
&lt;td&gt;2359.1&lt;/td&gt;
&lt;td&gt;2366.3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;64&lt;/td&gt;
&lt;td&gt;5.5&lt;/td&gt;
&lt;td&gt;14.2&lt;/td&gt;
&lt;td&gt;14.5&lt;/td&gt;
&lt;td&gt;14.6&lt;/td&gt;
&lt;td&gt;17.1&lt;/td&gt;
&lt;td&gt;37.7&lt;/td&gt;
&lt;td&gt;2374.2&lt;/td&gt;
&lt;td&gt;2390.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;128&lt;/td&gt;
&lt;td&gt;5.5&lt;/td&gt;
&lt;td&gt;13.2&lt;/td&gt;
&lt;td&gt;14.4&lt;/td&gt;
&lt;td&gt;14.6&lt;/td&gt;
&lt;td&gt;18.0&lt;/td&gt;
&lt;td&gt;38.8&lt;/td&gt;
&lt;td&gt;2366.5&lt;/td&gt;
&lt;td&gt;2397.7&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256&lt;/td&gt;
&lt;td&gt;5.5&lt;/td&gt;
&lt;td&gt;13.1&lt;/td&gt;
&lt;td&gt;14.4&lt;/td&gt;
&lt;td&gt;14.6&lt;/td&gt;
&lt;td&gt;18.0&lt;/td&gt;
&lt;td&gt;39.5&lt;/td&gt;
&lt;td&gt;2364.1&lt;/td&gt;
&lt;td&gt;2385.5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;



&lt;/p&gt;

&lt;h2&gt;
  
  
  Summarized Results
&lt;/h2&gt;

&lt;p&gt;These results were generated by running Ollama + DeepSeek on a MacBook M2 Pro (16GB of RAM, 19-core GPU); your numbers will likely differ on a different setup.&lt;/p&gt;

&lt;h3&gt;
  
  
  How many tokens/s can I get running DeepSeek R1 Qwen 7B locally with &lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;ollama&lt;/a&gt;?
&lt;/h3&gt;

&lt;p&gt;For a single request: 53.1 tokens/s.&lt;/p&gt;

&lt;p&gt;For 19 parallel requests: 9.1 tokens/s.&lt;/p&gt;

&lt;p&gt;For 256 concurrent requests: 6.3 tokens/s.&lt;/p&gt;

&lt;p&gt;The full numbers are in the table above.&lt;/p&gt;

&lt;h3&gt;
  
  
  How many parallel requests can I serve with reasonable throughput?
&lt;/h3&gt;

&lt;p&gt;Assuming 19.9 tokens/s as a reasonable throughput, this machine can serve up to 4 requests in parallel.&lt;/p&gt;

&lt;p&gt;This may be enough for single-person daily routine tasks but is definitely not enough to run a commercial API server.&lt;/p&gt;
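&lt;p&gt;As a rough mental model (my own sketch, not part of the benchmark code): once the GPU saturates, aggregate decode throughput stays roughly flat, so the speed each user sees is approximately the aggregate divided by the number of parallel requests.&lt;/p&gt;

```javascript
// Rough sanity check: under saturation, per-request decode speed is
// approximately aggregate tokens/s divided by the number of parallel requests.
// The function name and inputs below are illustrative, not from the benchmark.
function perRequestTokensPerSecond(aggregateTps, parallelRequests) {
  return aggregateTps / parallelRequests;
}

// ~53 tokens/s of aggregate decode split across 8 parallel requests
console.log(perRequestTokensPerSecond(53, 8).toFixed(1)); // prints "6.6"
```

This simple division matches the shape of the measured results: throughput per request drops steeply as concurrency grows, even though the total stays similar.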

&lt;h3&gt;
  
  
  What is a reasonable throughput?
&lt;/h3&gt;

&lt;p&gt;While writing this article, I created a CLI to showcase different tokens/s speeds; you can check it out here.&lt;/p&gt;
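&lt;p&gt;The CLI itself isn't reproduced here, but the core idea can be sketched in a few lines of JavaScript (the names are mine, and treating each word as one token is only an approximation of real tokenizers):&lt;/p&gt;

```javascript
// Sketch of a tokens/s showcase: print `text` piece by piece at a given speed.
// Each whitespace-separated word stands in for one "token".
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Milliseconds to wait between tokens for a target speed.
const delayFor = (tokensPerSecond) => 1000 / tokensPerSecond;

async function streamAt(text, tokensPerSecond) {
  for (const token of text.split(' ')) {
    process.stdout.write(token + ' ');
    await sleep(delayFor(tokensPerSecond));
  }
  process.stdout.write('\n');
}

// Compare a snappy 50 tokens/s with a sluggish 5 tokens/s:
// await streamAt('The quick brown fox jumps over the lazy dog', 5);
```

Watching the same sentence stream at 5 vs. 50 tokens/s makes it obvious why per-request speed matters more than aggregate throughput for interactive use.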

&lt;h3&gt;
  
  
  How does the number of concurrent requests impact the performance?
&lt;/h3&gt;

&lt;p&gt;A lot! &lt;/p&gt;

&lt;p&gt;For more than 19 concurrent requests, the wait time becomes unbearable, and for more than 5 parallel requests, the response speed is too low.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq9hut1jv620e6m36pqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feq9hut1jv620e6m36pqu.png" alt="Throughput vs Average Wait Time" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Considering this is the 7B version of the model, and it can reach up to 55 tokens/s when serving a single request, I would say it is fast and good enough for interactive chat during daily tasks, with reasonably low power usage (about the same as an LED lamp).&lt;/p&gt;

&lt;p&gt;I mean, the quality is far from great compared to the 671B version (the model that beats OpenAI's models), but I believe this is just the beginning. &lt;/p&gt;

&lt;p&gt;Quantization strategies will become more effective, and soon we'll be able to cherry-pick subjects to train smaller models, which, as shown here, can run on a &lt;em&gt;developer's&lt;/em&gt; computer. It is &lt;strong&gt;possible&lt;/strong&gt;; we're in the Tech Industry.&lt;/p&gt;

&lt;p&gt;It happened with the processor, the disk, and the memory; it's only a matter of time until it happens with AI inference chips and LLMs.&lt;/p&gt;

&lt;p&gt;That's it for today, thanks for reading 😁✌️!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>go</category>
      <category>deepseek</category>
      <category>benchmark</category>
    </item>
    <item>
      <title>The Evolution of React State Management: From Local to Async</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Tue, 20 Aug 2024 20:37:43 +0000</pubDate>
      <link>https://dev.to/ocodista/the-evolution-of-react-state-management-from-local-to-async-30g9</link>
      <guid>https://dev.to/ocodista/the-evolution-of-react-state-management-from-local-to-async-30g9</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
Introduction
&lt;/li&gt;
&lt;li&gt;
Local State

&lt;ul&gt;
&lt;li&gt;
Class Components
&lt;/li&gt;
&lt;li&gt;
Functional Components
&lt;/li&gt;
&lt;li&gt;
useReducer Hook
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Global State

&lt;ul&gt;
&lt;li&gt;
What is Global State?
&lt;/li&gt;
&lt;li&gt;How to Use It?&lt;/li&gt;
&lt;li&gt;
The Main Way
&lt;/li&gt;
&lt;li&gt;The Simple Way&lt;/li&gt;
&lt;li&gt;The Wrong Way&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Async State
&lt;/li&gt;

&lt;li&gt;

Conclusion &lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;This article presents an overview of how &lt;em&gt;State&lt;/em&gt; has been managed in React applications, from thousands of years ago, when Class Components dominated the world and &lt;em&gt;functional components&lt;/em&gt; were just a bold idea, until recent times, when a new paradigm of &lt;em&gt;State&lt;/em&gt; emerged: &lt;strong&gt;Async State&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Local State
&lt;/h2&gt;

&lt;p&gt;Alright, everyone who has already worked with React knows what a Local State is.&lt;/p&gt;

&lt;p&gt;Local &lt;a href="https://react.dev/learn/state-a-components-memory" rel="noopener noreferrer"&gt;State&lt;/a&gt; is the state of a single Component.&lt;/p&gt;

&lt;p&gt;Every time the state is updated, the component re-renders.&lt;/p&gt;

&lt;p&gt;You may have worked with this ancient structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CommitList&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Component&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
      &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;componentDidMount&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fetchCommits&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;fetchCommits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.github.com/repos/facebook/react/commits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Commit&lt;/span&gt; &lt;span class="nx"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h2&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ul&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;li&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/li&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;          &lt;span class="p"&gt;))}&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ul&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Component&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Total&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Perhaps a &lt;em&gt;modern&lt;/em&gt; &lt;strong&gt;functional&lt;/strong&gt; one:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CommitList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCommits&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// To update state you can use setIsLoading, setCommits or setError.&lt;/span&gt;
  &lt;span class="c1"&gt;// As each function will overwrite only the state bound to it.&lt;/span&gt;
  &lt;span class="c1"&gt;// NOTE: It will still cause a full-component re-render&lt;/span&gt;
  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fetchCommits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.github.com/repos/facebook/react/commits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="nf"&gt;setCommits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;setIsLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;

    &lt;span class="nf"&gt;fetchCommits&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="na"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Commit&lt;/span&gt; &lt;span class="nx"&gt;List&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h2&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ul&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;li&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/li&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="p"&gt;))}&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ul&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Total&lt;/span&gt; &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Or even a &lt;em&gt;"more accepted"&lt;/em&gt; one? (Definitely rarer, though)&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;initialState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
  &lt;span class="na"&gt;userName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reducer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_LOADING&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_COMMITS&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_USERNAME&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;userName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="nl"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;CommitList&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useReducer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;reducer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;initialState&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userName&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// To update state, use dispatch. For example:&lt;/span&gt;
  &lt;span class="c1"&gt;// dispatch({ type: 'SET_LOADING', payload: true });&lt;/span&gt;
  &lt;span class="c1"&gt;// dispatch({ type: 'SET_COMMITS', payload: [...] });&lt;/span&gt;
  &lt;span class="c1"&gt;// dispatch({ type: 'SET_USERNAME', payload: 'newUsername' });&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Which can make you wonder... &lt;/p&gt;

&lt;p&gt;Why the &lt;em&gt;hack&lt;/em&gt; would I be writing this complex reducer for a single component?&lt;/p&gt;

&lt;p&gt;Well, React inherited this &lt;em&gt;ugly&lt;/em&gt; hook called &lt;code&gt;useReducer&lt;/code&gt; from a very important tool called &lt;strong&gt;Redux&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you ever had to deal with &lt;em&gt;Global State Management&lt;/em&gt; in React, you must've heard about &lt;strong&gt;Redux&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This brings us to the next topic: Global State Management.&lt;/p&gt;
&lt;h2&gt;
  
  
  Global State
&lt;/h2&gt;

&lt;p&gt;Global State Management is one of the first complex subjects when learning React.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is it?
&lt;/h3&gt;

&lt;p&gt;It can be multiple things, built in many ways, with different libraries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I like to define it as:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A single JSON object, accessed and maintained by any Component of the application.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;globalState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
  &lt;span class="na"&gt;isUnique&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;isAccessible&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;isModifiable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;isFEOnly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;I like to think of it as:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Front-End &lt;em&gt;No-SQL&lt;/em&gt; Database.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That's right, a Database. It's where you store application data that your components can read, write, update, and delete. &lt;/p&gt;

&lt;p&gt;I know: by default, the state is recreated whenever the user reloads the page. That may not be what you want, and if you're persisting state somewhere (like localStorage), you might want to learn about &lt;code&gt;migrations&lt;/code&gt; to avoid breaking the app on every new deployment.&lt;/p&gt;
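&lt;p&gt;A minimal sketch of what such a migration could look like (the version numbers, field names, and storage key here are hypothetical, not from any specific library):&lt;/p&gt;

```javascript
// Hypothetical sketch: the persisted state carries a version number, and each
// deploy that changes the state shape ships a migration from the previous version.
const CURRENT_VERSION = 2;

const migrations = {
  // v1 -> v2: `username` was renamed to `userName` (made-up example)
  2: ({ username, ...rest }) => ({ ...rest, userName: username }),
};

// Replays every migration between the persisted version and the current one
const migrate = (persisted) => {
  let state = persisted.state;
  for (let v = persisted.version + 1; v <= CURRENT_VERSION; v++) {
    if (migrations[v]) state = migrations[v](state);
  }
  return { version: CURRENT_VERSION, state };
};

// On boot, something like:
// const persisted = JSON.parse(localStorage.getItem('app-state'));
// const { state } = persisted ? migrate(persisted) : { state: initialState };
```

&lt;p&gt;This way an old payload written before the rename still loads after the new deployment, instead of crashing components that expect the new shape.&lt;/p&gt;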

&lt;p&gt;&lt;strong&gt;I like to use it as:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A multidimensional portal, where components can &lt;em&gt;dispatch&lt;/em&gt; their feelings and &lt;em&gt;select&lt;/em&gt; their attributes. Everything, everywhere, all at once.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  How to use it?
&lt;/h3&gt;
&lt;h4&gt;
  
  
  The main way
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://redux.js.org/" rel="noopener noreferrer"&gt;Redux&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is the industry standard.&lt;/p&gt;

&lt;p&gt;I have worked with React, TypeScript, and &lt;em&gt;Redux&lt;/em&gt; for 7 years. Every project I've worked on &lt;em&gt;professionally&lt;/em&gt; uses &lt;em&gt;Redux&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;The vast majority of people I've met who work with React use &lt;em&gt;Redux&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;The most mentioned tool in React open positions at &lt;a href="https://trampardecasa.com.br" rel="noopener noreferrer"&gt;Trampar de Casa&lt;/a&gt; is &lt;em&gt;Redux&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The most popular React State Management tool is...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee30bf15r2la6tc9xvak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fee30bf15r2la6tc9xvak.png" alt="Tambor" width="412" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redux&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbu9b8bf5t5a6t89w3b5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbu9b8bf5t5a6t89w3b5.png" alt="Github Stars of React State Management Tools" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to work with React, you should learn &lt;em&gt;Redux&lt;/em&gt;. &lt;br&gt;
If you currently work with &lt;em&gt;React&lt;/em&gt;, you probably already know it.&lt;/p&gt;

&lt;p&gt;Ok, here's how we usually fetch data using &lt;em&gt;Redux&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;
  Disclaimer
  &lt;br&gt;
"What? Does this make sense? Redux is for storing data, not fetching it. How the F would you fetch data with Redux?"
&lt;/p&gt;

&lt;p&gt;If you thought that, I must tell you:&lt;/p&gt;

&lt;p&gt;I'm not actually &lt;em&gt;fetching&lt;/em&gt; data with Redux. &lt;br&gt;
Redux is the application's cabinet: it stores the ~shoes~ states directly related to &lt;em&gt;fetching&lt;/em&gt;, which is why I used the loose phrase "fetch data using Redux".&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// actions&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SET_LOADING&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_LOADING&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;setLoading&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SET_LOADING&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SET_ERROR&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_ERROR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;setError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SET_ERROR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;SET_COMMITS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;SET_COMMITS&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;setCommits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;SET_COMMITS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;


&lt;span class="c1"&gt;// To be able to use ASYNC action, it's required to use redux-thunk as a middleware&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fetchCommits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.github.com/repos/facebook/react/commits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;setCommits&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// the state shared between 2-to-many components&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;initialState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// reducer&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rootReducer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;initialState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// This could also be actions[action.type].&lt;/span&gt;
  &lt;span class="k"&gt;switch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;type&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="na"&gt;SET_LOADING&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="na"&gt;SET_ERROR&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;case&lt;/span&gt; &lt;span class="na"&gt;SET_COMMITS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;action&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="nl"&gt;default&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now on the UI side, we integrate with actions using &lt;strong&gt;useDispatch&lt;/strong&gt; and &lt;strong&gt;useSelector&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Commits.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useDispatch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useSelector&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-redux&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;fetchCommits&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./action&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Commits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dispatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useDispatch&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;fetchCommits&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nb"&gt;Error&lt;/span&gt; &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="nx"&gt;trying&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="nx"&gt;fetch&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ul&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;li&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/li&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;))}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ul&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If &lt;code&gt;Commits.tsx&lt;/code&gt; were the only component that needed to access the &lt;code&gt;commits&lt;/code&gt; list, you shouldn't store this data in the Global State. It could use local state instead.&lt;/p&gt;

&lt;p&gt;But let's suppose you have other components that need to interact with this list; one of them may be as simple as this one:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// TotalCommitsCount.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useSelector&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-redux&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;commitCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSelector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Total&lt;/span&gt; &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commitCount&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;
  Disclaimer
  &lt;br&gt;
In theory, this piece of code would make more sense living inside &lt;code&gt;Commits.tsx&lt;/code&gt;, but let's assume we want to display it in multiple places of the app, so it makes sense to keep the &lt;code&gt;commits&lt;/code&gt; list in the Global State and to have this &lt;code&gt;TotalCommitsCount&lt;/code&gt; component.
&lt;/p&gt;

&lt;p&gt;With the &lt;code&gt;index.js&lt;/code&gt; entry point being something like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;ReactDOM&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-dom&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;thunk&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;redux-thunk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;createStore&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;applyMiddleware&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;redux&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Provider&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-redux&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Commits&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./Commits&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./TotalCommitsCount&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Commits&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/main&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;store&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;createStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rootReducer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;applyMiddleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;thunk&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nx"&gt;ReactDOM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Provider&lt;/span&gt; &lt;span class="nx"&gt;store&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;store&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Provider&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;,
&lt;/span&gt;  &lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;root&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This works, but man, it looks overly complicated for something as simple as fetching data, right? &lt;/p&gt;

&lt;p&gt;Redux feels a little too bloated to me. &lt;/p&gt;

&lt;p&gt;You're forced to create actions and reducers; you often also need a string name for each action to use inside the reducer; and depending on the project's folder structure, each layer may live in a different file.&lt;/p&gt;

&lt;p&gt;Which is not productive.&lt;/p&gt;
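&lt;p&gt;To make that boilerplate concrete, here's what adding one &lt;em&gt;hypothetical&lt;/em&gt; boolean flag costs you in this classic Redux style: a string constant, an action creator, and a reducer case.&lt;/p&gt;

```javascript
// Three pieces of boilerplate for ONE hypothetical flag (isDarkMode):
const SET_DARK_MODE = 'SET_DARK_MODE'; // 1. the action type constant

const setDarkMode = (isDarkMode) => ({ // 2. the action creator
  type: SET_DARK_MODE,
  payload: isDarkMode,
});

const reducer = (state = { isDarkMode: false }, action) => {
  switch (action.type) {
    case SET_DARK_MODE: // 3. the reducer case
      return { ...state, isDarkMode: action.payload };
    default:
      return state;
  }
};
```

&lt;p&gt;And in a real project, each of those three pieces often lives in a different file.&lt;/p&gt;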

&lt;p&gt;But wait, there is a simpler way.&lt;/p&gt;
&lt;h4&gt;
  
  
  The simple way
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://zustand-demo.pmnd.rs/" rel="noopener noreferrer"&gt;Zustand&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the time I'm writing this article, Zustand has 3,495,826 weekly downloads, more than 45,000 stars on GitHub, and 2, that's right, &lt;strong&gt;TWO&lt;/strong&gt; open Pull Requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ONE OF THEM IS ABOUT UPDATING ITS DOCS&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhex2iy3z3jckw8qw8y0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhex2iy3z3jckw8qw8y0f.png" alt="Zustand Open Issues" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If this is not a piece of Software Programming art, I don't know what is.&lt;/p&gt;

&lt;p&gt;Here's how to replicate the previous code using Zustand.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// store.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;create&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zustand&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;useStore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt;
  &lt;span class="na"&gt;fetchCommits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.github.com/repos/facebook/react/commits&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This was our Store; now, the UI.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Commits.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;useStore&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./store&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Commits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;fetchCommits&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useStore&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;fetchCommits&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;fetchCommits&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLoading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nb"&gt;Error&lt;/span&gt; &lt;span class="nx"&gt;occurred&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ul&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;li&lt;/span&gt; &lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sha&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/li&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="p"&gt;))}&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ul&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;And last but not least.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// TotalCommitsCount.tsx&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;useStore&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./store&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; 
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TotalCommitsCount&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; 
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;totalCommits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;state&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;commits&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt; 
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 
            &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h2&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Total&lt;/span&gt; &lt;span class="na"&gt;Commits&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h2&amp;gt; &amp;lt;p&amp;gt;{totalCommits}&amp;lt;/&lt;/span&gt;&lt;span class="nx"&gt;p&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&amp;gt;&lt;/span&gt;&lt;span class="err"&gt; 
&lt;/span&gt;    &lt;span class="p"&gt;);&lt;/span&gt; 
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;There are no actions or reducers; there is a &lt;code&gt;Store&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;And it's advisable to have &lt;code&gt;slices&lt;/code&gt; of the &lt;code&gt;Store&lt;/code&gt;, so everything stays close to the &lt;strong&gt;feature&lt;/strong&gt; that owns the data.&lt;/p&gt;

&lt;p&gt;It works perfectly with a &lt;code&gt;folder-by-feature&lt;/code&gt; structure.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsec16s508vwltbraolkh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsec16s508vwltbraolkh.png" alt="Chef Kiss Emoji" width="680" height="626"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  The wrong way
&lt;/h4&gt;

&lt;p&gt;I need to confess something: both of my previous examples are wrong.&lt;/p&gt;

&lt;p&gt;Let me do a quick disclaimer: they're not &lt;strong&gt;wrong&lt;/strong&gt;, they're outdated, and therefore &lt;strong&gt;wrong&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This wasn't always wrong, though. That's how we used to handle data fetching in React applications a while ago, and you may still find code similar to this out in the wild.&lt;/p&gt;

&lt;p&gt;But there is another way.&lt;/p&gt;

&lt;p&gt;An easier one, and more aligned with an essential feature for web development: &lt;strong&gt;Caching&lt;/strong&gt;. But I'll get back to this subject later.&lt;/p&gt;

&lt;p&gt;Currently, to fetch data in a single component, the following flow is required:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnplp6n4jipsu77kvpwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnplp6n4jipsu77kvpwz.png" alt="Fetching data flow" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What happens if I need to fetch data from 20 endpoints inside 20 components?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;20x isLoading + 20x isError + 20x actions to mutate these properties.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What would that look like?&lt;/p&gt;

&lt;p&gt;With 20 endpoints, this becomes a &lt;strong&gt;very&lt;/strong&gt; repetitive process and causes a good amount of duplicated code.&lt;/p&gt;
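&lt;p&gt;To make the duplication concrete, here's a minimal sketch (plain JavaScript, with a hypothetical &lt;code&gt;createFetchSlice&lt;/code&gt; helper and a Zustand-style &lt;code&gt;set&lt;/code&gt; function assumed). Even factored like this, caching, retrying, and deduplication are still entirely on you:&lt;/p&gt;

```javascript
// Hypothetical helper (not from the original post): stamps out the
// isLoading/isError/data triple plus the fetch action for one endpoint,
// given a Zustand-style set(partial) function.
function createFetchSlice(set, key, fetcher) {
  return {
    [key]: { isLoading: false, isError: false, data: [] },
    [key + 'Fetch']: async () => {
      set({ [key]: { isLoading: true, isError: false, data: [] } });
      try {
        const data = await fetcher();
        set({ [key]: { isLoading: false, isError: false, data } });
      } catch (error) {
        set({ [key]: { isLoading: false, isError: true, data: [] } });
      }
    },
  };
}
```

&lt;p&gt;Repeat that 20 times (or loop over 20 endpoint definitions) and the store works, but every cross-cutting concern still has to be bolted on by hand.&lt;/p&gt;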

&lt;p&gt;What if you need to implement a caching feature to avoid calling the same endpoint again within a short period? (Or any other condition.)&lt;/p&gt;

&lt;p&gt;Well, that translates into &lt;strong&gt;a lot of work&lt;/strong&gt; just to get &lt;strong&gt;basic&lt;/strong&gt; features (like caching) and well-written components that are prepared for loading/error states.&lt;/p&gt;

&lt;p&gt;This is why &lt;strong&gt;Async State&lt;/strong&gt; tools were born.&lt;/p&gt;
&lt;h2&gt;
  
  
  Async State
&lt;/h2&gt;

&lt;p&gt;Before talking about Async State, I want to mention something. We know &lt;strong&gt;how&lt;/strong&gt; to use Local and Global state, but so far I haven't mentioned &lt;strong&gt;what&lt;/strong&gt; should be stored and &lt;strong&gt;why&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Global State&lt;/em&gt; example has a flaw, and an important one.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;TotalCommitsCount&lt;/code&gt; component will always display the Commits Count, even if it's loading or has an error.&lt;/p&gt;

&lt;p&gt;If the request failed, there's no way to know whether the &lt;em&gt;Total Commits Count&lt;/em&gt; really is 0, so presenting that value is presenting &lt;strong&gt;a lie&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In fact, until the request finishes, there is no way to know for sure what the &lt;em&gt;Total Commits Count&lt;/em&gt; value is.&lt;/p&gt;

&lt;p&gt;This is because the &lt;em&gt;Total Commits Count&lt;/em&gt; is not a value we have &lt;strong&gt;inside&lt;/strong&gt; the application. It's &lt;strong&gt;external&lt;/strong&gt; information, &lt;em&gt;async&lt;/em&gt; stuff, you know.&lt;/p&gt;

&lt;p&gt;We shouldn't be telling lies if we don't know the truth.&lt;/p&gt;

&lt;p&gt;That's why we must identify &lt;em&gt;Async State&lt;/em&gt; in our application and create components prepared for it.&lt;/p&gt;

&lt;p&gt;We can do this with &lt;a href="https://tanstack.com/query/latest" rel="noopener noreferrer"&gt;React-Query&lt;/a&gt;, &lt;a href="https://swr.vercel.app/" rel="noopener noreferrer"&gt;SWR&lt;/a&gt;, &lt;a href="https://redux-toolkit.js.org/rtk-query/overview" rel="noopener noreferrer"&gt;Redux Toolkit Query&lt;/a&gt; and many others.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkrk71q5jfrlm160as08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkrk71q5jfrlm160as08.png" alt="Github Stars of React Async State Tools" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this article, I'll use React-Query. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;I recommend reading the docs of each of these tools to better understand which problems they solve.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's the code: a &lt;code&gt;useCommits&lt;/code&gt; hook built on React-Query, shared by both components.&lt;/p&gt;



&lt;p&gt;No more actions, no more dispatches, no more &lt;strong&gt;Global State&lt;/strong&gt; for fetching data.&lt;/p&gt;

&lt;p&gt;This is what you have to do in your &lt;code&gt;App.tsx&lt;/code&gt; file to have React-Query properly configured: create a &lt;code&gt;QueryClient&lt;/code&gt; and wrap the app in its &lt;code&gt;QueryClientProvider&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You see, &lt;strong&gt;Async State&lt;/strong&gt; is special. &lt;/p&gt;

&lt;p&gt;It's like Schrödinger's cat – you don't know the state until you observe it (or run it).&lt;/p&gt;

&lt;p&gt;But wait, if both components are calling &lt;code&gt;useCommits&lt;/code&gt; and &lt;code&gt;useCommits&lt;/code&gt; is calling an &lt;code&gt;API endpoint&lt;/code&gt;, does this mean that there will be TWO identical requests to load the same data?&lt;/p&gt;

&lt;p&gt;Short Answer: &lt;strong&gt;no!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Long Answer: React Query is awesome. It automatically handles this situation for you: it comes with pre-configured caching that is smart enough to know when to &lt;em&gt;refetch&lt;/em&gt; your data and when to simply use the cache.&lt;/p&gt;
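&lt;p&gt;To illustrate the core idea (this is &lt;em&gt;not&lt;/em&gt; React Query's actual implementation, just a sketch of the de-duplication concept): concurrent callers asking for the same key can share a single in-flight promise.&lt;/p&gt;

```javascript
// Sketch of request de-duplication: callers that ask for the same key while
// a request is in flight get the same promise back, so the network is hit
// only once. React Query layers caching, staleness and retries on top of this.
const inflight = new Map();

function dedupedFetch(key, fetcher) {
  if (inflight.has(key)) return inflight.get(key);
  const promise = fetcher().finally(() => inflight.delete(key));
  inflight.set(key, promise);
  return promise;
}
```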

&lt;p&gt;It's also extremely configurable, so you can tweak it to fit 100% of your application's needs.&lt;/p&gt;

&lt;p&gt;Now our components are always ready for &lt;code&gt;isLoading&lt;/code&gt; and &lt;code&gt;isError&lt;/code&gt;, the &lt;code&gt;Global State&lt;/code&gt; is less polluted, and we get some pretty neat features out of the box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Now you know the difference between &lt;em&gt;Local&lt;/em&gt;, &lt;em&gt;Global&lt;/em&gt; and &lt;em&gt;Async State&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Local   -&amp;gt; Component only.&lt;br&gt;
Global -&amp;gt; Single-Json-NoSQL-DB-For-The-FE.&lt;br&gt;
Async  -&amp;gt; External, Schrödinger's-cat-like data living outside of the FE application, which requires &lt;code&gt;Loading&lt;/code&gt; and can return &lt;code&gt;Error&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, let me know if you have different opinions or any constructive feedback, cheers!&lt;/p&gt;

</description>
      <category>react</category>
      <category>redux</category>
      <category>webdev</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Node vs Go: API Showdown</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Mon, 01 Jan 2024 14:15:57 +0000</pubDate>
      <link>https://dev.to/ocodista/node-vs-go-api-showdown-4njl</link>
      <guid>https://dev.to/ocodista/node-vs-go-api-showdown-4njl</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Disclaimer and Introduction&lt;/li&gt;
&lt;li&gt;
How were the metrics gathered?

&lt;ul&gt;
&lt;li&gt;Tech Stack&lt;/li&gt;
&lt;li&gt;Flow Diagram&lt;/li&gt;
&lt;li&gt;Manual RDS Setup&lt;/li&gt;
&lt;li&gt;Environment Initialization&lt;/li&gt;
&lt;li&gt;Monitoring Process&lt;/li&gt;
&lt;li&gt;Script Parameters&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

2,000 Requests per Second

&lt;ul&gt;
&lt;li&gt;Latency x Seconds&lt;/li&gt;
&lt;li&gt;File Descriptors Count&lt;/li&gt;
&lt;li&gt;Threads Count&lt;/li&gt;
&lt;li&gt;RAM&lt;/li&gt;
&lt;li&gt;CPU&lt;/li&gt;
&lt;li&gt;Overall&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

3,000 Requests per Second

&lt;ul&gt;
&lt;li&gt;Dev.to can't link to same-name sections 😔&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

5,000 Requests per Second

&lt;ul&gt;
&lt;li&gt;But they exist, and they're awesome!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Final Considerations&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Disclaimer and Introduction
&lt;/h2&gt;

&lt;p&gt;This post is primarily for fun and educational exploration. The results here should not be the sole basis of your technical decisions. None of this means that one language is better than the other, so please don't take it too seriously.&lt;/p&gt;

&lt;p&gt;In fact, it does not make much sense to compare such different languages.&lt;/p&gt;

&lt;p&gt;Cool, with that being said, let's have some fun, compare some metrics, and get a better understanding of how both languages deal with some key aspects (RAM, CPU, Open File Descriptors Count &amp;amp; OS Threads Count) when under &lt;em&gt;severe&lt;/em&gt; pressure.&lt;/p&gt;

&lt;h2&gt;
  
  
  How were the metrics gathered?
&lt;/h2&gt;

&lt;p&gt;If you want to know how this benchmark was profiled, expand the section below; otherwise, you can skip directly to the results 🤓.&lt;/p&gt;

&lt;p&gt;
  &lt;strong&gt;Behind the scenes&lt;/strong&gt;
  &lt;h3&gt;
  
  
  Tech Stack
&lt;/h3&gt;

&lt;p&gt;It was created using the following technologies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1x EC2 t2.micro (the API's command center)&lt;/li&gt;
&lt;li&gt;1x EC2 t2.xlarge (the request-launching gun)&lt;/li&gt;
&lt;li&gt;A Postgres RDS for data persistence&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/tsenart/vegeta" rel="noopener noreferrer"&gt;Vegeta&lt;/a&gt; for unleashing HTTP load&lt;/li&gt;
&lt;li&gt;Golang 1.21.4 and Node.js 21.4.0 for the API showdown&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://opentofu.org/" rel="noopener noreferrer"&gt;Open Tofu&lt;/a&gt; for automagically spinning up our servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; The API server has only a 1-core processor with 1GB of RAM. This post shows the efficiency of both approaches in a very limited environment. You can check the full code &lt;a href="https://github.com/ocodista/api-benchmark/tree/main" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Flow Diagram
&lt;/h3&gt;

&lt;p&gt;How does the communication happen?&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv44smy5cgsexicfkvcq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv44smy5cgsexicfkvcq.png" alt="Flow Diagram" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both servers are located within the same VPC in AWS, ensuring minimal latency. However, the RDS, although situated in the same AWS Region (sa-east-1), operates in another VPC, introducing a more realistic latency.&lt;/p&gt;

&lt;p&gt;This is good because, in a real-world scenario, &lt;strong&gt;there will be latency&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Manual RDS Setup
&lt;/h3&gt;

&lt;p&gt;Unfortunately, I wasn't able to set up the Postgres RDS with OpenTofu (cough cough: &lt;em&gt;skill issue&lt;/em&gt;), so I had to manually craft it on AWS and execute the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;TRUNCATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ok, with everything in place, it's showtime! &lt;/p&gt;

&lt;h3&gt;
  
  
  Environment Initialization
&lt;/h3&gt;

&lt;p&gt;Start the environment with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tofu apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/ocodista/api-benchmark/blob/main/tofu/main.tf" rel="noopener noreferrer"&gt;Full main.tf file here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What does it do?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launches 1 VPC + 2 Subnets&lt;/li&gt;
&lt;li&gt;Boots up 2 Ubuntu Servers, executing specific scripts&lt;/li&gt;
&lt;li&gt;Installs Node + Golang on the API server&lt;/li&gt;
&lt;li&gt;Sets up Vegeta on the Gun server&lt;/li&gt;
&lt;li&gt;Deploys the API and load-tester code&lt;/li&gt;
&lt;li&gt;Generates 2 SSH scripts for connectivity (ssh_connect_api.sh &amp;amp; ssh_connect_gun.sh)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring Process
&lt;/h3&gt;

&lt;p&gt;With this setup, I can access the API server and initiate either the Node or Go API.&lt;/p&gt;

&lt;p&gt;Concurrently, I start &lt;em&gt;monitor_process.sh&lt;/em&gt; to snag metrics like RAM, CPU, Threads Count &amp;amp; File Descriptors Count and save them to a &lt;code&gt;.csv&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;All of this is done based on the process ID of the running API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh4g4a1aunrwkdk7ukqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh4g4a1aunrwkdk7ukqf.png" alt="Monitoring flow" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ocodista/api-benchmark/blob/main/monitor_process.sh" rel="noopener noreferrer"&gt;Check the script here!&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Script Parameters
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Process ID&lt;/li&gt;
&lt;li&gt;The number of requests per second (to name the CSV file correctly)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the API is running, I get the process ID using &lt;code&gt;console.log(process.pid)&lt;/code&gt; on Node or &lt;code&gt;fmt.Printf("ID: %d", os.Getpid())&lt;/code&gt; on Golang.&lt;/p&gt;

&lt;p&gt;Then, I can simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./monitor_process.sh 2587 2000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command monitors our process, updating a &lt;em&gt;.csv&lt;/em&gt; file named &lt;em&gt;process_stats_2000.csv&lt;/em&gt; every second with fresh data.&lt;/p&gt;

&lt;p&gt;Ok, now let's analyze the results, compare both APIs, and see what learnings we can squeeze from them. Let's get started!&lt;/p&gt;

&lt;br&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  2,000 Requests per Second
&lt;/h2&gt;

&lt;p&gt;Alright, for this first step, I ran &lt;a href="https://github.com/ocodista/api-benchmark/blob/main/load-tester/vegeta/metrics.sh" rel="noopener noreferrer"&gt;this Vegeta script&lt;/a&gt;, which fires 2,000 requests per second over 30s at the API server.&lt;/p&gt;

&lt;p&gt;This was done inside the &lt;em&gt;Gun Server&lt;/em&gt; by running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./metrics.sh 2000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which produces the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Starting Vegeta attack for 30s at 2000 requests per second...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I combined the results into some beautiful charts. Let's take a look at them:&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency x Seconds
&lt;/h3&gt;

&lt;p&gt;By looking at the latency chart, we can see that Golang struggled a lot initially, taking ~5s to stabilize.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wgaus6ma4grkab90430.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wgaus6ma4grkab90430.png" alt="2,000 reqs/s latency over seconds" width="768" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This may have been a one-time anomaly, but since I won't redo the test and the metrics are all correct, I'll call this one a &lt;strong&gt;lucky shot&lt;/strong&gt; for Node.&lt;/p&gt;

&lt;p&gt;Node kept a consistent latency throughout most of the test, with spikes at 12s and 20s.&lt;/p&gt;

&lt;p&gt;Golang, on the other hand, had some trouble stabilizing its latency at the beginning, which cost it the pole position. However, it did well after that, keeping the latency around 230ms.&lt;/p&gt;

&lt;h3&gt;
  
  
  File Descriptors Count
&lt;/h3&gt;

&lt;p&gt;This one is interesting.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq846somzds7a1xjg1235.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq846somzds7a1xjg1235.png" alt="FD Count 2,000 reqs/s" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Linux, a new socket and a corresponding &lt;em&gt;File Descriptor (FD)&lt;/em&gt; are created for each incoming server connection. These FDs store connection information.&lt;/p&gt;

&lt;p&gt;On Ubuntu, the default &lt;em&gt;soft&lt;/em&gt; limit for open file descriptors is 1024.&lt;/p&gt;

&lt;p&gt;However, both Go and Node ignore the soft limit and always use the hard limit. This can be verified by reading &lt;code&gt;/proc/$PID/limits&lt;/code&gt; after the node/go process has started.&lt;/p&gt;

&lt;p&gt;You can use the command &lt;code&gt;ulimit -n&lt;/code&gt; to see the OS soft limit of open file descriptors of the current shell session.&lt;/p&gt;
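&lt;p&gt;These checks can be reproduced from a shell on a Linux box (substitute the API's PID for &lt;code&gt;self&lt;/code&gt; to inspect the running server; the paths are Linux-specific):&lt;/p&gt;

```shell
# Soft limit of the current shell session
ulimit -n
# Soft and hard "open files" limits as seen by a process
grep "open files" /proc/self/limits
# Number of file descriptors the process currently has open
ls /proc/self/fd | wc -l
```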

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b74e6p22e88piwg3ziu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b74e6p22e88piwg3ziu.png" alt="Node limits" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ok then, this means the OS soft limit does not end up constraining the number of open FDs; the language runtime raises it and manages them.&lt;/p&gt;

&lt;p&gt;In this test, Node kept a lower, but irregular, number of open FDs, while Golang spiked to 8,000, stabilized, and remained consistent until the end.&lt;/p&gt;
&lt;h3&gt;
  
  
  Threads Count
&lt;/h3&gt;

&lt;p&gt;Wasn't Node.js single-threaded? 🤯&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91s9c0030tz9xd2ov7xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F91s9c0030tz9xd2ov7xm.png" alt="Threads Count 2,000 reqs/s" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, &lt;strong&gt;no&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By default, Node starts a few threads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1 Main Thread&lt;/strong&gt;: Executes JavaScript code and handles the event loop.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;4 Worker Threads&lt;/strong&gt; (default &lt;a href="https://docs.libuv.org/en/v1.x/threadpool.html" rel="noopener noreferrer"&gt;libuv thread pool&lt;/a&gt;)

&lt;ul&gt;
&lt;li&gt;Handles &lt;strong&gt;blocking&lt;/strong&gt; async I/O such as DNS lookup queries, crypto module, and some file I/O operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;V8 Threads&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1 Compiler Thread&lt;/strong&gt;: Compiles JavaScript into native machine code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1 Profiler Thread&lt;/strong&gt;: Collects performance profiles for optimizations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2 or more Garbage Collector Threads&lt;/strong&gt;: Manages memory allocation and garbage collection.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Additional Internal Threads&lt;/strong&gt;: Number varies, for various Node.js and V8 background tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I noticed that, at the startup of the Node Process, it created 11 OS threads and once the requests started arriving, the count jumped to 15 OS threads and stayed there.&lt;/p&gt;

&lt;p&gt;Go, on the other hand, kept 4 stable OS threads.&lt;/p&gt;
&lt;h3&gt;
  
  
  RAM
&lt;/h3&gt;

&lt;p&gt;
  TL;DR
  &lt;ul&gt;
&lt;li&gt;Node.js: 

&lt;ul&gt;
&lt;li&gt;Consistently low RAM usage, between 75MB and 120MB.&lt;/li&gt;
&lt;li&gt;Utilizes an Event-Loop for I/O operations, avoiding new threads.&lt;/li&gt;
&lt;li&gt;More about Node's Event Loop: &lt;a href="https://dev.to/ocodista/inside-nodejs-exploring-asynchronous-io-4bg1"&gt;Exploring Asynchronous I/O in Node.js&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Go:

&lt;ul&gt;
&lt;li&gt;Higher initial RAM usage, stabilizing at 300MB.&lt;/li&gt;
&lt;li&gt;Spawns a new goroutine for each network request.&lt;/li&gt;
&lt;li&gt;Goroutines are lighter than OS threads but still impact memory under load.&lt;/li&gt;
&lt;li&gt;Insight into Go's Runtime scheduler: &lt;a href="https://www.youtube.com/watch?v=YHRO5WQGh0k" rel="noopener noreferrer"&gt;Go Runtime Scheduler Talk&lt;/a&gt;.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;






&lt;/p&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjc388vxey0lrwnr9hjn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjc388vxey0lrwnr9hjn8.png" alt="RAM Usage 2,000 reqs/s" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;
  Explanation
  &lt;br&gt;
Node kept a lower and more stable RAM usage, between 75MB and 120MB, throughout the test. 

&lt;p&gt;Meanwhile, Go's RAM usage increased in the first seconds until it stabilized at 300MB (almost tripling Node's peak).&lt;/p&gt;

&lt;p&gt;This difference can be explained due to how both languages deal with asynchronous operations, like I/O database communication.&lt;/p&gt;

&lt;p&gt;Node uses an Event-Loop approach, which means it &lt;strong&gt;doesn't create new threads&lt;/strong&gt; for I/O. In contrast, Go spawns a new goroutine for each request, which increases memory usage. A &lt;em&gt;goroutine&lt;/em&gt; is a lightweight thread managed by the Go Runtime. &lt;/p&gt;

&lt;p&gt;Even though lighter than an OS Thread, it still leaves a memory footprint when under heavy load.&lt;/p&gt;

&lt;p&gt;For insights on the Node Event Loop, check &lt;a href="https://dev.to/ocodista/inside-nodejs-exploring-asynchronous-io-4bg1"&gt;this blog post&lt;/a&gt; I wrote.&lt;/p&gt;

&lt;p&gt;To better understand the Go Runtime scheduler, please watch &lt;a href="https://www.youtube.com/watch?v=YHRO5WQGh0k&amp;amp;t=2s" rel="noopener noreferrer"&gt;this phenomenal talk&lt;/a&gt; - one of the best I've ever watched.&lt;br&gt;
&lt;/p&gt;

&lt;br&gt;
&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU
&lt;/h3&gt;

&lt;p&gt;Node used less CPU than Go in this one; this may be because the Go Runtime is more complex and requires more scheduling work than libuv's Event Loop.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9w55m8w8pmwpfvy4jty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc9w55m8w8pmwpfvy4jty.png" alt="CPU Usage 2,000 reqs/s" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Overall
&lt;/h3&gt;

&lt;p&gt;I must be honest: I was surprised by this result. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node&lt;/strong&gt; won this one 🏆.&lt;/p&gt;

&lt;p&gt;It showcased:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Superior p99 latency, responding in under 1.2s for 99% of requests, compared to Go's 4.31s&lt;/li&gt;
&lt;li&gt;Faster average latency, clocking in at 147ms versus Go's 459ms, 3.1x faster! &lt;/li&gt;
&lt;li&gt;Significantly smaller maximum latency, peaking at just 1.5s against Go's 6.4s, 4.2x slower. &lt;em&gt;(c'mon Gopher, you're looking bad!)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6ln4v0rn9qry64liksn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6ln4v0rn9qry64liksn.png" alt="Go vs Node 2,000 requests/s" width="800" height="961"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3,000 Requests per Second
&lt;/h2&gt;

&lt;p&gt;Now let's redo the test, send 3,000 requests/s over 30s for each API, and see the results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency x Seconds
&lt;/h3&gt;

&lt;p&gt;While Go was able to keep a really stable latency with only two small spikes, Node was in some deep trouble and showcased a very inconsistent latency throughout the test.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgr8freorq6l149dfneqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgr8freorq6l149dfneqr.png" alt="3,000 reqs/s latency over second" width="768" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  File Descriptors Count
&lt;/h3&gt;

&lt;p&gt;Remember I told you that neither Node nor Go respects the &lt;em&gt;soft limit&lt;/em&gt; of the Open File Descriptors and &lt;strong&gt;both languages&lt;/strong&gt; manage it by themselves?&lt;/p&gt;

&lt;p&gt;Here's a fun fact:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6lfs8cku3l506ugvcoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6lfs8cku3l506ugvcoa.png" alt="FDs count 3,000 requests Node vs Go" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Golang was able to process, handle, and deliver more requests, in a shorter time, using fewer resources by &lt;strong&gt;setting a "hard" limit of open FDs&lt;/strong&gt; at each period of the test (based on some metric that I'm not sure which one).&lt;/p&gt;

&lt;p&gt;This is super cool!&lt;/p&gt;

&lt;p&gt;Look at how Go managed its FDs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;8 FDs&lt;/strong&gt;: In the first 0-3 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;1,590 FDs&lt;/strong&gt;: Between 4-17 seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2,225 FDs&lt;/strong&gt;: Between 18-31 seconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Node, on the other hand, didn't interfere with the open file descriptors like Go did. You can see that on the chart.&lt;/p&gt;

&lt;p&gt;Empirically, it seems that Go is pre-allocating (or pre-opening) File Descriptors at some rate and reusing them instead of generating one for each connection at the time they arrive.&lt;/p&gt;

&lt;p&gt;I'm not sure exactly how they do that, though, feel free to comment if you have some hint 😄&lt;/p&gt;

&lt;p&gt;I found some good reads about this: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://groups.google.com/g/golang-nuts/c/RcDe_scgLM0" rel="noopener noreferrer"&gt;Does net/http have connection pool?&lt;/a&gt; - talks about Go's &lt;code&gt;net/http&lt;/code&gt; package and how it manages connections.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://morsmachine.dk/netpoller" rel="noopener noreferrer"&gt;The Go Netpoller&lt;/a&gt; - article that explains about the Go Netpoller.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Threads Count
&lt;/h3&gt;

&lt;p&gt;Ok, something worth noticing happened on this test.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3knhsrwtd0ur50opnsgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3knhsrwtd0ur50opnsgk.png" alt="Threads Count 3,000 requests Node vs Go" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Node&lt;/strong&gt;: jumped from 11 to 15 OS Threads as the requests started arriving. I believe this is due to the DNS Lookup operations, as briefly mentioned in &lt;a href="https://github.com/nodejs/node/issues/8436" rel="noopener noreferrer"&gt;this issue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Go&lt;/strong&gt;: Stepped up its game from 4 to 5 OS Threads. It's the Runtime Scheduler orchestrating the show, Go is smart enough to pack multiple Goroutines into each OS Thread. When it gets clumsy, it smoothly starts a new OS thread.&lt;/p&gt;

&lt;p&gt;This approach is not just efficient; it's a masterclass in resource optimization, &lt;em&gt;squeezing&lt;/em&gt; every last bit of performance from the hardware. &lt;strong&gt;It is Amazing! 🚀&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  RAM
&lt;/h3&gt;

&lt;p&gt;By now, you've probably noticed that the Node.js line lasts longer than the Go line; that's because the Node API took more time to answer all the requests it received.&lt;/p&gt;

&lt;p&gt;This also impacted RAM usage. Remember that in the first test Node's RAM usage was way below Go's?&lt;/p&gt;

&lt;p&gt;That's not the case when you have tons of connections hanging on the server waiting to be processed.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj84i750ut4nzd6grhzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj84i750ut4nzd6grhzb.png" alt="RAM usage 3,000 requests Node vs Go" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU
&lt;/h3&gt;

&lt;p&gt;This time, both runtimes required much more CPU: Go kept its usage below 35%, while Node peaked at 64%.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok21gapl1rnnjft5q90r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fok21gapl1rnnjft5q90r.png" alt="CPU Usage 3,000 requests Node vs Go" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Overall
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8de4u93bwrourglek46r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8de4u93bwrourglek46r.png" alt="Overall 3,000 req/s Node vs Go" width="800" height="961"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🎉 We have a fight! 🎉&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The dispute is open, and Golang was way superior in this one. Let's look at the numbers:&lt;/p&gt;

&lt;p&gt;Golang had:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lower Latencies&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;p99: 736.873ms against 30.001s, &lt;strong&gt;40 times lower&lt;/strong&gt; than Node.&lt;/li&gt;
&lt;li&gt;Average: 60.454ms versus 7.079s - &lt;strong&gt;118 times faster&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Maximum: Go peaked at 1.33s, while Node reached the sky with 30.004s.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Perfect Success Rate (100%)&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Against 91.93% from Node, which had some requests failing.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;That was a massacre; it was like comparing a new sports car with a &lt;em&gt;Fusquinha&lt;/em&gt; (an old VW Beetle).&lt;/p&gt;

&lt;p&gt;
  Detailed comparison
  &lt;h4&gt;
  
  
  Node.js Performance Metrics:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total Requests&lt;/strong&gt;: 86,922 with a rate of 2,897.33 per second.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt;: 1,449.29 requests per second.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Total: 55.135 seconds.&lt;/li&gt;
&lt;li&gt;Attack Phase: 30.001 seconds.&lt;/li&gt;
&lt;li&gt;Wait Time: 25.134 seconds.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latencies&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Minimum: 3.458 ms.&lt;/li&gt;
&lt;li&gt;Mean: 7.079 seconds.&lt;/li&gt;
&lt;li&gt;Median (50th Percentile): 6.068 seconds.&lt;/li&gt;
&lt;li&gt;90th Percentile: 9.563 seconds.&lt;/li&gt;
&lt;li&gt;95th Percentile: 26.814 seconds.&lt;/li&gt;
&lt;li&gt;99th Percentile: 30.001 seconds.&lt;/li&gt;
&lt;li&gt;Maximum: 30.004 seconds.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transfer&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Bytes In: 2,077,556 (average 23.90 bytes/request).&lt;/li&gt;
&lt;li&gt;Bytes Out: 7,351,352 (average 84.57 bytes/request).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success Ratio&lt;/strong&gt;: 91.93%.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status Codes&lt;/strong&gt;: 7,016 failures, 79,906 successes (201 code).&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Golang Performance Metrics:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Total Requests&lt;/strong&gt;: 90,001 with a rate of 3,000.09 per second.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt;: 2,999.89 requests per second.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Duration&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Total: 30.001 seconds.&lt;/li&gt;
&lt;li&gt;Attack Phase: 29.999 seconds.&lt;/li&gt;
&lt;li&gt;Wait Time: 2.035 ms.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Latencies&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Minimum: 1.371 ms.&lt;/li&gt;
&lt;li&gt;Mean: 60.454 ms.&lt;/li&gt;
&lt;li&gt;Median (50th Percentile): 4.773 ms.&lt;/li&gt;
&lt;li&gt;90th Percentile: 194.115 ms.&lt;/li&gt;
&lt;li&gt;95th Percentile: 453.031 ms.&lt;/li&gt;
&lt;li&gt;99th Percentile: 736.873 ms.&lt;/li&gt;
&lt;li&gt;Maximum: 1.33 seconds.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Transfer&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Bytes In: 2,430,027 (average 27.00 bytes/request).&lt;/li&gt;
&lt;li&gt;Bytes Out: 8,280,092 (average 92.00 bytes/request).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Success Ratio&lt;/strong&gt;: 100%.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Status Codes&lt;/strong&gt;: All 90,001 requests were successful (201 code).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;: Go had a higher throughput compared to Node.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latencies&lt;/strong&gt;: Node exhibited significantly higher latencies, especially in the mean, 95th, and 99th percentiles.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Success Rate&lt;/strong&gt;: Go achieved a 100% success rate, whereas Node had a lower success rate with some failed requests.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;/p&gt;

&lt;h2&gt;
  
  
  5,000 Requests per second
&lt;/h2&gt;

&lt;p&gt;Final round, let's see how both languages deal with &lt;strong&gt;severe pressure&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency x Seconds
&lt;/h3&gt;

&lt;p&gt;Go was able to keep a very low, stable latency until ~20 seconds, when it started to struggle, with peaks of 5s, which is very slow.&lt;/p&gt;

&lt;p&gt;Node presented problems throughout the entire test, responding with latencies between 5-10s.&lt;/p&gt;

&lt;p&gt;It's nice to notice that, even under such a stressful load, Go stayed mostly stable throughout.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ozcs96nf6r7lfyii25e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ozcs96nf6r7lfyii25e.png" alt="5,000 requests/s latency" width="768" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  File Descriptors Count
&lt;/h3&gt;

&lt;p&gt;Once again, we can see how stable Golang's open File Descriptors are versus Node.js's unmanaged, linearly growing ones.&lt;/p&gt;

&lt;p&gt;I believe that this is directly related to the Go Network Poller that reuses (and maybe pre-creates) File Descriptors instead of creating one at the time each request arrives.&lt;/p&gt;

&lt;p&gt;I wonder if Node could benefit from such an approach; I'll definitely check this out 😅&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8dkwwurh0kyazz1g5cf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8dkwwurh0kyazz1g5cf.png" alt="5,000 requests/s Go vs Node FD Count" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Threads Count
&lt;/h3&gt;

&lt;p&gt;In this chart, we can see that Node started with 11 OS Threads and jumped to 15 once connections started arriving, while Golang kept 4 OS threads for most of the test, increasing to 5 at the end.&lt;/p&gt;

&lt;p&gt;Go's strategy seems to be more stable under heavy loads.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0vxstqc375vwutj4hjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0vxstqc375vwutj4hjv.png" alt="5,000 requests/ Go vs Node Threads Count" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  RAM
&lt;/h3&gt;

&lt;p&gt;Node.js showed a linear increase in RAM usage, whereas Go's increase was step-like, similar to climbing a ladder. &lt;/p&gt;

&lt;p&gt;This pattern in Go is due to its runtime actively managing resources and setting limits for &lt;em&gt;goroutines&lt;/em&gt;, &lt;em&gt;OS threads&lt;/em&gt;, and &lt;em&gt;open file descriptors&lt;/em&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdopjnu5xr7l5y73iot46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdopjnu5xr7l5y73iot46.png" alt="5,000 requests/s Go vs Node RAM Usage" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU
&lt;/h3&gt;

&lt;p&gt;The CPU usage pattern is very similar for both languages, suggesting that CPU scheduling is largely outside the language's control and delegated to the OS.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu34devlxxqppadd2bgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqu34devlxxqppadd2bgp.png" alt="5,000 requests/s Go vs Node CPU Usage" width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Overall
&lt;/h3&gt;

&lt;p&gt;Go excels again with a higher Success Rate and lower p99, average, min, and max latency. &lt;/p&gt;

&lt;p&gt;Given that &lt;strong&gt;Go is a compiled language&lt;/strong&gt;, and Node.js (JavaScript) is interpreted, &lt;strong&gt;this outcome is expected&lt;/strong&gt;. &lt;br&gt;
Compiled languages typically have fewer steps before executing machine code.&lt;/p&gt;

&lt;p&gt;Despite its inherent challenges, Node.js managed to successfully process 89.38% of the requests.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuak8qrtn99aleduz7f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffuak8qrtn99aleduz7f5.png" alt="Go vs Node 5,000 requests/s overal" width="800" height="961"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Considerations
&lt;/h2&gt;

&lt;p&gt;Thank you for taking the time to read this blog post 🙏 &lt;/p&gt;

&lt;p&gt;It's no surprise that Go, a compiled language focused on concurrency and parallelism by design, came out on top. Still, it was interesting to see how it all played out.&lt;/p&gt;

&lt;p&gt;It was cool to see how Go and Node.js handle tasks differently and how that impacts the computer's resources. &lt;/p&gt;

&lt;p&gt;I've summed up the key points below.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open File Descriptor Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Go&lt;/strong&gt;: Demonstrates a strategy of pre-allocation and reuse for File Descriptors, thanks to its intelligent network poller and resource management. This approach contributes to efficient handling and scalability under heavy network loads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js&lt;/strong&gt;: Shows a dynamic, maybe unmanaged pattern in File Descriptor usage, reflecting its approach to handling server connections and opening FDs one by one.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Thread Management and Node.js
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Go&lt;/strong&gt;: Maintains a stable, &lt;strong&gt;low&lt;/strong&gt; OS thread count, highlighting the efficiency of its runtime scheduler in optimizing thread usage, especially under heavy stress 🤯.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Node.js&lt;/strong&gt;: Contrary to popular belief, Node.js is &lt;strong&gt;not&lt;/strong&gt; just a single thread; it uses multiple threads for tasks like DNS lookups, Garbage Collection (hi, V8), and blocking async I/O ops.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>javascript</category>
      <category>go</category>
      <category>webdev</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Inside Node.js: Exploring Asynchronous I/O</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Mon, 11 Dec 2023 18:22:39 +0000</pubDate>
      <link>https://dev.to/ocodista/inside-nodejs-exploring-asynchronous-io-4bg1</link>
      <guid>https://dev.to/ocodista/inside-nodejs-exploring-asynchronous-io-4bg1</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;How Node Handles Asynchronous Code&lt;/li&gt;
&lt;li&gt;
Asynchronous Operations: What Are They?

&lt;ul&gt;
&lt;li&gt;Blocking vs Non-Blocking Asynchronous Operation&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Experiments with Blocking Functions&lt;/li&gt;

&lt;li&gt;Experiments with Non-Blocking Functions&lt;/li&gt;

&lt;li&gt;

Non-Blocking Asynchronous Operations and OS

&lt;ul&gt;
&lt;li&gt;
Understanding File Descriptors

&lt;ul&gt;
&lt;li&gt;What is a FD?&lt;/li&gt;
&lt;li&gt;FD and Non-Blocking I/O&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Monitoring FDs with syscalls

&lt;ul&gt;
&lt;li&gt;Understanding select&lt;/li&gt;
&lt;li&gt;Epoll&lt;/li&gt;
&lt;li&gt;io_uring&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Recently, I've been studying asynchronous code execution in Node.js.&lt;/p&gt;

&lt;p&gt;I ended up learning (and writing) a lot, from an article about &lt;a href="https://dev.to/ocodista/javascript-event-loop-breaking-down-the-mystery-2c9f"&gt;how the Event Loop works&lt;/a&gt; to a Twitter thread explaining &lt;a href="https://twitter.com/ocodista/status/1696684507917631841?s=20" rel="noopener noreferrer"&gt;who waits for the http request to finish&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want, you can also access the mind map I created before writing this post by clicking &lt;a href="https://whimsical.com/node-async-CzpmdNE7HMzsp5uPDeJpve" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, let's get to the point!&lt;/p&gt;

&lt;h2&gt;
  
  
  How Node Handles Asynchronous Code
&lt;/h2&gt;

&lt;p&gt;In Node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All JavaScript code is executed in the main thread.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;libuv&lt;/strong&gt; library is responsible for handling I/O (In/Out) operations, i.e., &lt;strong&gt;asynchronous&lt;/strong&gt; operations.&lt;/li&gt;
&lt;li&gt;By default, libuv provides 4 &lt;em&gt;worker threads&lt;/em&gt; for Node.js.

&lt;ul&gt;
&lt;li&gt;These threads will only be used when &lt;strong&gt;blocking&lt;/strong&gt; asynchronous operations are performed, in which case they will block one of the libuv threads (which are OS threads) instead of the main Node execution thread.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;There are both blocking and non-blocking operations, and most of the current asynchronous operations are &lt;strong&gt;non-blocking&lt;/strong&gt;.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Asynchronous Operations: What Are They?
&lt;/h2&gt;

&lt;p&gt;Generally, there's confusion when it comes to asynchronous operations.&lt;/p&gt;

&lt;p&gt;Many believe it means something happens in the background, in parallel, at the same time, or in another thread.&lt;/p&gt;

&lt;p&gt;In reality, an asynchronous operation is an operation that won't return now, but later.&lt;/p&gt;

&lt;p&gt;They depend on communication with external agents, and these agents might not have an immediate response to your request.&lt;/p&gt;

&lt;p&gt;We're talking about I/O (input/output) operations.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reading a file&lt;/strong&gt;: data &lt;em&gt;leaves&lt;/em&gt; the disk and &lt;em&gt;enters&lt;/em&gt; the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writing to a file&lt;/strong&gt;: data &lt;em&gt;leaves&lt;/em&gt; the application and &lt;em&gt;enters&lt;/em&gt; the disk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Operations&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;HTTP requests, for example.&lt;/li&gt;
&lt;li&gt;The application &lt;strong&gt;sends&lt;/strong&gt; an &lt;em&gt;http request&lt;/em&gt; to some server and &lt;strong&gt;receives&lt;/strong&gt; the data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4ve8g8hmkyilvezttqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4ve8g8hmkyilvezttqw.png" alt="Node calls libuv, libuv calls syscalls, event loop runs on the main thread" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Blocking vs Non-Blocking Asynchronous Operation
&lt;/h3&gt;

&lt;p&gt;In the modern world, &lt;del&gt;people don't talk to each other&lt;/del&gt; most asynchronous operations are non-blocking.&lt;/p&gt;

&lt;p&gt;But wait, does that mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;libuv provides 4 threads (&lt;em&gt;by default&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;they "take care of" the &lt;strong&gt;blocking&lt;/strong&gt; I/O operations.&lt;/li&gt;
&lt;li&gt;the vast majority of operations are &lt;strong&gt;non-blocking&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Seems kind of useless, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fch6qfruusrpzgmm8cyls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fch6qfruusrpzgmm8cyls.png" alt="Libuv worker threads for blocking asynchronous ops" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this question in mind, I decided to do some experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiments with Blocking Functions
&lt;/h2&gt;

&lt;p&gt;First, I tested an asynchronous CPU-intensive function, one of the &lt;strong&gt;rare&lt;/strong&gt; asynchronous &lt;strong&gt;blocking&lt;/strong&gt; functions in Node.&lt;/p&gt;

&lt;p&gt;The used code was as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// index.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pbkdf2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TEN_MILLIONS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;e7&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// CPU-intensive asynchronous function&lt;/span&gt;
&lt;span class="c1"&gt;// Goal: Block a worker thread&lt;/span&gt;
&lt;span class="c1"&gt;// Original goal: Generate a passphrase&lt;/span&gt;
&lt;span class="c1"&gt;// The third parameter is the number of iterations&lt;/span&gt;
&lt;span class="c1"&gt;// In this example, we are passing 10 million&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;runSlowCryptoFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;pbkdf2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;secret&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;

salt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;TEN_MILLIONS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sha512&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Here we want to know how many worker threads libuv will use&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Thread pool size is &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runAsyncBlockingOperations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runAsyncBlockingOperation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runIndex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;runSlowCryptoFunction&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Finished run &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;runIndex&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; in &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nf"&gt;runAsyncBlockingOperation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;runAsyncBlockingOperation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nf"&gt;runAsyncBlockingOperations&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To validate the operation, I ran the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UV_THREADPOOL_SIZE: It's an environment variable that determines how many libuv &lt;em&gt;worker threads&lt;/em&gt; Node will start.&lt;/li&gt;
&lt;/ul&gt;
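As a side note, when the variable is not set, `process.env.UV_THREADPOOL_SIZE` is `undefined` on the JavaScript side (the log above would print `undefined`), even though libuv falls back to a default of 4 worker threads. A small sketch of reading the effective value:

```javascript
// Reads UV_THREADPOOL_SIZE from the environment, falling back to
// libuv's documented default of 4 when the variable is not set.
// Number(undefined) is NaN, which is falsy, so the fallback kicks in.
const poolSize = Number(process.env.UV_THREADPOOL_SIZE) || 4;

console.log(`Effective libuv thread pool size: ${poolSize}`);
```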

&lt;p&gt;The result was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Thread pool size is 1
Finished run 1 &lt;span class="k"&gt;in &lt;/span&gt;3.063s
Finished run 2 &lt;span class="k"&gt;in &lt;/span&gt;6.094s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is, with only one thread, each execution took ~3 seconds, and they ran sequentially, one after the other.&lt;/p&gt;

&lt;p&gt;Now, I decided to do the following test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Thread pool size is 2
Finished run 2 &lt;span class="k"&gt;in &lt;/span&gt;3.225s
Finished run 1 &lt;span class="k"&gt;in &lt;/span&gt;3.243s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With that, it's confirmed that libuv's &lt;em&gt;worker threads&lt;/em&gt; in Node.js handle blocking asynchronous operations.&lt;/p&gt;

&lt;p&gt;But what about the &lt;strong&gt;non-blocking&lt;/strong&gt; ones? If no one waits for them, how do they work?&lt;/p&gt;

&lt;p&gt;I decided to write another function to test it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Experiments with Non-Blocking Functions
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;fetch&lt;/em&gt; function (native to Node) performs a non-blocking asynchronous network operation.&lt;/p&gt;

&lt;p&gt;With the following code, I redid the test of the first experiment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//non-blocking.js&lt;/span&gt;
&lt;span class="c1"&gt;// Here we want to know how many worker threads libuv will use&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Thread pool size is &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://www.google.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Fetch 1 returned in &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://www.google.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Fetch 2 returned in &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I executed the script with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 node non-blocking.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Thread pool size is 1
Fetch 1 returned &lt;span class="k"&gt;in &lt;/span&gt;0.391s
Fetch 2 returned &lt;span class="k"&gt;in &lt;/span&gt;0.396s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So, I decided to test with two threads, to see if anything changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 node non-blocking.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Thread pool size is 2
Fetch 2 returned in 0.402s
Fetch 1 returned in 0.407s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, I observed that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Having more threads running in libuv does not help in the execution of non-blocking asynchronous operations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But then, I questioned again, if no libuv thread is "waiting" for the request to return, how does this work?&lt;/p&gt;

&lt;p&gt;My friend, that's when I fell into a gigantic rabbit hole of research about the inner workings of:&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-Blocking Asynchronous Operations and OS
&lt;/h3&gt;

&lt;p&gt;Operating systems have evolved quite a bit over the years to deal with non-blocking I/O operations. This is done through &lt;em&gt;syscalls&lt;/em&gt;, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;select/poll&lt;/strong&gt;: These are the traditional ways of dealing with non-blocking I/O and are generally considered less efficient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IOCP&lt;/strong&gt;: Used in Windows for asynchronous operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kqueue&lt;/strong&gt;: A method for macOS and BSD.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;epoll&lt;/strong&gt;: Efficient and used in Linux. Unlike select, it is not limited by the number of FDs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;io_uring&lt;/strong&gt;: An evolution of epoll, bringing performance improvements and a queue-based approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To understand better, we need to dive into the details of non-blocking I/O operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding File Descriptors
&lt;/h3&gt;

&lt;p&gt;To explain non-blocking I/O, I need to quickly explain the concept of File Descriptors (FDs).&lt;/p&gt;

&lt;h4&gt;
  
  
  What is an FD?
&lt;/h4&gt;

&lt;p&gt;It's a numerical index into a table maintained by the &lt;em&gt;kernel&lt;/em&gt;, where each record holds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource type (such as file, socket, device).&lt;/li&gt;
&lt;li&gt;Current position of the file pointer.&lt;/li&gt;
&lt;li&gt;Permissions and flags, defining modes like read or write.&lt;/li&gt;
&lt;li&gt;Reference to the resource's data structure in the kernel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They are fundamental for I/O management.&lt;/p&gt;

&lt;h4&gt;
  
  
  FD and Non-Blocking I/O
&lt;/h4&gt;

&lt;p&gt;When initiating a non-blocking I/O operation, Linux associates an FD with it without interrupting (blocking) the process's execution.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;Imagine you want to read the contents of a very large file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blocking approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The process calls the &lt;em&gt;read file&lt;/em&gt; function.&lt;/li&gt;
&lt;li&gt;The process waits while the OS reads the file's content.

&lt;ul&gt;
&lt;li&gt;The process is &lt;strong&gt;blocked&lt;/strong&gt; until the OS finishes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Non-blocking approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The process requests &lt;em&gt;asynchronous&lt;/em&gt; read.&lt;/li&gt;
&lt;li&gt;The OS starts reading the content and returns an FD to the process.&lt;/li&gt;
&lt;li&gt;The process isn't blocked and can do other things.&lt;/li&gt;
&lt;li&gt;Periodically, the process calls a &lt;em&gt;syscall&lt;/em&gt; to check if the reading is finished.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The process decides the mode of reading through the &lt;a href="https://man7.org/linux/man-pages/man2/fcntl.2.html" rel="noopener noreferrer"&gt;fcntl&lt;/a&gt; function with the &lt;em&gt;O_NONBLOCK&lt;/em&gt; flag, but this is secondary at the moment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring FDs with syscalls
&lt;/h3&gt;

&lt;p&gt;To efficiently observe multiple FDs, OSs rely on some &lt;em&gt;syscalls&lt;/em&gt;:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Understanding select:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Receives a list of FDs.&lt;/li&gt;
&lt;li&gt;Blocks the process until one or more FDs are ready for the specified operation (read, write, exception).&lt;/li&gt;
&lt;li&gt;After the syscall returns, the program can iterate over the FDs to identify those ready for I/O.&lt;/li&gt;
&lt;li&gt;Uses a search algorithm that is O(n).

&lt;ul&gt;
&lt;li&gt;Inefficient, slow, tiresome with many FDs.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Epoll&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;An evolution of &lt;em&gt;select&lt;/em&gt;, it stores the FDs in a self-balancing (red-black) tree and returns only the FDs that are ready, so the cost of waiting no longer grows with the total number of monitored FDs.&lt;/p&gt;

&lt;p&gt;Pretty fancy!&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an epoll instance with &lt;code&gt;epoll_create&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Associate the FDs with this instance using &lt;code&gt;epoll_ctl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;epoll_wait&lt;/code&gt; to wait for activity on any of the FDs.&lt;/li&gt;
&lt;li&gt;Has a timeout parameter.

&lt;ul&gt;
&lt;li&gt;Extremely important and well utilized by the libuv &lt;em&gt;Event Loop&lt;/em&gt;!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysxrok3wm49fk52xlgwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysxrok3wm49fk52xlgwv.png" alt="Comparison of time between select and epoll" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Io_uring&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This is a game-changer.&lt;/p&gt;

&lt;p&gt;While &lt;strong&gt;epoll&lt;/strong&gt; significantly improved the performance of searching and handling FDs, io_uring rethinks the entire nature of I/O operations.&lt;/p&gt;

&lt;p&gt;And so, after understanding how it works, I wondered why nobody thought of this before!&lt;/p&gt;

&lt;p&gt;Recapping:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;select&lt;/strong&gt;: Receives a list of FDs, stores them sequentially (like an array), and checks each one for changes or activity, with complexity O(n).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;epoll&lt;/strong&gt;: Receives a list of FDs, stores them in a self-balancing tree, does not check each one individually, and does the same job as &lt;strong&gt;select&lt;/strong&gt; far more efficiently.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Historically, the process was responsible for iterating over the returned FDs to find out which had finished.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;io_uring&lt;/strong&gt;: What? Return a list? Do polling? Are you kidding? Ever heard of &lt;strong&gt;queues&lt;/strong&gt;?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It works using two main queues, in the form of rings (hence the &lt;strong&gt;ring&lt;/strong&gt; in io_uring).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One for submitted operations (the submission queue).&lt;/li&gt;
&lt;li&gt;One for completed operations (the completion queue).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple, right?&lt;/p&gt;

&lt;p&gt;The process, when starting an I/O operation, &lt;strong&gt;queues&lt;/strong&gt; the operation using the &lt;em&gt;io_uring&lt;/em&gt; structure.&lt;/p&gt;

&lt;p&gt;Then, instead of calling &lt;em&gt;select&lt;/em&gt; or &lt;em&gt;epoll&lt;/em&gt; and iterating over the returned FDs, the process can choose to be notified when an I/O operation is completed.&lt;/p&gt;

&lt;p&gt;Polling? No. Queues!&lt;/p&gt;
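The two-ring idea can be sketched with a toy model in JavaScript. To be clear, this is NOT the real io_uring API (which lives in the kernel and is reached via syscalls over shared memory rings); it only illustrates the submission/completion flow:

```javascript
// Toy model of the io_uring idea: the process pushes work into a
// submission queue and later drains a completion queue, instead of
// polling every FD for readiness.
const submissionQueue = [];
const completionQueue = [];

// The process queues an operation and moves on.
function submit(operation) {
  submissionQueue.push(operation);
}

// Stand-in for the kernel: drains submissions and posts completions.
function kernelTick() {
  while (submissionQueue.length > 0) {
    const op = submissionQueue.shift();
    completionQueue.push({ op, result: `done: ${op}` });
  }
}

submit("read file A");
submit("write socket B");
kernelTick();

// The process only looks at finished work; no iteration over all FDs.
while (completionQueue.length > 0) {
  console.log(completionQueue.shift().result);
}
```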

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;With this knowledge, I now understand precisely the path Node takes to perform an asynchronous operation.&lt;/p&gt;

&lt;p&gt;If it's blocking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executes the asynchronous operation using libuv.&lt;/li&gt;
&lt;li&gt;Adds it to a libuv &lt;em&gt;worker thread&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;worker thread&lt;/em&gt; is blocked, waiting for the operation to finish.&lt;/li&gt;
&lt;li&gt;Once finished, the thread is responsible for placing the result in the &lt;em&gt;Event Loop&lt;/em&gt; in the MacroTasks queue.&lt;/li&gt;
&lt;li&gt;The callback is executed on the main thread.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it's non-blocking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executes the asynchronous operation using libuv.&lt;/li&gt;
&lt;li&gt;Libuv performs a non-blocking I/O &lt;em&gt;syscall&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Libuv then waits for the FDs to become ready (via &lt;em&gt;epoll&lt;/em&gt; on Linux).&lt;/li&gt;
&lt;li&gt;From Node.js 20.3.0, uses io_uring where available.

&lt;ul&gt;
&lt;li&gt;Queue-based approach for submission/completed operations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Upon receiving the event of operation completion:

&lt;ul&gt;
&lt;li&gt;libuv takes care of executing the callback on the main thread.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>linux</category>
      <category>typescript</category>
    </item>
    <item>
      <title>A Deep Dive into Green Threads and Node.js</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Sun, 10 Dec 2023 13:01:51 +0000</pubDate>
      <link>https://dev.to/ocodista/a-deep-dive-into-green-threads-and-nodejs-15c3</link>
      <guid>https://dev.to/ocodista/a-deep-dive-into-green-threads-and-nodejs-15c3</guid>
      <description>&lt;p&gt;Hey, how's it going?&lt;/p&gt;

&lt;p&gt;Recently, I've been studying concurrency and parallelism and came across a &lt;strong&gt;nice&lt;/strong&gt; &lt;em&gt;thing&lt;/em&gt; called Green Threads.&lt;/p&gt;

&lt;p&gt;In this post, I'll explain what they are and how I tried implementing them in Node.js and failed miserably. Hope you enjoy it!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prelude&lt;/li&gt;
&lt;li&gt;
Processes

&lt;ul&gt;
&lt;li&gt;Starting a Process&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Threads&lt;/li&gt;

&lt;li&gt;

Green/Virtual Threads

&lt;ul&gt;
&lt;li&gt;Implementing Green Threads&lt;/li&gt;
&lt;li&gt;Preemptive Scheduler&lt;/li&gt;
&lt;li&gt;Cooperative Scheduler&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prelude
&lt;/h2&gt;

&lt;p&gt;Before explaining what &lt;em&gt;green&lt;/em&gt; threads are, we must first understand what a &lt;strong&gt;thread&lt;/strong&gt; is.&lt;/p&gt;

&lt;p&gt;And to be honest, to explain that, I'll first need to talk about &lt;strong&gt;Processes&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Processes
&lt;/h3&gt;

&lt;p&gt;What is a process?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A process is the live version of a computer program. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As if the code you write were a recipe, and the process were its &lt;strong&gt;execution&lt;/strong&gt;, the act of making the &lt;em&gt;recipe&lt;/em&gt; real.&lt;/p&gt;

&lt;p&gt;So... What is a process?&lt;/p&gt;

&lt;p&gt;It's the execution of a program. Your browser, text editor, image preview, and file explorer, they're all processes. More often than not, they start not one but multiple processes.&lt;/p&gt;

&lt;h4&gt;
  
  
  How to start a process?
&lt;/h4&gt;

&lt;p&gt;If you're a Windows user, by double-clicking some application or executing them on the CMD (Command Prompt) or Windows Terminal.&lt;/p&gt;

&lt;p&gt;If you're a Linux/macOS user, you can also use the slow-but-common &lt;em&gt;double-click&lt;/em&gt; to open processes, or you can pretend to be a TV-series hacker and use the terminal.&lt;/p&gt;

&lt;p&gt;Under the hood, whenever you start a process:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The OS finds an unused section of main memory that is large enough for the application. &lt;br&gt;
The OS makes a copy of the application and its data in that section of the main memory. &lt;br&gt;
The OS sets up resources for the application. &lt;br&gt;
Finally, the OS starts the application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Good to know&lt;/strong&gt;: A process consumes &lt;em&gt;memory&lt;/em&gt; and has &lt;em&gt;instructions&lt;/em&gt; that'll be executed by the, oh no, &lt;strong&gt;the Processor&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's right, the Processor (aka CPU) processes instructions from a process and that requires memory. &lt;/p&gt;

&lt;p&gt;Each process has its own space in memory. &lt;/p&gt;

&lt;p&gt;A process cannot access another process's memory, though processes can communicate with each other by passing messages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Threads
&lt;/h3&gt;

&lt;p&gt;What about threads?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Threads are lightweight processes. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A process can start many threads, and a thread shares the parent-process memory.&lt;/p&gt;

&lt;p&gt;Meaning that two threads started by the same process could potentially access the same memory.&lt;/p&gt;

&lt;p&gt;OS Threads have a cost: &lt;strong&gt;memory&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They're lighter than a new process but still require memory to be created. It may not seem like much, but take a look at this chart:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzxf1p8l0qvb4g1v1nmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzxf1p8l0qvb4g1v1nmv.png" alt="Comparison Between Apache vs Nginx" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can see that one approach (Apache) consumes a lot more memory than another (Nginx). &lt;/p&gt;

&lt;p&gt;This slide was presented at &lt;a href="https://www.youtube.com/watch?v=ztspvPYybIY" rel="noopener noreferrer"&gt;NodeJS's first talk ever&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I &lt;strong&gt;strongly recommend&lt;/strong&gt; you watch it, it's awesome!&lt;/p&gt;

&lt;p&gt;Apache consumes more memory because it uses a one-thread-per-request approach while Nginx uses a &lt;em&gt;non-blocking&lt;/em&gt; event-loop approach to handle new requests, i.e. &lt;strong&gt;no threads&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Ok then, threads are lightweight processes, but they still consume memory, and when you need to scale (to many threads) they start to get costly.&lt;/p&gt;

&lt;p&gt;How to solve that?&lt;/p&gt;

&lt;p&gt;Well, if you don't want to use an event loop with non-blocking I/O, there is a way: &lt;strong&gt;Green Threads&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Green/Virtual Threads
&lt;/h2&gt;

&lt;p&gt;Green threads are threads, but not OS threads.&lt;/p&gt;

&lt;p&gt;They're threads managed by the application or runtime, without creating one OS thread per virtual thread.&lt;/p&gt;

&lt;p&gt;Therefore, they consume less memory, as a single OS thread can run multiple &lt;em&gt;virtual&lt;/em&gt; threads.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to create a Green Thread?
&lt;/h3&gt;

&lt;p&gt;Well, you need to replicate what the OS does inside your application/runtime, meaning that you'll need to create an orchestrator (or &lt;a href="https://www.geeksforgeeks.org/process-schedulers-in-operating-system/" rel="noopener noreferrer"&gt;&lt;em&gt;scheduler&lt;/em&gt;&lt;/a&gt;) to switch between your &lt;em&gt;virtual threads&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;There are two types of schedulers: Preemptive and Cooperative.&lt;/p&gt;

&lt;p&gt;Cooperative: It will never block any thread/process. It's the job of the process/thread to give the control back to the scheduler.&lt;/p&gt;

&lt;p&gt;Preemptive: It will handle blocking and switching between threads/processes, it's not the job of the application to know when to return the control.&lt;/p&gt;

&lt;p&gt;Cooperative schedulers are harder for the end user, who needs to properly handle the stops and switches.&lt;/p&gt;

&lt;p&gt;Preemptive schedulers are easier to use but harder to create, as it's the job of the &lt;strong&gt;scheduler&lt;/strong&gt; to persist the state of the threads between switches and ensure consistency.&lt;/p&gt;
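A cooperative scheduler is actually easy to sketch in JavaScript with generator functions, where each `yield` is the point at which the virtual thread voluntarily hands control back to the scheduler (a toy illustration, not how Go or Node does it):

```javascript
// Minimal cooperative scheduler sketch: generator functions play the
// role of virtual threads, and every `yield` is where the "thread"
// voluntarily gives control back to the scheduler.
function* task(name) {
  for (const step of [1, 2, 3]) {
    console.log(`${name}: step ${step}`);
    yield; // cooperate: hand control back
  }
}

// Round-robin over the virtual threads until all of them are done.
function runCooperatively(generators) {
  const threads = generators.map((g) => g());
  let pending = threads.length;
  while (pending > 0) {
    pending = 0;
    for (const thread of threads) {
      if (!thread.next().done) pending += 1;
    }
  }
}

// The two tasks interleave: A step 1, B step 1, A step 2, B step 2, ...
runCooperatively([() => task("A"), () => task("B")]);
```

Note that a long-running step between `yield`s would still starve the other virtual threads; that is exactly the weakness a preemptive scheduler removes.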

&lt;p&gt;Golang implemented &lt;strong&gt;goroutines&lt;/strong&gt; in its runtime by creating a &lt;a href="https://go.dev/src/runtime/preempt.go" rel="noopener noreferrer"&gt;&lt;em&gt;preemptive&lt;/em&gt; scheduler&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;What about Node.js? We're getting there, hang tight!&lt;/p&gt;

&lt;h4&gt;
  
  
  Preemptive Scheduler
&lt;/h4&gt;

&lt;p&gt;Ok, let's try to create a preemptive scheduler in Node.js.&lt;/p&gt;

&lt;p&gt;First, we need a way to add new &lt;em&gt;virtual threads&lt;/em&gt; to be called, that's easy!&lt;/p&gt;

&lt;p&gt;Let's create a &lt;em&gt;class&lt;/em&gt; (I know, JS devs usually hate classes, but I think they're useful sometimes).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PreemptiveScheduler&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;virtualThreads&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;virtualThreads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;addThread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;func&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;virtualThreads&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;func&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;longRunningTask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Started running task: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="nx"&gt;_000_000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Finished running task: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;timeEnd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scheduler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PreemptiveScheduler&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addThread&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;longRunningTask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="nx"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addThread&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;longRunningTask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we need a way to start the threads but, &lt;strong&gt;more importantly&lt;/strong&gt;, a way to &lt;strong&gt;stop&lt;/strong&gt; the running thread and switch to another one.&lt;/p&gt;

&lt;p&gt;And that's where JavaScript (Node.js) cannot help us.&lt;/p&gt;

&lt;p&gt;We could add an interval of, say, 10ms to switch between the functions in the &lt;code&gt;threads&lt;/code&gt; array, &lt;strong&gt;but&lt;/strong&gt; once a function has started, there is no way to stop it.&lt;/p&gt;

&lt;p&gt;JavaScript does not provide this functionality.&lt;/p&gt;

&lt;p&gt;This means it is impossible to create a preemptive scheduler in JavaScript, as that would require being able to &lt;strong&gt;stop&lt;/strong&gt; a running function.&lt;/p&gt;
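To see this concretely, here is a minimal sketch (runnable with Node): a timer scheduled for 10ms cannot fire while a synchronous busy loop holds the thread, so nothing external can preempt a running function.

```javascript
// Sketch: once a synchronous function starts, nothing can interrupt it.
let timerFired = false;
setTimeout(() => { timerFired = true; }, 10); // asks to run in 10ms

const start = Date.now();
for (let i = 0; i !== 100_000_000; i++); // busy loop, cannot be preempted

// Even if the loop took far longer than 10ms, the timer has not run yet.
console.log(`Loop took ${Date.now() - start}ms, timerFired = ${timerFired}`);
```

The callback only runs after the loop returns control to the Event Loop.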

&lt;h4&gt;
  
  
  Cooperative Scheduler
&lt;/h4&gt;

&lt;p&gt;While we can't create a &lt;em&gt;preemptive&lt;/em&gt; scheduler in Node.js or JavaScript, because there is no way to stop a function's execution from outside of it, there is a way to &lt;em&gt;cooperatively&lt;/em&gt; pause a function: &lt;strong&gt;Generator Functions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The Node.js Event Loop itself is considered a cooperative scheduler for multitasking: through callbacks and promises, the user defines when control returns to the scheduler (the Event Loop).&lt;/p&gt;
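As a small illustration of that cooperation (the task names and chunk counts are invented for the example), the sketch below splits two tasks into chunks and awaits `setImmediate` between chunks, handing control back to the Event Loop so the tasks interleave:

```javascript
const order = [];

// Each task does a chunk of work, then yields to the Event Loop via setImmediate.
async function chunkedTask(name, chunks) {
  for (let i = 0; i !== chunks; i++) {
    order.push(`${name}:${i}`);
    await new Promise((resolve) => setImmediate(resolve)); // give control back
  }
}

// The two tasks interleave instead of running back-to-back.
Promise.all([chunkedTask("A", 2), chunkedTask("B", 2)]).then(() => {
  console.log(order.join(" "));
});
```

Each task runs synchronously only until its first `await`; everything after that is scheduled by the Event Loop.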

&lt;p&gt;Generator Functions in JavaScript allow us to return control to the caller, which can then call &lt;code&gt;next&lt;/code&gt; whenever it wants to resume the task.&lt;/p&gt;
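Before the full scheduler, a minimal sketch of that pause/resume handshake:

```javascript
const log = [];

// A generator function pauses at each yield and waits for the caller.
function* pausable() {
  log.push("step 1");
  yield; // control returns to the caller here
  log.push("step 2");
}

const task = pausable();
task.next(); // runs until the first yield: only "step 1" has executed
task.next(); // the caller decides when to resume: "step 2" runs now
console.log(log.join(", "));
```

The caller holds the iterator and drives the task forward one `yield` at a time, which is exactly the control a cooperative scheduler needs.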

&lt;p&gt;Let's take a look at this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CooperativeScheduler&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;running&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;completionResolver&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;running&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;completionResolver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;addTask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;taskGenerator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;taskGenerator&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;runNextTask&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;running&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;completionResolver&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nf"&gt;completionResolver&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;currentTask&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shift&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// Get first and walk right&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;currentTask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;next&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;//Execute next step&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Push next execution to the end of the queue&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;currentTask&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;setImmediate&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runNextTask&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;running&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;running&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;runNextTask&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;waitForCompletion&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;taskQueue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;running&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="c1"&gt;// If no tasks are running or pending, resolve immediately&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="err"&gt;#&lt;/span&gt;&lt;span class="nx"&gt;completionResolver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nf"&gt;cooperativeFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Task &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; started`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;yield&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Task &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; is processing...`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;yield&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Task &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;taskId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; finished!`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Using the Cooperative Scheduler&lt;/span&gt;
&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;scheduler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CooperativeScheduler&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;times&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nx"&gt;times&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addTask&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;cooperativeFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;scheduler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitForCompletion&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;})();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I start by creating &lt;code&gt;function *cooperativeFunction(taskId)&lt;/code&gt;, a generator function with two &lt;code&gt;yield&lt;/code&gt; statements.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;yield&lt;/code&gt; operator pauses the function and returns control to the caller.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CooperativeScheduler&lt;/code&gt; class provides the mechanism: we can add tasks, start them all, and wait for their completion. I then add 10 example tasks.&lt;/p&gt;

&lt;p&gt;The scheduler &lt;em&gt;cooperatively&lt;/em&gt; pauses and switches between tasks.&lt;/p&gt;

&lt;p&gt;This is the main result after running the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node cooperative.js

Task 1 started
Task 2 started
Task 3 started
Task 4 started
Task 5 started
Task 6 started
Task 7 started
Task 8 started
Task 9 started
Task 10 started
Task 1 is processing...
Task 2 is processing...
Task 3 is processing...
Task 4 is processing...
Task 5 is processing...
Task 6 is processing...
Task 7 is processing...
Task 8 is processing...
Task 9 is processing...
Task 10 is processing...
Task 1 finished!
Task 2 finished!
Task 3 finished!
Task 4 finished!
Task 5 finished!
Task 6 finished!
Task 7 finished!
Task 8 finished!
Task 9 finished!
Task 10 finished!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's a wrap!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Throughout this blog post, we've navigated the intricate landscape of concurrency, shedding light on the distinctions between OS Threads and Virtual/Green Threads. We also delved into the realms of Preemptive and Cooperative Schedulers, exploring their unique characteristics and applications.&lt;/p&gt;

&lt;p&gt;This exploration not only highlighted the versatility and challenges of implementing different types of threading models in Node.js but also provided a glimpse into the broader world of concurrent programming.&lt;/p&gt;

&lt;p&gt;I hope this post has been enlightening and engaging, offering you valuable insights into the complexities and beauty of threading and concurrency. &lt;/p&gt;

&lt;p&gt;Thank you for joining me on this adventure!&lt;/p&gt;

</description>
      <category>concurrency</category>
      <category>javascript</category>
      <category>programming</category>
    </item>
    <item>
      <title>Under Pressure: Benchmarking Node.js on a Single-Core EC2</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Sat, 02 Dec 2023 23:48:51 +0000</pubDate>
      <link>https://dev.to/ocodista/under-pressure-benchmarking-nodejs-on-a-single-core-ec2-5ghe</link>
      <guid>https://dev.to/ocodista/under-pressure-benchmarking-nodejs-on-a-single-core-ec2-5ghe</guid>
      <description>&lt;p&gt;Hi!&lt;/p&gt;

&lt;p&gt;In this post, I'm going to &lt;em&gt;stress&lt;/em&gt; test a &lt;strong&gt;pure&lt;/strong&gt; Node.js 21.2.0 API (no framework!) to see how efficient the Event Loop is in a limited environment.&lt;/p&gt;

&lt;p&gt;I'm using AWS for hosting the servers (EC2) and database (RDS with Postgres).&lt;/p&gt;

&lt;p&gt;The main goal is to understand how many requests per second a simple Node API can handle on a single core, then identify the bottleneck and optimize it as much as possible.&lt;/p&gt;

&lt;p&gt;Let's dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS RDS running Postgres&lt;/li&gt;
&lt;li&gt;EC2 t2.small for the API&lt;/li&gt;
&lt;li&gt;EC2 t3.micro for the load tester&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Database Setup
&lt;/h2&gt;

&lt;p&gt;The database will consist of a single &lt;code&gt;users&lt;/code&gt; table created with the following SQL query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;email&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;255&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;TRUNCATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  API Design
&lt;/h2&gt;

&lt;p&gt;The API will have a single POST endpoint used to save a user to the Postgres database. I know there are a lot of JavaScript frameworks out there that would make development easier, but it's possible to handle the requests and responses with Node alone. &lt;/p&gt;

&lt;p&gt;To connect to the database, I chose the &lt;code&gt;pg&lt;/code&gt; library, as it is the most popular one; we'll start with it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Connection Pooling
&lt;/h3&gt;

&lt;p&gt;One thing that is important when connecting to a database is using a connection pool. Without a connection pool, the API needs to open/close a connection to the database at each request, which is extremely inefficient.&lt;/p&gt;

&lt;p&gt;A pool allows the API to reuse connections. Since we're planning to send a lot of concurrent requests to our API, it's &lt;strong&gt;crucial&lt;/strong&gt; to have one.&lt;/p&gt;
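To make the reuse concrete, here is a toy pool sketch (not the `pg` implementation; `ToyPool` and its API are invented for illustration) showing that the connection cost is paid once, not once per request:

```javascript
// Toy resource pool: hands out idle connections before opening new ones.
class ToyPool {
  constructor(connectFn) {
    this.connectFn = connectFn; // how to open a "connection" (expensive)
    this.idle = [];             // connections waiting to be reused
  }
  acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse an idle connection
    return this.connectFn(); // otherwise pay the cost of opening one
  }
  release(conn) {
    this.idle.push(conn); // hand it back for the next request
  }
}

let connectionsOpened = 0;
const pool = new ToyPool(() => ({ id: ++connectionsOpened }));

// 100 sequential "requests" end up sharing a single physical connection.
for (let i = 0; i !== 100; i++) {
  const conn = pool.acquire();
  pool.release(conn);
}
console.log(`Opened ${connectionsOpened} connection(s) for 100 requests`);
```

Without the pool, each request would pay the full open/close round trip to the database.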

&lt;p&gt;To check your Postgres database's connection limit, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SHOW&lt;/span&gt; &lt;span class="n"&gt;max_connections&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my case, I'm using an RDS running on a t3.micro database with these specs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrfmlonw8tugqluybbev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrfmlonw8tugqluybbev.png" alt="AWS RDS configs" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So this is the outcome of the query:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu9y3g6xranu8fxppa2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpu9y3g6xranu8fxppa2q.png" alt="Max Connections" width="306" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cool: with 81 as the maximum number of connections to our database, we know the upper bound we should not surpass.&lt;/p&gt;

&lt;p&gt;As the API will run on a single-core processor, it's not a good idea to have a high number of connections in the pool, as this would burden the processor with constant &lt;em&gt;context switching&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Let's start with 40.&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating the API
&lt;/h3&gt;

&lt;p&gt;We'll begin by initializing the project with &lt;code&gt;npm init&lt;/code&gt; and creating our &lt;code&gt;index.mjs&lt;/code&gt; file, using the &lt;code&gt;.mjs&lt;/code&gt; extension so I can use ECMAScript module syntax without any extra magic/parsing/loading.&lt;/p&gt;

&lt;p&gt;The first thing I'll do is add the &lt;a href="https://www.npmjs.com/package/pg" rel="noopener noreferrer"&gt;pg library&lt;/a&gt; with &lt;code&gt;npm add pg&lt;/code&gt;. I'm using &lt;code&gt;npm&lt;/code&gt; but you can use pnpm, yarn or any other node package manager you want.&lt;/p&gt;

&lt;p&gt;Then, let's start by creating our connection pool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;pg&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Required because pg lib uses CommonJS 🤢&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Pool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pg&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_HOST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_DATABASE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Limit is 81, let's start with 40&lt;/span&gt;
  &lt;span class="na"&gt;idleTimeoutMillis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// How much time before kicking out an idle client.&lt;/span&gt;
  &lt;span class="na"&gt;connectionTimeoutMillis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// How much time to disconnect a new client, we don't want to disconnect them for now.&lt;/span&gt;
  &lt;span class="na"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="cm"&gt;/* If you're running on AWS, you'll need to use:
  ssl: {
    rejectUnauthorized: false
  }
  */&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're using &lt;code&gt;process.env&lt;/code&gt; to access the environment variables, so create a &lt;code&gt;.env&lt;/code&gt; file at the project root and fill it with your Postgres information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;POSTGRES_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="nv"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;span class="nv"&gt;POSTGRES_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, let's create a function to persist our user on the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;queryText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;INSERT INTO users(email, password) VALUES($1, $2) RETURNING id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;queryText&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, let's create a Node HTTP server by importing the &lt;code&gt;node:http&lt;/code&gt; package and writing code to handle new requests, parse the body from string to JSON, query the database, and return 201, 400 or 500 in case of any errors. The final file looks like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// index.mjs&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;node:http&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;pg&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;pg&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Pool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;pg&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Pool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_HOST&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_USER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;POSTGRES_DATABASE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;max&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;idleTimeoutMillis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;connectionTimeoutMillis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="cm"&gt;/* If you're running on AWS, you'll need to use:
  ssl: {
    rejectUnauthorized: false
  }
  */&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;createUser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;queryText&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;INSERT INTO users(email, password) VALUES($1, $2) RETURNING id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;queryText&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;getRequestBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()));&lt;/span&gt;
    &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;end&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sendResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Length&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;byteLength&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;writeHead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Connection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;keep-alive&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Default to keep-alive for persistent connections&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Cache-Control&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;no-store&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// No caching for user creation&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getRequestBody&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;createUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Location&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`/user/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;responseBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;User created&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="nf"&gt;sendResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;201&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;responseBody&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Connection&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;close&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;responseBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;statusCode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;SyntaxError&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nf"&gt;sendResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;responseBody&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text/plain&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;sendResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Not Found!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Server running on http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
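&lt;p&gt;The error branch of the handler relies on &lt;code&gt;JSON.parse&lt;/code&gt; throwing a &lt;code&gt;SyntaxError&lt;/code&gt; for malformed bodies; that's what lets it distinguish a client error (400) from a server error (500). A minimal sketch of that mapping:&lt;/p&gt;

```javascript
// Maps an error to an HTTP status code, mirroring the handler's catch block:
// malformed JSON (SyntaxError) is the client's fault, anything else is ours.
const statusFor = (error) => (error instanceof SyntaxError ? 400 : 500);

try {
  JSON.parse("not json"); // throws SyntaxError
} catch (error) {
  console.log(statusFor(error)); // 400
}
```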



&lt;p&gt;Now, after running &lt;code&gt;npm install&lt;/code&gt;, you can run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node &lt;span class="nt"&gt;--env-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.env index.mjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to start the application. You should see this in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52y2255wsh7zj4g0xsdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52y2255wsh7zj4g0xsdi.png" alt="Server is running" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congrats! We've built a simple Node API with one endpoint that connects to Postgres through a connection pool and inserts a new user into the &lt;code&gt;users&lt;/code&gt; table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the API to an EC2
&lt;/h2&gt;

&lt;p&gt;First, create an AWS account and go to EC2 &amp;gt; Instances &amp;gt; Launch an Instance.&lt;/p&gt;

&lt;p&gt;Then, create an Ubuntu 64-bit (x86) t2.micro instance, allow SSH traffic and allow HTTP traffic from the Internet.&lt;/p&gt;

&lt;p&gt;Your summary should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lqpozvh5xj4b5wbpe9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6lqpozvh5xj4b5wbpe9g.png" alt="AWS EC2 t2.micro Summary" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll need to create a key pair (a &lt;em&gt;.pem&lt;/em&gt; file) to be able to SSH into the instance. I won't cover that in this article; there are already plenty of tutorials on launching and connecting to an EC2 instance, so find one!&lt;/p&gt;

&lt;h3&gt;
  
  
  Allowing TCP connections on port 3000
&lt;/h3&gt;

&lt;p&gt;After creation, we need to allow TCP traffic on port 3000. This is done in the Security Group configuration (EC2 &amp;gt; Security Groups &amp;gt; Your Security Group).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohd6cmk20fesesyexwa3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohd6cmk20fesesyexwa3.png" alt="Security Group page" width="800" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this page, click "Edit inbound rules", then "Add rule", and fill in the form as shown in the image. This will allow us to hit port 3000 of our instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1zn64ffemfe2ztisibc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1zn64ffemfe2ztisibc.png" alt="Inbound Rule" width="800" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your final Inbound Rules table should look something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0sxn5zjwn8axfugpbe5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0sxn5zjwn8axfugpbe5.png" alt="Inbound Rules" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Connecting to EC2
&lt;/h3&gt;

&lt;p&gt;Download the &lt;em&gt;.pem&lt;/em&gt; file to a folder, open the EC2 instance page and copy its public IPv4 address, then run this command from that folder:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbmnnzrh64mtxxem1gax.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbmnnzrh64mtxxem1gax.png" alt="Public IPV4 address" width="800" height="295"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;path-to-pen&amp;gt; ubuntu@&amp;lt;public-ipv4-address&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you see this EC2 welcome page, then you're in 🎉&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzktjbpajt9gi8v6u1qi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzktjbpajt9gi8v6u1qi8.png" alt="EC2 Welcome page" width="800" height="787"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Installing Node
&lt;/h4&gt;

&lt;p&gt;Let's follow the &lt;a href="https://github.com/nodesource/distributions#debian-and-ubuntu-based-distributions" rel="noopener noreferrer"&gt;Node documentation for Debian/Ubuntu-based Linux distros&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; ca-certificates curl gnupg
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/apt/keyrings
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/nodesource.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NODE_MAJOR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;21
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_&lt;/span&gt;&lt;span class="nv"&gt;$NODE_MAJOR&lt;/span&gt;&lt;span class="s2"&gt;.x nodistro main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/nodesource.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; Double-check that &lt;code&gt;NODE_MAJOR&lt;/code&gt; is 21, as we want to use the latest version of Node &amp;lt;3&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;nodejs &lt;span class="nt"&gt;-y&lt;/span&gt;
node &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this is what you should see (the version may differ as this post ages):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lw847xq9tcjk0zux40j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lw847xq9tcjk0zux40j.png" alt="Node installed" width="782" height="100"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nice, now we have a fresh Ubuntu server with Node installed. Next, we need to transfer our API code to it and start it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploying API to EC2
&lt;/h3&gt;

&lt;p&gt;We'll use a tool called &lt;em&gt;scp&lt;/em&gt;, which uses an SSH connection to copy files from the local machine to a target location; in our case, the EC2 instance we just created.&lt;/p&gt;

&lt;p&gt;Steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Delete the node_modules folder from the project.&lt;/li&gt;
&lt;li&gt;Go to the parent folder of the root folder of the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my case, the name of the folder is &lt;code&gt;node-api&lt;/code&gt; (I know, very creative!)&lt;/p&gt;

&lt;p&gt;Now, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;scp &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;path-to-pem&amp;gt; &lt;span class="nt"&gt;-r&lt;/span&gt; ./node-api ubuntu@&amp;lt;public-ipv4-address&amp;gt;:/home/ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This transfers the &lt;code&gt;node-api&lt;/code&gt; folder to &lt;code&gt;/home/ubuntu/node-api&lt;/code&gt; on our EC2 instance.&lt;/p&gt;

&lt;p&gt;You should see something similar to this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwwoddu8rwhjmqm8pthu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwwoddu8rwhjmqm8pthu.png" alt="files transfered" width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Running the API on EC2
&lt;/h3&gt;

&lt;p&gt;Head back to the EC2 server over SSH and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;node-api
npm &lt;span class="nb"&gt;install
&lt;/span&gt;&lt;span class="nv"&gt;NODE_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production node &lt;span class="nt"&gt;--env-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.env index.mjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And boom, the API is running on AWS.&lt;/p&gt;

&lt;p&gt;Let's double-check that it's working by making a POST request with an email and a password to our API's IP, on port 3000.&lt;/p&gt;

&lt;p&gt;You can use curl (in another terminal) to do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;email: user@example.com, password: password&lt;span class="o"&gt;}&lt;/span&gt; http://&amp;lt;public-ipv4-address&amp;gt;:3000/user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result should look like this:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb8wyzd3gi3kz5v53t39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb8wyzd3gi3kz5v53t39.png" alt="User Created" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm using &lt;a href="https://tableplus.com/" rel="noopener noreferrer"&gt;TablePlus&lt;/a&gt; to connect to the RDS Postgres database, but you could use any Postgres client.&lt;/p&gt;

&lt;p&gt;To ensure that the API is persisting data to the database, let's run this query:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="k"&gt;COUNT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm04ak3w0c3nkcklu0tur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm04ak3w0c3nkcklu0tur.png" alt="Returned" width="566" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It should return 1.&lt;/p&gt;

&lt;p&gt;Nice, it's working!&lt;/p&gt;

&lt;h2&gt;
  
  
  Stress Test
&lt;/h2&gt;

&lt;p&gt;Now that we have our API working, we need to be able to test how many concurrent requests it can handle with a &lt;strong&gt;single core&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;There are tons of tools for this; I'll use &lt;a href="https://github.com/tsenart/vegeta" rel="noopener noreferrer"&gt;Vegeta&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can run the following steps from your local machine, but keep in mind that your network may be the bottleneck, as the stress test sends a large number of packets at the same time.&lt;/p&gt;

&lt;p&gt;I'll use another EC2 instance (a more powerful one, t2.xlarge) running Ubuntu.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Vegeta
&lt;/h3&gt;

&lt;p&gt;Follow the docs to install Vegeta on your OS.&lt;/p&gt;

&lt;p&gt;Then, create a new folder for the load tester in the root folder of the project, so the structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node_benchmark/
  node-api/
  load-tester/
    vegeta/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Go to the vegeta folder and create a &lt;code&gt;start.sh&lt;/code&gt; script with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$# &lt;/span&gt;&lt;span class="nt"&gt;-ne&lt;/span&gt; 1 &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'Wrong arguments, expecting only one (reqs/s)'&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nv"&gt;TARGET_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"targets.txt"&lt;/span&gt;
&lt;span class="nv"&gt;DURATION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"30s"&lt;/span&gt;  &lt;span class="c"&gt;# Duration of the test, e.g., 60s for 60 seconds&lt;/span&gt;
&lt;span class="nv"&gt;RATE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;    &lt;span class="c"&gt;# Number of requests per second&lt;/span&gt;
&lt;span class="nv"&gt;RESULTS_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"results_&lt;/span&gt;&lt;span class="nv"&gt;$RATE&lt;/span&gt;&lt;span class="s2"&gt;.bin"&lt;/span&gt;
&lt;span class="nv"&gt;REPORT_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"report_&lt;/span&gt;&lt;span class="nv"&gt;$RATE&lt;/span&gt;&lt;span class="s2"&gt;.txt"&lt;/span&gt;
&lt;span class="nv"&gt;ENDPOINT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://&amp;lt;ipv4-public-address&amp;gt;:3000/user"&lt;/span&gt;

&lt;span class="c"&gt;# Check if Vegeta is installed&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; vegeta &amp;amp;&amp;gt; /dev/null
&lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Vegeta could not be found, please install it."&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# Create target file with unique email and password for each request&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Generating target file for Vegeta..."&lt;/span&gt;
&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;  &lt;span class="c"&gt;# Clear the file if it already exists&lt;/span&gt;

&lt;span class="c"&gt;# Assuming body.json exists and contains the correct JSON structure for the POST request&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 &lt;span class="nv"&gt;$RATE&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do 
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"POST &lt;/span&gt;&lt;span class="nv"&gt;$ENDPOINT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"@body.json"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;done

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Starting Vegeta attack for &lt;/span&gt;&lt;span class="nv"&gt;$DURATION&lt;/span&gt;&lt;span class="s2"&gt; at &lt;/span&gt;&lt;span class="nv"&gt;$RATE&lt;/span&gt;&lt;span class="s2"&gt; requests per second..."&lt;/span&gt;
&lt;span class="c"&gt;# Run the attack and save the results to a binary file&lt;/span&gt;
vegeta attack &lt;span class="nt"&gt;-rate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$RATE&lt;/span&gt; &lt;span class="nt"&gt;-duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$DURATION&lt;/span&gt; &lt;span class="nt"&gt;-targets&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RESULTS_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Load test finished, generating reports..."&lt;/span&gt;
&lt;span class="c"&gt;# Generate a textual report from the binary results file&lt;/span&gt;
vegeta report &lt;span class="nt"&gt;-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;text &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RESULTS_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPORT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Textual report generated: &lt;/span&gt;&lt;span class="nv"&gt;$REPORT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Generate a JSON report for further analysis&lt;/span&gt;
&lt;span class="nv"&gt;JSON_REPORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"report.json"&lt;/span&gt;
vegeta report &lt;span class="nt"&gt;-type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;json &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$RESULTS_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JSON_REPORT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"JSON report generated: &lt;/span&gt;&lt;span class="nv"&gt;$JSON_REPORT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="nv"&gt;$REPORT_FILE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt;: Replace &lt;code&gt;&amp;lt;ipv4-public-address&amp;gt;&lt;/code&gt; with the IP of your EC2 Node API server.&lt;/p&gt;

&lt;p&gt;Now, create a &lt;code&gt;body.json&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"email"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A1391FDC-2B51-4D96-ADA4-5EEE649A4A75@example.com"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"password"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"password"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
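&lt;p&gt;Note that every request reuses this same static body. If you ever need each request to insert a distinct user (for example, if the email column gets a unique constraint), one option is to generate the body dynamically. A minimal sketch, using only shell built-ins (the email naming scheme here is made up for illustration):&lt;/p&gt;

```shell
# Sketch: build a body.json with a (practically) unique email per run,
# combining the shell's RANDOM with the current Unix timestamp
EMAIL="user-$RANDOM-$(date +%s)@example.com"
printf '{"email": "%s", "password": "password"}\n' "$EMAIL" > body.json
cat body.json
```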



&lt;p&gt;Now you're ready to start load-testing your API.&lt;/p&gt;

&lt;p&gt;This script will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run for 30s&lt;/li&gt;
&lt;li&gt;Hit the API with concurrent requests/s defined by the first argument of the script&lt;/li&gt;
&lt;li&gt;Generate a text report and a JSON report with information about the test&lt;/li&gt;
&lt;/ul&gt;
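&lt;p&gt;For reference, each entry the loop appends to &lt;code&gt;targets.txt&lt;/code&gt; is one block in Vegeta's target-list format: the method and URL, optional headers, an &lt;code&gt;@&lt;/code&gt;-prefixed body file, and a blank line separating blocks. A single block looks like this (&lt;code&gt;PUBLIC_IPV4&lt;/code&gt; is a placeholder):&lt;/p&gt;

```shell
# Print one target block exactly as start.sh emits it
ENDPOINT="http://PUBLIC_IPV4:3000/user"
printf 'POST %s\nContent-Type: application/json\n@body.json\n\n' "$ENDPOINT"
```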

&lt;p&gt;Last but not least, we need to make the &lt;code&gt;start.sh&lt;/code&gt; file executable, which we can do by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x start.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before running each test, I'll clear the users table on Postgres with the following query.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;TRUNCATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will help us see how many users were created!&lt;/p&gt;

&lt;h3&gt;
  
  
  1.000 Reqs/s
&lt;/h3&gt;

&lt;p&gt;Alright, let's get to the interesting part: can our single-core, 1GB server handle 1.000 requests per second?&lt;/p&gt;

&lt;p&gt;Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start.sh 1000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Wait for it to complete; here it generated the following output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkq73oc72n77b956cntp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkq73oc72n77b956cntp.png" alt="1.000 reqs/s" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let me break it down for you:&lt;/p&gt;

&lt;p&gt;At a rate of 1.000 requests per second, the Node API successfully processed all of them, returning the expected 201 success status.&lt;/p&gt;

&lt;p&gt;On average, each request took 4.254 ms to return, with 99% of them returning in less than 25.959 ms.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests per Second&lt;/td&gt;
&lt;td&gt;1000.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Success Rate&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p99 Response Time&lt;/td&gt;
&lt;td&gt;25.959 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average Response Time&lt;/td&gt;
&lt;td&gt;4.254 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slowest Response Time&lt;/td&gt;
&lt;td&gt;131.889 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fastest Response Time&lt;/td&gt;
&lt;td&gt;2.126 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 201&lt;/td&gt;
&lt;td&gt;30000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Let's check our database:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmhuk66jgsrn8ka9d4sk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmhuk66jgsrn8ka9d4sk.png" alt="Database Count" width="582" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cool, it worked!&lt;/p&gt;

&lt;p&gt;Let's try harder and double the number of requests per second.&lt;/p&gt;
&lt;h3&gt;
  
  
  2.000 Requests per second
&lt;/h3&gt;

&lt;p&gt;Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start.sh 2000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's check the output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tajeemcv0spts1bjpdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3tajeemcv0spts1bjpdq.png" alt="2.000 reqs/s" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Awesome, it can handle 2.000 requests/second and still keep a 100% success rate.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests per Second&lt;/td&gt;
&lt;td&gt;2000.07&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Success Rate&lt;/td&gt;
&lt;td&gt;100.00%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p99 Response Time&lt;/td&gt;
&lt;td&gt;2.062 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average Response Time&lt;/td&gt;
&lt;td&gt;136.347 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slowest Response Time&lt;/td&gt;
&lt;td&gt;4.067 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fastest Response Time&lt;/td&gt;
&lt;td&gt;2.164 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 201&lt;/td&gt;
&lt;td&gt;60000&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A couple of things to notice here: while the success rate was still 100%, the p99 jumped from 25.959 ms to 2.062 s (about 79.4x slower than the previous test).&lt;/p&gt;

&lt;p&gt;The average response time also jumped, from 4.254 ms to 136.347 ms (about 32.1x slower).&lt;/p&gt;
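&lt;p&gt;Those slowdown factors are easy to double-check from the two reports:&lt;/p&gt;

```shell
# Ratio of the 2.000 req/s latencies to the 1.000 req/s baseline
# (2.062 s vs 25.959 ms for p99, 136.347 ms vs 4.254 ms for the average)
awk 'BEGIN { printf "p99: %.1fx\navg: %.1fx\n", 2062/25.959, 136.347/4.254 }'
```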

&lt;p&gt;So yeah, doubling the number of requests per second makes our server suffer A LOT.&lt;/p&gt;

&lt;p&gt;Let's try harder and see what happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  3.000 Requests per second
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start.sh 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrpjhk87ti89omgsmd4l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrpjhk87ti89omgsmd4l.png" alt="3.000 reqs/s output" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At 3.000 requests/second our Node.js API started to show problems, successfully processing only 52.20% of the requests. Let's see what happened.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests per Second&lt;/td&gt;
&lt;td&gt;2267.72&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Success Rate&lt;/td&gt;
&lt;td&gt;52.20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p99 Response Time&lt;/td&gt;
&lt;td&gt;30.001 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average Response Time&lt;/td&gt;
&lt;td&gt;6.146 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slowest Response Time&lt;/td&gt;
&lt;td&gt;30.156 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fastest Response Time&lt;/td&gt;
&lt;td&gt;3.018 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 201&lt;/td&gt;
&lt;td&gt;36089&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 500&lt;/td&gt;
&lt;td&gt;21588&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 0&lt;/td&gt;
&lt;td&gt;11465&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u0p04jyrgu2xahsdh9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u0p04jyrgu2xahsdh9e.png" alt="Database" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For 21,588 requests, our API returned status code 500. Let's check the API logs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk20ycz69qn4m32m4bii5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk20ycz69qn4m32m4bii5.png" alt="API logs" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that our Postgres connections are hitting the timeout. The current &lt;code&gt;connectionTimeoutMillis&lt;/code&gt; is set to 2000 (2s); let's try increasing it to 30000 and see if that improves the load test.&lt;/p&gt;

&lt;p&gt;We can do that by changing line 13 of &lt;code&gt;index.mjs&lt;/code&gt; from 2000 to 30000:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;connectionTimeoutMillis&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;30000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run it again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start.sh 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1le6oi97qx1lzrs20bg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu1le6oi97qx1lzrs20bg.png" alt="97.39% success" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests per Second&lt;/td&gt;
&lt;td&gt;2959.90&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Success Rate&lt;/td&gt;
&lt;td&gt;97.39%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p99 Response Time&lt;/td&gt;
&lt;td&gt;13.375 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average Response Time&lt;/td&gt;
&lt;td&gt;6.901 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slowest Response Time&lt;/td&gt;
&lt;td&gt;30.001 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fastest Response Time&lt;/td&gt;
&lt;td&gt;3.476 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 201&lt;/td&gt;
&lt;td&gt;86486&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 0&lt;/td&gt;
&lt;td&gt;2318&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Nice! By simply increasing the database connection timeout, we improved the success rate by 45.19 percentage points, and all of the 500 errors are now completely gone.&lt;/p&gt;

&lt;p&gt;Let's take a look at the remaining errors (status code 0).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz065105c64nqh55yozdq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz065105c64nqh55yozdq.png" alt="bind address already in use" width="800" height="46"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Status code 0 usually means that the server reset the connection because it couldn't handle any more.&lt;/p&gt;

&lt;p&gt;Let's check if it's CPU, Memory or Network.&lt;/p&gt;

&lt;p&gt;At the peak of the test, CPU usage is only 13%, so it's not the CPU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyzvvakuifqcev40fcc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyzvvakuifqcev40fcc5.png" alt="CPU" width="704" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By running it again with &lt;code&gt;htop&lt;/code&gt;, I noticed that memory usage peaked at only about 70%, so that's also not the problem:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq5y9tt88d1mlxicm0qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq5y9tt88d1mlxicm0qh.png" alt="Memory" width="800" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's try something different.&lt;/p&gt;
&lt;h4&gt;
  
  
  File Descriptors
&lt;/h4&gt;

&lt;p&gt;On Unix systems, each new connection (&lt;em&gt;socket&lt;/em&gt;) is assigned a file descriptor. By default, on Ubuntu, the maximum number of open file descriptors is 1024.&lt;/p&gt;

&lt;p&gt;You can check that by running &lt;code&gt;ulimit -n&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwzqx3xgszt665wovdjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwzqx3xgszt665wovdjw.png" alt="limit api" width="800" height="91"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's try increasing that to 2000 and redo the test to see if we can get rid of these 2% timeout errors.&lt;/p&gt;

&lt;p&gt;To do so, I'll follow &lt;a href="https://stackoverflow.com/a/11345256" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; and change it to 2000:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;vi /etc/security/limits.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzi1625w5wi632br0ys6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzi1625w5wi632br0ys6.png" alt="new limits for nofile" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;nofile = the number of open files.&lt;br&gt;
soft = the soft limit (the one currently enforced).&lt;br&gt;
hard = the hard limit (the ceiling the soft limit can be raised to).&lt;/p&gt;

&lt;p&gt;Then reboot the EC2 with &lt;code&gt;sudo reboot now&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;After logging in, we can see that the limit changed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl67nccbt72c1pzeoa9c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl67nccbt72c1pzeoa9c6.png" alt="New ulimit is 2000" width="800" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And let's redo the test:&lt;/p&gt;

&lt;p&gt;Start the API with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;NODE_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production node &lt;span class="nt"&gt;--env-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;.env index.mjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And start the load-tester with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start.sh 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's check the results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxd0tku5sldik02l5n61i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxd0tku5sldik02l5n61i.png" alt="Results with 2000 open files" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Surprisingly, the results are worse!&lt;/p&gt;

&lt;p&gt;With a maximum of 2.000 open files, the node API successfully answered only 78.43% of the requests.&lt;/p&gt;

&lt;p&gt;This is likely because, with only one core, allowing more open sockets makes the processor switch between connections more often than before.&lt;/p&gt;

&lt;p&gt;Let's try reducing it to 700 to see if it gets better.&lt;/p&gt;

&lt;p&gt;(I'll skip the steps here, they're the same as before.)&lt;/p&gt;

&lt;p&gt;And let's see the new output with 700 as maximum open files.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny59iy4qdahtx51wnej2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny59iy4qdahtx51wnej2.png" alt="700" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With 700 maximum open files, we hit 83.20% success rate. Let's go back to 1024 and try reducing the connection pool to 20 instead of 40.&lt;/p&gt;

&lt;p&gt;If that doesn't work, we'll assume 3.000 req/s is slightly above the limit and try to find the maximum number of requests/s that a single-core Node API can handle with 100% success.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvlhvq2k98wt88oafmfd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvlhvq2k98wt88oafmfd.png" alt="93%" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With 20 connections in the connection pool, the API was able to process 93.06% of the requests, suggesting we probably don't need 40.&lt;/p&gt;

&lt;p&gt;Let's try with 2.600 reqs/s:&lt;/p&gt;

&lt;h3&gt;
  
  
  2.600 reqs/s
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./start.sh 2600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that's the result:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny6othqgyw49pdzianek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny6othqgyw49pdzianek.png" alt="2600 100% success" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Requests per Second&lt;/td&gt;
&lt;td&gt;2600.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Success Rate&lt;/td&gt;
&lt;td&gt;100%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;p99 Response Time&lt;/td&gt;
&lt;td&gt;8.171 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Average Response Time&lt;/td&gt;
&lt;td&gt;4.573 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Slowest Response Time&lt;/td&gt;
&lt;td&gt;9.234 s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Fastest Response Time&lt;/td&gt;
&lt;td&gt;5.244 ms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Status Code 201&lt;/td&gt;
&lt;td&gt;77999&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That's a wrap!&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This experiment demonstrates the capabilities of a pure Node.js API on a single-core server. &lt;/p&gt;

&lt;p&gt;With a pure Node.js 21.2.0 API, running on a single core with 1GB of RAM and a connection pool capped at 20 connections, we achieved 2,600 requests/s without a single failure.&lt;/p&gt;

&lt;p&gt;Fine-tuning parameters such as connection pool size and file descriptor limits can significantly improve performance.&lt;/p&gt;
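&lt;p&gt;The effect of a small pool can be sketched without any database: below is a toy pool (hypothetical names, far simpler than a real pool such as pg's) that caps how many "connections" are in use at once, which is exactly the knob tuned above:&lt;/p&gt;

```javascript
// Toy sketch: a pool of N "connections" capping concurrency.
// Hypothetical names; real pools add queuing limits, timeouts, validation.
class TinyPool {
  constructor(max) {
    this.max = max;
    this.active = 0;
    this.waiting = [];
  }
  async acquire() {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // Pool exhausted: wait until someone releases.
    await new Promise((resolve) => this.waiting.push(resolve));
    this.active++;
  }
  release() {
    this.active--;
    const next = this.waiting.shift();
    if (next) next();
  }
}

const pool = new TinyPool(20);
let peak = 0;

async function handleRequest() {
  await pool.acquire();
  peak = Math.max(peak, pool.active);
  await new Promise((r) => setTimeout(r, 1)); // simulate a query
  pool.release();
}

Promise.all(Array.from({ length: 100 }, handleRequest)).then(() => {
  console.log(`peak concurrent "connections": ${peak}`); // capped at 20
});
```

&lt;p&gt;Even with 100 requests in flight, at most 20 ever hold a "connection", which is why a modest pool was enough for the load above.&lt;/p&gt;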

&lt;p&gt;What's the highest load your Node.js server has handled? Share your experiences!&lt;/p&gt;

</description>
      <category>node</category>
      <category>typescript</category>
      <category>api</category>
      <category>aws</category>
    </item>
    <item>
      <title>Node.js Depths: Exploring Asynchronous I/O</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Sat, 02 Sep 2023 13:21:27 +0000</pubDate>
      <link>https://dev.to/ocodista/profundezas-do-nodejs-explorando-io-assincrono-mim</link>
      <guid>https://dev.to/ocodista/profundezas-do-nodejs-explorando-io-assincrono-mim</guid>
      <description>&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;How Node Handles Asynchronous Code&lt;/li&gt;
&lt;li&gt;
Asynchronous Operations: What Are They?

&lt;ul&gt;
&lt;li&gt;Blocking vs Non-Blocking Asynchronous Operations&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Experiments with Blocking Functions&lt;/li&gt;

&lt;li&gt;Experiments with Non-Blocking Functions&lt;/li&gt;

&lt;li&gt;

Non-Blocking Asynchronous Operations and the OS

&lt;ul&gt;
&lt;li&gt;
Understanding File Descriptors

&lt;ul&gt;
&lt;li&gt;What is an FD?&lt;/li&gt;
&lt;li&gt;FDs and non-blocking I/O&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

Monitoring FDs with syscalls

&lt;ul&gt;
&lt;li&gt;Understanding select&lt;/li&gt;
&lt;li&gt;Epoll&lt;/li&gt;
&lt;li&gt;io_uring&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;Conclusion&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I've recently been studying how asynchronous code runs in Node.js.&lt;/p&gt;

&lt;p&gt;I ended up learning (and writing) quite a lot, from an article on &lt;a href="https://dev.to/ocodista/a-magia-do-event-loop-in1"&gt;how the Event Loop works&lt;/a&gt; to a Twitter thread explaining &lt;a href="https://twitter.com/ocodista/status/1696684507917631841?s=20" rel="noopener noreferrer"&gt;who waits for the http request to finish&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you like, you can also check out the mind map I created before writing this post by clicking &lt;a href="https://whimsical.com/node-async-CzpmdNE7HMzsp5uPDeJpve" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now, let's get to it!&lt;/p&gt;

&lt;h2&gt;
  
  
  How Node Handles Asynchronous Code
&lt;/h2&gt;

&lt;p&gt;In Node:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All JavaScript code runs on the main thread.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;libuv&lt;/strong&gt; library is in charge of handling I/O (In/Out) operations, i.e., &lt;strong&gt;asynchronous&lt;/strong&gt; operations.&lt;/li&gt;
&lt;li&gt;By default, libuv provides 4 &lt;em&gt;worker threads&lt;/em&gt; to Node.js

&lt;ul&gt;
&lt;li&gt;These threads are only used when &lt;strong&gt;blocking&lt;/strong&gt; asynchronous operations are performed; in that case, they block one of libuv's threads (which are Operating System threads) instead of the main (Node execution) thread.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;There are blocking and non-blocking operations; most asynchronous operations today are &lt;strong&gt;non-blocking&lt;/strong&gt;.&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Asynchronous Operations: What Are They?
&lt;/h2&gt;

&lt;p&gt;There is usually some confusion around asynchronous operations. &lt;/p&gt;

&lt;p&gt;Many believe they mean something happens in the background, in parallel, at the same time, or on another thread. &lt;/p&gt;

&lt;p&gt;In fact, an asynchronous operation is an operation that will not return now, but later.&lt;/p&gt;

&lt;p&gt;These operations depend on communication with external agents, and those agents may not have an immediate answer to your request. &lt;/p&gt;

&lt;p&gt;We are talking about I/O (input/output) operations.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reading a file&lt;/strong&gt;: data &lt;em&gt;leaves&lt;/em&gt; the disk and &lt;em&gt;enters&lt;/em&gt; the application.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Writing to a file&lt;/strong&gt;: data &lt;em&gt;leaves&lt;/em&gt; the application and &lt;em&gt;enters&lt;/em&gt; the disk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network operations&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;HTTP requests, for example.&lt;/li&gt;
&lt;li&gt;The application &lt;strong&gt;sends&lt;/strong&gt; an &lt;em&gt;http request&lt;/em&gt; to some server and &lt;strong&gt;receives&lt;/strong&gt; the data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk07hggcijhz3vjlypq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk07hggcijhz3vjlypq0.png" alt="Node chama libuv, libuv chama syscalls, event loop roda na thread principal" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Blocking vs Non-Blocking Asynchronous Operations
&lt;/h3&gt;

&lt;p&gt;In the modern world, &lt;del&gt;people don't talk to each other&lt;/del&gt; most asynchronous operations don't block.&lt;/p&gt;

&lt;p&gt;But wait, does that mean that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;libuv provides 4 threads (&lt;em&gt;by default&lt;/em&gt;).&lt;/li&gt;
&lt;li&gt;they "take care" of &lt;strong&gt;blocking&lt;/strong&gt; I/O operations.&lt;/li&gt;
&lt;li&gt;the vast majority of operations are &lt;strong&gt;non-blocking&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sounds kind of useless, right? &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbilhvv9nqkvfco5jrzdk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbilhvv9nqkvfco5jrzdk.png" alt="Libuv worker threads handle blocking async operations" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With that question in mind, I decided to run some experiments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Experiments with Blocking Functions
&lt;/h2&gt;

&lt;p&gt;First, I tested a CPU-intensive asynchronous function, one of the &lt;strong&gt;rare&lt;/strong&gt; &lt;strong&gt;blocking&lt;/strong&gt; asynchronous functions in Node.&lt;/p&gt;

&lt;p&gt;Here's the code I used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// index.js&lt;/span&gt;
&lt;span class="c1"&gt;// index.js&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;pbkdf2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;TEN_MILLIONS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="nx"&gt;e7&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Função assíncrona de uso intenso de CPU&lt;/span&gt;
&lt;span class="c1"&gt;// Objetivo: Bloquear uma worker thread&lt;/span&gt;
&lt;span class="c1"&gt;// Objetivo original: Gerar uma palavra-chave&lt;/span&gt;
&lt;span class="c1"&gt;// O terceiro parâmetro é o número de iterações&lt;/span&gt;
&lt;span class="c1"&gt;// Nesse exemplo, estamos passando 10 milhões&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;runSlowCryptoFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;pbkdf2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;secret&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;salt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;TEN_MILLIONS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sha512&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;callback&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Aqui queremos saber quantas workers threads a libuv vai usar&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Thread pool size is &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runAsyncBlockingOperations&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;runAsyncBlockingOperation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;runIndex&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;runSlowCryptoFunction&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Finished run &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;runIndex&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; in &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nf"&gt;runAsyncBlockingOperation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;runAsyncBlockingOperation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nf"&gt;runAsyncBlockingOperations&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify the behavior, I ran the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;IMPORTANT:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UV_THREADPOOL_SIZE: An environment variable that determines how many libuv &lt;em&gt;worker threads&lt;/em&gt; Node will start.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Thread pool size is 1
Finished run 1 &lt;span class="k"&gt;in &lt;/span&gt;3.063s
Finished run 2 &lt;span class="k"&gt;in &lt;/span&gt;6.094s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In other words, with a single thread, each run took ~3 seconds and they ran sequentially, one after the other.&lt;/p&gt;

&lt;p&gt;Next, I ran the following test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 node index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the result was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Thread pool size is 2
Finished run 2 &lt;span class="k"&gt;in &lt;/span&gt;3.225s
Finished run 1 &lt;span class="k"&gt;in &lt;/span&gt;3.243s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This shows that libuv's &lt;em&gt;Worker Threads&lt;/em&gt; in Node.js handle blocking asynchronous operations.&lt;/p&gt;

&lt;p&gt;But what about the &lt;strong&gt;non-blocking&lt;/strong&gt; ones? If no thread waits for them, how do they work?&lt;/p&gt;

&lt;p&gt;I decided to write another function to find out.&lt;/p&gt;

&lt;h3&gt;
  
  
  Experiments with Non-Blocking Functions
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;fetch&lt;/em&gt; function (native to Node) performs an asynchronous network operation and is non-blocking.&lt;/p&gt;

&lt;p&gt;With the following code, I redid the test from the first experiment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;//non-blocking.js&lt;/span&gt;
&lt;span class="c1"&gt;// Aqui queremos saber quantas workers threads a libuv vai usar&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Thread pool size is &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://www.google.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Fetch 1 retornou em &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://www.google.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="nx"&gt;startDate&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Fetch 2 retornou em &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;ms&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;s`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I ran the script with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1 node non-blocking.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Thread pool size is 1
Fetch 1 retornou em 0.391s
Fetch 2 retornou em 0.396s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I tested with two threads, to see if anything changed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;UV_THREADPOOL_SIZE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 node non-blocking.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Thread pool size is 2
Fetch 2 retornou em 0.402s
Fetch 1 retornou em 0.407s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From this, I could see that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Having more libuv threads does not help with non-blocking asynchronous operations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But then I asked myself again: if no libuv thread sits "waiting" for the request to come back, how does this work?&lt;/p&gt;

&lt;p&gt;My friend, that's when I fell into a giant research rabbit hole about how the following works:&lt;/p&gt;

&lt;h3&gt;
  
  
  Non-Blocking Asynchronous Operations and the OS
&lt;/h3&gt;

&lt;p&gt;Operating Systems have evolved a lot over the years to handle I/O operations in a non-blocking way. This is done through &lt;em&gt;syscalls&lt;/em&gt;, namely:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;select/poll&lt;/strong&gt;: The traditional ways of handling non-blocking I/O, generally considered less efficient.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IOCP&lt;/strong&gt;: Used on Windows for asynchronous operations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;kqueue&lt;/strong&gt;: The mechanism on macOS and BSD.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;epoll&lt;/strong&gt;: Efficient and used on Linux. Unlike select, it is not limited by the number of FDs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;io_uring&lt;/strong&gt;: An evolution of epoll, bringing performance improvements and a queue-based approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To understand this better, we'll need to dive into the details of non-blocking I/O operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding File Descriptors
&lt;/h3&gt;

&lt;p&gt;To explain non-blocking I/O, I first need to quickly explain the concept of File Descriptors (FDs). &lt;/p&gt;

&lt;h4&gt;
  
  
  What is an FD?
&lt;/h4&gt;

&lt;p&gt;It is a numeric index into a table maintained by the &lt;em&gt;kernel&lt;/em&gt;, where each entry holds:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The resource type (such as file, socket, device).&lt;/li&gt;
&lt;li&gt;The current file pointer position.&lt;/li&gt;
&lt;li&gt;Permissions and flags, defining modes such as read or write.&lt;/li&gt;
&lt;li&gt;A reference to the resource's data structure in the kernel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They are fundamental to I/O management.&lt;/p&gt;

&lt;h4&gt;
  
  
  FDs and non-blocking I/O
&lt;/h4&gt;

&lt;p&gt;When a non-blocking I/O operation starts, Linux ties an FD to it without interrupting (blocking) the process's execution.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;Imagine you want to read the contents of a very large file. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blocking approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The process calls the &lt;em&gt;read file&lt;/em&gt; function
&lt;/li&gt;
&lt;li&gt;The process waits for the OS to read the file's contents

&lt;ul&gt;
&lt;li&gt;Until the OS finishes, the process is &lt;strong&gt;blocked&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Non-blocking approach&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The process requests an &lt;em&gt;asynchronous&lt;/em&gt; read.&lt;/li&gt;
&lt;li&gt;The OS starts reading the contents and returns an FD to the process.&lt;/li&gt;
&lt;li&gt;The process is not stuck and can do other things.&lt;/li&gt;
&lt;li&gt;From time to time, the process calls a &lt;em&gt;syscall&lt;/em&gt; to check whether the read has finished.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The process itself chooses the read mode, via the &lt;a href="https://man7.org/linux/man-pages/man2/fcntl.2.html" rel="noopener noreferrer"&gt;fcntl&lt;/a&gt; function with the &lt;em&gt;O_NONBLOCK&lt;/em&gt; flag, but that's a side note for now.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring FDs with syscalls
&lt;/h3&gt;

&lt;p&gt;To watch multiple FDs efficiently, Operating Systems rely on a few &lt;em&gt;syscalls&lt;/em&gt;:&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Understanding select:&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Receives a list of FDs.&lt;/li&gt;
&lt;li&gt;Blocks the process until one or more FDs are ready for the specified operation (read, write, exception).&lt;/li&gt;
&lt;li&gt;After the syscall returns, the program can iterate over the FDs to find the ones ready for I/O.&lt;/li&gt;
&lt;li&gt;Uses an O(n) scan.

&lt;ul&gt;
&lt;li&gt;Inefficient, slow, and worn out with many FDs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
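&lt;p&gt;A toy model in JavaScript (purely illustrative; real select is a syscall operating on fd_set bitmaps) shows why the O(n) scan hurts:&lt;/p&gt;

```javascript
// Toy model of select's cost: every call scans the entire watched FD
// list, even when only a single descriptor is actually ready.
function toySelect(fds, readySet) {
  const ready = [];
  for (const fd of fds) { // O(n) scan on every call
    if (readySet.has(fd)) ready.push(fd);
  }
  return ready;
}

const watchedFds = Array.from({ length: 10000 }, (_, i) => i + 3);
const readyNow = new Set([4242]); // only one FD has activity

console.log(toySelect(watchedFds, readyNow)); // [ 4242 ]
```

&lt;p&gt;With 10,000 watched descriptors, every wakeup pays for 10,000 checks to find a single ready one.&lt;/p&gt;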

&lt;h4&gt;
  
  
  &lt;strong&gt;Epoll&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;An evolution of &lt;em&gt;select&lt;/em&gt;, it uses a self-balancing tree to store the FDs, making access time practically constant, O(1).&lt;/p&gt;

&lt;p&gt;Fancy!&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an epoll instance with &lt;code&gt;epoll_create&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Register FDs with that instance using &lt;code&gt;epoll_ctl&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Call &lt;code&gt;epoll_wait&lt;/code&gt; to wait for activity on any of the FDs.&lt;/li&gt;
&lt;li&gt;It has a timeout parameter.

&lt;ul&gt;
&lt;li&gt;Extremely important and put to good use by libuv's &lt;em&gt;Event Loop&lt;/em&gt;!&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysxrok3wm49fk52xlgwv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysxrok3wm49fk52xlgwv.png" alt="Comparação de tempo entre select e epoll" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;io_uring&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This one came to flip the whole table.&lt;/p&gt;

&lt;p&gt;While &lt;strong&gt;epoll&lt;/strong&gt; greatly improved the performance of finding ready FDs, io_uring rethinks the whole nature of I/O operations.&lt;/p&gt;

&lt;p&gt;And honestly, after understanding how it works, I kept wondering why nobody had thought of this before!&lt;/p&gt;

&lt;p&gt;Recapping:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;select&lt;/strong&gt;: Receives a list of FDs, stores them sequentially (like an array) and checks them one by one (O(n) complexity) to see which had a change or activity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;epoll&lt;/strong&gt;: Receives a list of FDs, stores them in a self-balancing tree, doesn't check them one by one, is more efficient, and does the same as &lt;strong&gt;select&lt;/strong&gt; but with O(1) complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Historically, the process was in charge of iterating over the returned FDs to find out which had finished.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;io_uring&lt;/strong&gt;: Wait, what? Return a list? Do polling? Ever heard of &lt;strong&gt;queues&lt;/strong&gt;?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It works with two main queues, shaped as rings (hence the name io_&lt;strong&gt;uring&lt;/strong&gt;).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 for submitting tasks&lt;/li&gt;
&lt;li&gt;1 for completed tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Simple, right?&lt;/p&gt;

&lt;p&gt;When starting an I/O operation, the process &lt;strong&gt;enqueues&lt;/strong&gt; it using the &lt;em&gt;io_uring&lt;/em&gt; structure.&lt;/p&gt;

&lt;p&gt;Then, instead of calling &lt;em&gt;select&lt;/em&gt; or &lt;em&gt;epoll&lt;/em&gt; and iterating over each returned FD, the process can opt to be notified when an I/O operation completes.&lt;/p&gt;

&lt;p&gt;Polling? No. Queues!&lt;/p&gt;
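&lt;p&gt;A toy model of the two rings (purely illustrative; it is not a real io_uring binding, and the real rings are shared memory between user space and the kernel) captures the submit/complete flow:&lt;/p&gt;

```javascript
// Toy model of io_uring's two-queue idea: the process pushes work into a
// submission queue and consumes results from a completion queue, instead
// of polling FDs one by one.
const submissionQueue = [];
const completionQueue = [];

function submit(op) {
  submissionQueue.push(op);
}

// Stand-in for the kernel consuming submissions and publishing completions.
function kernelTick() {
  while (submissionQueue.length) {
    const op = submissionQueue.shift();
    completionQueue.push({ op, result: `${op} done` });
  }
}

submit('read file A');
submit('write socket B');
kernelTick();

while (completionQueue.length) {
  console.log(completionQueue.shift().result);
}
```

&lt;p&gt;The process never iterates over FDs asking "are you done?"; finished work simply shows up on the completion ring.&lt;/p&gt;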

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;With all this, I now know exactly the path Node takes to perform an asynchronous operation.&lt;/p&gt;

&lt;p&gt;If it's blocking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the asynchronous operation through libuv&lt;/li&gt;
&lt;li&gt;Hands it to a libuv &lt;em&gt;worker thread&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;worker thread&lt;/em&gt; stays blocked, waiting for the operation to finish.&lt;/li&gt;
&lt;li&gt;When it finishes, the thread places the result in the &lt;em&gt;Event Loop&lt;/em&gt;'s MacroTask queue&lt;/li&gt;
&lt;li&gt;The callback runs on the main thread&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it's non-blocking:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the asynchronous operation through libuv&lt;/li&gt;
&lt;li&gt;libuv issues a non-blocking I/O &lt;em&gt;syscall&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;Polls the FDs until they resolve (epoll)&lt;/li&gt;
&lt;li&gt;From version 20.3.0 onward, it uses io_uring

&lt;ul&gt;
&lt;li&gt;Submission/completion queue approach&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Upon receiving a completion event

&lt;ul&gt;
&lt;li&gt;libuv takes care of running the callback on the main thread&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

</description>
      <category>node</category>
      <category>linux</category>
      <category>async</category>
      <category>libuv</category>
    </item>
    <item>
      <title>The Future of Work: Exploring the Async-First Culture.</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Sun, 27 Aug 2023 18:16:46 +0000</pubDate>
      <link>https://dev.to/ocodista/async-first-trabalho-remoto-vida-pessoal--1b75</link>
      <guid>https://dev.to/ocodista/async-first-trabalho-remoto-vida-pessoal--1b75</guid>
      <description>&lt;p&gt;It's well established that fully remote work improves workers' quality of life.&lt;/p&gt;

&lt;p&gt;Whether it's the savings from not commuting, being closer to family, more control over your workspace, or simply the comfort of, like Cid Moreira, being able to work in shorts.&lt;/p&gt;

&lt;p&gt;However, along with the popularization of remote work came an increase in meetings.&lt;/p&gt;

&lt;p&gt;Daily meetings, weeklies, planning, estimation, reviews, debates, updates, team building, 1-1s, all hands, and many other kinds.&lt;/p&gt;

&lt;p&gt;Meetings that, in my view, try to replicate the work model that has existed in offices for decades, only this time in the digital world.&lt;/p&gt;

&lt;p&gt;In this post, I intend to explain a bit about the &lt;em&gt;Async First&lt;/em&gt; culture, which breaks with the classic work paradigm and proposes a more modern approach that, besides making much more sense for the digital world, encourages worker independence and provides more freedom and flexibility in day-to-day life.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Async First?
&lt;/h2&gt;

&lt;p&gt;Async First, or "asynchronous first", is an approach that &lt;strong&gt;prioritizes asynchronous communication&lt;/strong&gt; over real-time interactions. &lt;/p&gt;

&lt;p&gt;But what is asynchronous communication? &lt;/p&gt;

&lt;p&gt;Let me start with examples:&lt;/p&gt;

&lt;h3&gt;
  
  
  Comunicação síncrona
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Conversa cara-a-cara&lt;/li&gt;
&lt;li&gt;Vídeo-chamadas&lt;/li&gt;
&lt;li&gt;Chamadas telefônicas&lt;/li&gt;
&lt;li&gt;Bate-papo/Chat de mensagens instantâneas&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Asynchronous communication
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Emails&lt;/li&gt;
&lt;li&gt;Screen recordings&lt;/li&gt;
&lt;li&gt;Explainer videos&lt;/li&gt;
&lt;li&gt;Recorded audio messages&lt;/li&gt;
&lt;li&gt;Comments in shared documents (such as Google Docs or Microsoft Word online).&lt;/li&gt;
&lt;li&gt;Posts on forums or discussion boards.&lt;/li&gt;
&lt;li&gt;Tasks or comments in project management tools (such as Trello, Asana, etc.).&lt;/li&gt;
&lt;li&gt;SMS or text messages (when there's no expectation of an immediate reply).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means that, instead of everyone being online at the same time for a meeting or discussion, information is shared in a way that lets people access and respond to it on their own time.&lt;/p&gt;

&lt;p&gt;With this approach, there's no need for two people to be online at the same time. You could do your work at 8 in the morning, at 2 in the afternoon, or at whatever time you prefer.&lt;/p&gt;

&lt;p&gt;When it comes to software development, the vast majority of tasks could be done &lt;strong&gt;asynchronously&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Does this work in practice? Well, here's a list of companies that have adopted the &lt;em&gt;Async First&lt;/em&gt; culture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://about.gitlab.com/jobs/" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.atlassian.com/company/careers" rel="noopener noreferrer"&gt;Atlassian&lt;/a&gt; (Dona do Jira, Trello, Confluence e Bitbucket)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.gumroad.com/article/284-jobs-at-gumroad" rel="noopener noreferrer"&gt;Gumroad&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://zapier.com/jobs" rel="noopener noreferrer"&gt;Zapier&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://toggl.com/jobs/" rel="noopener noreferrer"&gt;Toggl&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Benefits of Async First
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Greater flexibility:&lt;/strong&gt; Each team member is free to organize their own schedule and decide when they'll be most productive, instead of being tied to a fixed timetable of meetings and interactions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deep focus:&lt;/strong&gt; Without the constant interruptions of real-time meetings, workers can dive deep into their tasks and increase their productivity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global inclusivity:&lt;/strong&gt; Teams distributed around the world don't need to worry about conflicting time zones, which makes global hiring easier and encourages it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Less digital fatigue:&lt;/strong&gt; Meetings are tiring; fewer meetings = less exhaustion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Efficient documentation:&lt;/strong&gt; Asynchronous communication is usually documented, which is very useful, since everyone can revisit the information whenever needed. Today, if nobody takes notes during a meeting, the team depends 100% on human memory to retain that information.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  How to build an effective culture
&lt;/h3&gt;

&lt;p&gt;It's important to use good documentation tools and keep a &lt;em&gt;wiki&lt;/em&gt; updated and organized, so that all information relevant to the company's culture and projects is easy to find.&lt;/p&gt;

&lt;p&gt;Beyond that, it's crucial to strike a balance, since some discussions and decisions can benefit from real-time interaction.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Async First&lt;/em&gt;, or "asynchronous communication by default", is different from &lt;em&gt;Sync Never&lt;/em&gt;, or "never having meetings".&lt;/p&gt;

&lt;p&gt;A team can still hold meetings, but by encouraging asynchronous communication first, the frequency, duration, and need for meetings will shrink to the point where they're no longer a nuisance.&lt;/p&gt;

&lt;p&gt;In short, while remote work has become the new standard for many companies, the &lt;strong&gt;&lt;em&gt;Async First&lt;/em&gt; approach brings a powerful evolution!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It proposes a path that values worker autonomy and recognizes the potential of a digital, globalized world.&lt;/p&gt;

&lt;p&gt;The world has changed a lot in the last 30 years, but work models haven't kept up.&lt;/p&gt;

&lt;p&gt;I'm in favor of the &lt;em&gt;Async First&lt;/em&gt; culture, and I hope it becomes increasingly popular.&lt;/p&gt;

&lt;p&gt;What about you, what do you think? Feel free to leave your opinion, point of view, or criticism in the comments; I believe promoting this debate is crucial if we're to rethink today's work models.&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;

</description>
      <category>remote</category>
      <category>career</category>
      <category>wfh</category>
      <category>trabalhoremoto</category>
    </item>
    <item>
      <title>JavaScript Event Loop: Breaking Down the Mystery</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Thu, 03 Aug 2023 16:40:38 +0000</pubDate>
      <link>https://dev.to/ocodista/javascript-event-loop-breaking-down-the-mystery-2c9f</link>
      <guid>https://dev.to/ocodista/javascript-event-loop-breaking-down-the-mystery-2c9f</guid>
      <description>&lt;p&gt;What happens when the following code is executed in Node.js?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your answer was different from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Perhaps you don't fully understand the execution order of JavaScript and the operation of the Event Loop.&lt;/p&gt;

&lt;p&gt;No worries, I'll try to explain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First of all&lt;/strong&gt;, if you have doubts about what is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;JavaScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ECMAScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JavaScript &lt;em&gt;Runtime&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recommend that you read the &lt;strong&gt;glossary&lt;/strong&gt; before continuing.&lt;/p&gt;

&lt;p&gt;Now let's go: I'll explain what happens at each stage of this JavaScript code's execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Thread
&lt;/h2&gt;

&lt;p&gt;Node interprets the JavaScript file from top to bottom, line by line, in a single thread.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running setTimeout()
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;main thread&lt;/em&gt; will interpret the first instruction and add it to the &lt;em&gt;Call Stack&lt;/em&gt;, where it will be executed and then removed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8ry8ueju3rhnvuvnwnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8ry8ueju3rhnvuvnwnq.png" alt="Visualization of the Main Thread executing the first function call: setTimeout(() =&amp;gt; console.log(1), 10)" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;setTimeout&lt;/code&gt; instruction is used to schedule the execution of a function after a given number of milliseconds. &lt;/p&gt;

&lt;p&gt;Node implements this scheduling with the &lt;a href="https://libuv.org/" rel="noopener noreferrer"&gt;&lt;code&gt;libuv&lt;/code&gt;&lt;/a&gt; library, which creates a &lt;strong&gt;Timer&lt;/strong&gt; without blocking the main &lt;em&gt;thread&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrcidhlqrcjp8xk4swtc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrcidhlqrcjp8xk4swtc.png" alt="The main thread executes the setTimeout function, which starts a timer in a new thread, through a library called libuv. At the end of the timer, the callback will be added to the macro-task queue" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After starting the &lt;em&gt;Timer&lt;/em&gt;, the main &lt;em&gt;thread&lt;/em&gt; will remove the instruction from the &lt;em&gt;Call Stack&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitlk0d6813hhs7d4uyum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitlk0d6813hhs7d4uyum.png" alt="Main Thread pops from call stack" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the end of the interval, the timer will add the &lt;em&gt;callback&lt;/em&gt; of the &lt;em&gt;setTimeout&lt;/em&gt; function to the &lt;strong&gt;macro-task&lt;/strong&gt; queue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Promise.resolve().then()
&lt;/h3&gt;

&lt;p&gt;While the &lt;em&gt;Timer&lt;/em&gt; of the &lt;em&gt;libuv&lt;/em&gt; library waits for the 10ms, the &lt;em&gt;Main thread&lt;/em&gt; will interpret the next line of the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yphjc4rv34lmr0y3h84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yphjc4rv34lmr0y3h84.png" alt="Main thread consumes the next instruction from the call stack" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The instruction this time is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main &lt;em&gt;thread&lt;/em&gt; will execute the function &lt;a href="https://developer.mozilla.org/pt-BR/docs/Web/JavaScript/Reference/Global_Objects/Promise" rel="noopener noreferrer"&gt;Promise&lt;/a&gt;.resolve().then()&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Promise&lt;/em&gt; is an object that represents a completion or failure of an asynchronous operation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By calling the resolve() function without any arguments, we are declaring a &lt;em&gt;Promise&lt;/em&gt; that doesn't resolve to any value, which is fine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusel8ojsgldntndmdw1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusel8ojsgldntndmdw1z.png" alt="Executing the function Promise.resolve()" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For now, we are more interested in the behavior of the &lt;em&gt;.then&lt;/em&gt; function of a &lt;em&gt;Promise&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;By passing &lt;code&gt;() =&amp;gt; console.log(2)&lt;/code&gt; as the callback for our &lt;em&gt;Promise&lt;/em&gt;, we are telling Node to execute this code as soon as the &lt;em&gt;Promise&lt;/em&gt; finishes successfully.&lt;/p&gt;

&lt;p&gt;In other words, we are saying that, as soon as the &lt;a href="https://developer.mozilla.org/pt-BR/docs/Web/JavaScript/Reference/Global_Objects/Promise/resolve" rel="noopener noreferrer"&gt;resolve()&lt;/a&gt; method of the &lt;em&gt;Promise&lt;/em&gt; is executed, Node should execute our &lt;em&gt;console.log(2)&lt;/em&gt; instruction.&lt;/p&gt;

&lt;p&gt;But, that's not exactly how it works.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Every &lt;em&gt;Promise&lt;/em&gt; &lt;em&gt;callback&lt;/em&gt; is sent &lt;strong&gt;instantly&lt;/strong&gt; to a special queue called &lt;strong&gt;Micro Tasks Queue&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tx6tt1l0mt6yd31vdqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tx6tt1l0mt6yd31vdqe.png" alt="Pushes Promise callback to MicroTasks queue" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Recapping
&lt;/h3&gt;

&lt;p&gt;This is the current state of the script execution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1l4p3kq8lcvzxshxwgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1l4p3kq8lcvzxshxwgk.png" alt="Current state of the script execution" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything that happened so far almost certainly took less than 10 milliseconds, which is why the &lt;strong&gt;Timer&lt;/strong&gt; has not yet added the &lt;code&gt;console.log(1)&lt;/code&gt; instruction to the &lt;em&gt;Macro Tasks Queue&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But, by using &lt;em&gt;libuv&lt;/em&gt;, the &lt;em&gt;Main thread&lt;/em&gt; can continue working normally, in a &lt;strong&gt;non-blocking&lt;/strong&gt; manner.&lt;/p&gt;

&lt;p&gt;Okay, but you might be wondering: what has the &lt;em&gt;Event Loop&lt;/em&gt; been doing all this time?&lt;/p&gt;

&lt;h3&gt;
  
  
  Event Loop
&lt;/h3&gt;

&lt;p&gt;Throughout this process, with each new line interpreted from the file, the &lt;em&gt;Event Loop&lt;/em&gt; performed a very important, albeit repetitive, job:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check if the &lt;em&gt;Call Stack&lt;/em&gt; was empty.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdff0sbaxt7gnbqe2i0ch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdff0sbaxt7gnbqe2i0ch.png" alt="Event Loop asking if the Call Stack is empty" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the answer was always: &lt;strong&gt;NO!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At no time during the execution of this script was the &lt;em&gt;Call Stack&lt;/em&gt; empty, so our friend &lt;em&gt;Event Loop&lt;/em&gt; will keep waiting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Emptying the &lt;em&gt;Call Stack&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Now, the &lt;em&gt;Main Thread&lt;/em&gt; interprets the last instruction of the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7gq0vg8ceov0ni2zmj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7gq0vg8ceov0ni2zmj8.png" alt="Main Thread consumes the last call from the call stack" width="555" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a simple instruction that displays a value on the &lt;em&gt;console&lt;/em&gt;; its output is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And, for the first time, the &lt;em&gt;Call Stack&lt;/em&gt; is empty!&lt;/p&gt;

&lt;h3&gt;
  
  
  Event Loop
&lt;/h3&gt;

&lt;p&gt;Now, the most awaited moment for the &lt;em&gt;Event Loop&lt;/em&gt;, the moment when it has the power to act!&lt;/p&gt;

&lt;p&gt;It only checks the other queues when the &lt;em&gt;Call Stack&lt;/em&gt; is empty!&lt;/p&gt;

&lt;p&gt;At each loop, it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process all tasks in the &lt;em&gt;Micro Tasks&lt;/em&gt; queue

&lt;ul&gt;
&lt;li&gt;Adding them to the &lt;em&gt;Call Stack&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Process 1 task from the &lt;em&gt;Macro Tasks&lt;/em&gt; queue

&lt;ul&gt;
&lt;li&gt;Adding it to the &lt;em&gt;Call Stack&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Wait for the &lt;em&gt;Call Stack&lt;/em&gt; to empty&lt;/li&gt;

&lt;li&gt;Repeat&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The &lt;em&gt;Main Thread&lt;/em&gt; executes every instruction in the main context.&lt;/strong&gt;&lt;/p&gt;
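A small runnable sketch (assuming Node) that exercises these rules end to end: every micro-task drains before the first macro-task gets its turn.

```javascript
setTimeout(() => console.log("macro 1"), 0);
setTimeout(() => console.log("macro 2"), 0);
Promise.resolve().then(() => console.log("micro 1"));
Promise.resolve().then(() => console.log("micro 2"));
console.log("sync");

// Prints:
// sync
// micro 1
// micro 2
// macro 1
// macro 2
```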

&lt;p&gt;Now, continuing the execution of the example code:&lt;/p&gt;

&lt;h4&gt;
  
  
  Micro Tasks
&lt;/h4&gt;

&lt;p&gt;When the &lt;em&gt;Call Stack&lt;/em&gt; becomes empty, it means that the &lt;em&gt;Main Thread&lt;/em&gt; is not executing anything.&lt;/p&gt;

&lt;p&gt;Then, the &lt;em&gt;Event Loop&lt;/em&gt; consumes all tasks from the &lt;em&gt;Micro Tasks Queue&lt;/em&gt; and adds them to the &lt;em&gt;Call Stack&lt;/em&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lyjf1z0m4ohlbsg6ou7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lyjf1z0m4ohlbsg6ou7.png" alt="Event Loop consuming function from the micro tasks queue" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, the &lt;em&gt;Main Thread&lt;/em&gt; consumes the instruction from the &lt;em&gt;Call Stack&lt;/em&gt; and executes it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs02h5a9oejxr7ub701c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs02h5a9oejxr7ub701c.png" alt="Main Thread consuming call stack" width="485" height="214"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Writes 2 to the console&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the &lt;em&gt;Call Stack&lt;/em&gt; becomes empty again.&lt;/p&gt;

&lt;p&gt;Then, the &lt;em&gt;Event Loop&lt;/em&gt; looks for more tasks in the &lt;em&gt;Micro Tasks&lt;/em&gt; queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztyrlifqa2zdf8izsqg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztyrlifqa2zdf8izsqg1.png" alt="Current state of the application" width="641" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As it is empty, it finishes its work in the &lt;em&gt;Micro Tasks Queue&lt;/em&gt; and starts consuming the &lt;em&gt;Macro Tasks Queue&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Macro Tasks
&lt;/h4&gt;

&lt;p&gt;Now, suppose the 10-millisecond interval has passed and the &lt;em&gt;Timer&lt;/em&gt; has inserted the console.log(1) callback into the &lt;em&gt;Macro Tasks&lt;/em&gt; queue. The &lt;em&gt;Event Loop&lt;/em&gt; will then transfer one instruction from the &lt;em&gt;Macro Tasks Queue&lt;/em&gt; to the &lt;em&gt;Call Stack&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm31nxba5ktgh0ia781k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm31nxba5ktgh0ia781k.png" alt="Event Loop consuming Macro Tasks queue" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, the &lt;em&gt;Main Thread&lt;/em&gt; consumes the last instruction from the &lt;em&gt;Call Stack&lt;/em&gt; and executes it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7sqoyvdrk692lybb9im.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7sqoyvdrk692lybb9im.png" alt="Main Thread consuming the Call Stack" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Writes 1 to the console&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Important point:&lt;/strong&gt; If there were still instructions in the Micro Tasks queue, they would be processed first. But since every queue is now empty, the program's execution is heading towards its end.&lt;/p&gt;
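To see micro-tasks being drained between macro-tasks, here is a small variation (assuming Node 11+, where the micro-task queue is emptied after each timer callback):

```javascript
setTimeout(() => {
  console.log("macro A");
  // This micro-task is queued while macro A is running...
  Promise.resolve().then(() => console.log("micro from A"));
}, 0);

// ...and it jumps ahead of this already-scheduled macro-task.
setTimeout(() => console.log("macro B"), 0);

// Prints: macro A, micro from A, macro B
```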

&lt;p&gt;That's why the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Will result in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;We've reached the end - Arlindo Cruz&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now you understand what happens behind the scenes of JavaScript. The &lt;strong&gt;Event Loop&lt;/strong&gt; manages the queues of &lt;em&gt;micro&lt;/em&gt; and &lt;em&gt;macro&lt;/em&gt; tasks and, with that, ensures that asynchronous instructions are executed harmoniously in the context of the main thread.&lt;/p&gt;

&lt;p&gt;Understanding how it works helps us write more efficient code and better predict the behavior of our applications.&lt;/p&gt;

&lt;p&gt;Next time you're writing JavaScript code, I hope you remember everything that happens behind the scenes of the &lt;strong&gt;Event Loop&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;See you later!&lt;/p&gt;

&lt;h2&gt;
  
  
  Glossary
&lt;/h2&gt;

&lt;h3&gt;
  
  
  JavaScript
&lt;/h3&gt;

&lt;p&gt;It's a high-level, dynamic, interpreted programming language that supports multiple programming paradigms (functional, imperative, object-oriented).&lt;/p&gt;

&lt;p&gt;It's a "medium" of conversation between something you want to do and what the computer executes. &lt;/p&gt;

&lt;h3&gt;
  
  
  ECMAScript
&lt;/h3&gt;

&lt;p&gt;It's a &lt;a href="https://tc39.es/ecma262/2023/" rel="noopener noreferrer"&gt;set of rules&lt;/a&gt; that defines how JavaScript should work, it defines the language standards (syntax, data types, control structures, and operators), and JavaScript is the implementation of these standards.&lt;/p&gt;

&lt;p&gt;If you want to understand better, read &lt;a href="https://hcode.com.br/blog/o-que-e-ecmascript-e-o-mesmo-que-javascript" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  JavaScript Runtime
&lt;/h3&gt;

&lt;p&gt;It's the &lt;strong&gt;environment&lt;/strong&gt; that executes JavaScript code.&lt;/p&gt;

&lt;p&gt;When writing JavaScript code, you write instructions (which follow the rules defined by ECMAScript), but to execute these instructions, you need a &lt;em&gt;Runtime&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It's as if JavaScript were a recipe and the &lt;em&gt;Runtime&lt;/em&gt; was a cook who executes the recipe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node&lt;/a&gt;, &lt;a href="https://v8.dev/" rel="noopener noreferrer"&gt;V8&lt;/a&gt; and &lt;a href="https://firefox-source-docs.mozilla.org/js/index.html" rel="noopener noreferrer"&gt;SpiderMonkey&lt;/a&gt; are the most well-known JavaScript &lt;em&gt;runtimes&lt;/em&gt; in the world.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>softwareengineering</category>
      <category>programming</category>
    </item>
    <item>
      <title>The Magic of the Event Loop</title>
      <dc:creator>Caio Borghi</dc:creator>
      <pubDate>Wed, 02 Aug 2023 00:32:31 +0000</pubDate>
      <link>https://dev.to/ocodista/a-magia-do-event-loop-in1</link>
      <guid>https://dev.to/ocodista/a-magia-do-event-loop-in1</guid>
      <description>&lt;p&gt;O que acontece quando o seguinte código é executado no Node.js?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If your answer was different from:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Talvez você não entenda muito bem a ordem de execução do JavaScript e o funcionamento do Event Loop.&lt;/p&gt;

&lt;p&gt;Sem problemas, vou tentar explicar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Antes de tudo&lt;/strong&gt;, se você tem dúvidas sobre o que é:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;JavaScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ECMAScript&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;JavaScript &lt;em&gt;Runtime&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recommend reading the &lt;strong&gt;glossary&lt;/strong&gt; before continuing.&lt;/p&gt;

&lt;p&gt;Now, let's walk through what happens at each step of this JavaScript code's execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Thread
&lt;/h2&gt;

&lt;p&gt;Node interprets the JavaScript file from top to bottom, line by line, on a single thread.&lt;/p&gt;

&lt;h3&gt;
  
  
  Executing setTimeout()
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;main thread&lt;/em&gt; interprets the first instruction and pushes it onto the &lt;em&gt;Call Stack&lt;/em&gt;, where it is executed and then popped off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8ry8ueju3rhnvuvnwnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb8ry8ueju3rhnvuvnwnq.png" alt="Visualização da Main Thread executando a primeira chamada de função: setTimeout(() =&amp;gt; console.log(1), 10)" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;setTimeout&lt;/code&gt; instruction schedules a function to run after a given number of milliseconds.&lt;/p&gt;

&lt;p&gt;Node implements it on top of the &lt;a href="https://libuv.org/" rel="noopener noreferrer"&gt;&lt;code&gt;libuv&lt;/code&gt;&lt;/a&gt; library, which creates a &lt;strong&gt;Timer&lt;/strong&gt; without blocking the main &lt;em&gt;thread&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrcidhlqrcjp8xk4swtc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmrcidhlqrcjp8xk4swtc.png" alt="A main thread executa a função setTimeout, que inicia um cronômetro em uma nova thread, através de uma biblioteca chamada libuv. Ao final do cronometro, o callback será adicionado à fila de macro-tarefas" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After starting the &lt;em&gt;Timer&lt;/em&gt;, the main &lt;em&gt;thread&lt;/em&gt; pops the instruction off the &lt;em&gt;Call Stack&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitlk0d6813hhs7d4uyum.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitlk0d6813hhs7d4uyum.png" alt="Main Thread pops from call stack" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the interval ends, the timer adds the &lt;em&gt;setTimeout&lt;/em&gt; &lt;em&gt;callback&lt;/em&gt; to the &lt;strong&gt;macro tasks&lt;/strong&gt; queue.&lt;/p&gt;
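&lt;p&gt;This non-blocking scheduling is easy to observe: even a 0 ms &lt;code&gt;setTimeout&lt;/code&gt; callback only runs after the synchronous code finishes. A minimal sketch (the &lt;code&gt;order&lt;/code&gt; array is just for illustration):&lt;/p&gt;

```javascript
const order = [];

// Even with a 0 ms delay, the callback waits in the macro tasks queue
// until the synchronous code has finished and the call stack is empty.
setTimeout(() => order.push("timer"), 0);

order.push("sync"); // runs first, on the main thread

setTimeout(() => {
  console.log(order); // ["sync", "timer"]
}, 10);
```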

&lt;h3&gt;
  
  
  Executing Promise.resolve().then()
&lt;/h3&gt;

&lt;p&gt;While the &lt;em&gt;libuv&lt;/em&gt; &lt;em&gt;Timer&lt;/em&gt; waits out the 10 ms, the &lt;em&gt;Main thread&lt;/em&gt; interprets the next line of the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yphjc4rv34lmr0y3h84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2yphjc4rv34lmr0y3h84.png" alt="Main thread consome a próxima instrução da call stack" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next instruction up is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main &lt;em&gt;thread&lt;/em&gt; will execute the &lt;a href="https://developer.mozilla.org/pt-BR/docs/Web/JavaScript/Reference/Global_Objects/Promise" rel="noopener noreferrer"&gt;Promise&lt;/a&gt;.resolve().then() call.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A &lt;em&gt;Promise&lt;/em&gt; is an object that represents the completion or failure of an asynchronous operation.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By calling resolve() with no arguments, we create an already-resolved &lt;em&gt;Promise&lt;/em&gt; that carries no value, which is fine for our purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusel8ojsgldntndmdw1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fusel8ojsgldntndmdw1z.png" alt="Executando a função Promise.resolve()" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For now, we are more interested in the behavior of a &lt;em&gt;Promise&lt;/em&gt;'s &lt;em&gt;.then&lt;/em&gt; method.&lt;/p&gt;

&lt;p&gt;By passing &lt;code&gt;() =&amp;gt; console.log(2)&lt;/code&gt; as the callback to our &lt;em&gt;Promise&lt;/em&gt;, we are telling Node to run this code as soon as the &lt;em&gt;Promise&lt;/em&gt; resolves successfully.&lt;/p&gt;

&lt;p&gt;In other words, we are saying that as soon as the Promise's &lt;a href="https://developer.mozilla.org/pt-BR/docs/Web/JavaScript/Reference/Global_Objects/Promise/resolve" rel="noopener noreferrer"&gt;resolve()&lt;/a&gt; method runs, Node should execute our &lt;em&gt;console.log(2)&lt;/em&gt; instruction.&lt;/p&gt;

&lt;p&gt;But that is not quite how it works.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A &lt;em&gt;Promise&lt;/em&gt; callback never runs inline: it is sent to a special queue called the &lt;strong&gt;Micro Tasks Queue&lt;/strong&gt; (here &lt;strong&gt;immediately&lt;/strong&gt;, since the Promise is already resolved).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tx6tt1l0mt6yd31vdqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tx6tt1l0mt6yd31vdqe.png" alt="Pushes Promise callback to MicroTasks queue" width="800" height="589"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Recap
&lt;/h3&gt;

&lt;p&gt;This is the current state of the script's execution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1l4p3kq8lcvzxshxwgk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1l4p3kq8lcvzxshxwgk.png" alt="Estado atual da execução do script" width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything so far certainly took less than 10 milliseconds, which is why the &lt;strong&gt;Timer&lt;/strong&gt; has not yet added the &lt;code&gt;console.log(1)&lt;/code&gt; instruction to the &lt;em&gt;Macro Tasks Queue&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;But, thanks to &lt;em&gt;libuv&lt;/em&gt;, the &lt;em&gt;Main thread&lt;/em&gt; can keep working normally, in a &lt;strong&gt;non-blocking&lt;/strong&gt; fashion.&lt;/p&gt;

&lt;p&gt;Ok, you may be wondering: what has the Event Loop been doing all this time?&lt;/p&gt;

&lt;h3&gt;
  
  
  Event Loop
&lt;/h3&gt;

&lt;p&gt;Throughout this whole process, each time a new line of the file was interpreted, the &lt;em&gt;Event Loop&lt;/em&gt; performed one very important, if repetitive, job:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check whether the &lt;em&gt;Call Stack&lt;/em&gt; was empty.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdff0sbaxt7gnbqe2i0ch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdff0sbaxt7gnbqe2i0ch.png" alt="Event Loop perguntando se a Call Stack está vazia" width="800" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the answer was always: &lt;strong&gt;NO!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At no point during the execution of this script was the &lt;em&gt;Call Stack&lt;/em&gt; empty, so our friend the &lt;em&gt;Event Loop&lt;/em&gt; kept on waiting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Emptying the &lt;em&gt;Call Stack&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Now the &lt;em&gt;Main Thread&lt;/em&gt; interprets the last instruction in the file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7gq0vg8ceov0ni2zmj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7gq0vg8ceov0ni2zmj8.png" alt="Main Thread consome a última chamada da pilha de chamadas" width="555" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is a simple instruction that prints a value to the &lt;em&gt;console&lt;/em&gt;; its output is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And, for the first time, the &lt;em&gt;Call Stack&lt;/em&gt; is empty!&lt;/p&gt;

&lt;h3&gt;
  
  
  Event Loop
&lt;/h3&gt;

&lt;p&gt;Now comes the moment the &lt;em&gt;Event Loop&lt;/em&gt; has been waiting for, the moment it finally gets to act!&lt;/p&gt;

&lt;p&gt;It only checks the other queues once the &lt;em&gt;Call Stack&lt;/em&gt; is empty!&lt;/p&gt;

&lt;p&gt;On each loop iteration, it will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Process every task in the &lt;em&gt;Micro Tasks&lt;/em&gt; queue

&lt;ul&gt;
&lt;li&gt;Pushing each one onto the &lt;em&gt;Call Stack&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Process 1 task from the &lt;em&gt;Macro Tasks&lt;/em&gt; queue

&lt;ul&gt;
&lt;li&gt;Pushing it onto the &lt;em&gt;Call Stack&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Wait for the &lt;em&gt;Call Stack&lt;/em&gt; to empty&lt;/li&gt;

&lt;li&gt;Repeat&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The &lt;em&gt;Main Thread&lt;/em&gt; executes every instruction in the main context.&lt;/strong&gt;&lt;/p&gt;
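&lt;p&gt;The loop described above can be sketched as a toy simulation, using plain arrays to stand in for the real queues (an illustrative model of the scheduling order, not Node's actual implementation):&lt;/p&gt;

```javascript
// Toy model of one Event Loop iteration for our three-line script.
const microTasks = [];
const macroTasks = [];
const output = [];

// What the script scheduled:
macroTasks.push(() => output.push(1)); // the setTimeout callback
microTasks.push(() => output.push(2)); // the Promise callback
output.push(3);                        // the synchronous console.log(3)

// One simplified iteration: drain ALL micro tasks, then run ONE macro task.
while (microTasks.length) microTasks.shift()();
if (macroTasks.length) macroTasks.shift()();

console.log(output); // [3, 2, 1]
```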

&lt;p&gt;Now, back to the execution of our example code:&lt;/p&gt;

&lt;h4&gt;
  
  
  Micro Tasks
&lt;/h4&gt;

&lt;p&gt;When the &lt;em&gt;Call Stack&lt;/em&gt; is empty, it means the &lt;em&gt;Main Thread&lt;/em&gt; is not executing anything.&lt;/p&gt;

&lt;p&gt;So the &lt;em&gt;Event Loop&lt;/em&gt; takes every task from the &lt;em&gt;Micro Tasks Queue&lt;/em&gt; and pushes it onto the &lt;em&gt;Call Stack&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lyjf1z0m4ohlbsg6ou7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lyjf1z0m4ohlbsg6ou7.png" alt="Event Loop consuming the function from the micro tasks queue" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, the &lt;em&gt;Main Thread&lt;/em&gt; takes the instruction from the &lt;em&gt;Call Stack&lt;/em&gt; and executes it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs02h5a9oejxr7ub701c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzs02h5a9oejxr7ub701c.png" alt="Main Thread consuming the call stack" width="485" height="214"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Escreve 2 no console&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the &lt;em&gt;Call Stack&lt;/em&gt; is empty again.&lt;/p&gt;

&lt;p&gt;So the &lt;em&gt;Event Loop&lt;/em&gt; looks for more tasks in the &lt;em&gt;Micro Tasks&lt;/em&gt; queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztyrlifqa2zdf8izsqg1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztyrlifqa2zdf8izsqg1.png" alt="Estado atual da aplicação" width="641" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since it is empty, the &lt;em&gt;Event Loop&lt;/em&gt; is done with the &lt;em&gt;Micro Tasks Queue&lt;/em&gt; and starts consuming the &lt;em&gt;Macro Tasks Queue&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Macro Tasks
&lt;/h4&gt;

&lt;p&gt;Now, assuming the 10 millisecond interval has already elapsed and the &lt;em&gt;Timer&lt;/em&gt; has placed the console.log(1) callback in the &lt;em&gt;Macro Tasks&lt;/em&gt; queue, the &lt;em&gt;Event Loop&lt;/em&gt; transfers 1 instruction from the &lt;em&gt;Macro Tasks Queue&lt;/em&gt; to the &lt;em&gt;Call Stack&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm31nxba5ktgh0ia781k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm31nxba5ktgh0ia781k.png" alt="Event Loop consumindo fila de Macro Tasks" width="800" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then the &lt;em&gt;Main Thread&lt;/em&gt; takes the last instruction from the &lt;em&gt;Call Stack&lt;/em&gt; and executes it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7sqoyvdrk692lybb9im.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7sqoyvdrk692lybb9im.png" alt="Main Thread consuming the Call Stack" width="800" height="402"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// Escreve 1 no console&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key point:&lt;/strong&gt; If there were still instructions in the Micro Tasks queue, they would be processed first. But since everything is empty, the program's execution comes to an end.&lt;/p&gt;
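&lt;p&gt;This ordering is observable: a microtask queued from inside a &lt;code&gt;setTimeout&lt;/code&gt; callback is drained before the next &lt;code&gt;setTimeout&lt;/code&gt; callback runs. A minimal sketch (the &lt;code&gt;log&lt;/code&gt; array is just for illustration; this relies on modern runtime behavior, where microtasks are drained after each macro task callback):&lt;/p&gt;

```javascript
const log = [];

setTimeout(() => {
  log.push("macro 1");
  // Queued from inside a macro task: drained before the NEXT macro task runs.
  Promise.resolve().then(() => log.push("micro inside macro 1"));
}, 0);

setTimeout(() => log.push("macro 2"), 0);

setTimeout(() => {
  console.log(log); // ["macro 1", "micro inside macro 1", "macro 2"]
}, 20);
```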

&lt;p&gt;This is why the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Will result in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="mi"&gt;3&lt;/span&gt;
&lt;span class="mi"&gt;2&lt;/span&gt;
&lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;We have reached the end - Arlindo Cruz&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now you understand what happens behind the scenes in JavaScript. The &lt;strong&gt;Event Loop&lt;/strong&gt; manages the &lt;em&gt;micro&lt;/em&gt; and &lt;em&gt;macro&lt;/em&gt; task queues, ensuring that asynchronous instructions run in harmony within the main &lt;em&gt;thread&lt;/em&gt;'s context.&lt;/p&gt;

&lt;p&gt;Understanding how it works helps us write more efficient code and better predict the behavior of our applications.&lt;/p&gt;

&lt;p&gt;Next time you write JavaScript, I hope you remember everything that happens behind the scenes of the &lt;strong&gt;Event Loop&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;See you around!&lt;/p&gt;

&lt;h2&gt;
  
  
  Glossary
&lt;/h2&gt;

&lt;h3&gt;
  
  
  JavaScript
&lt;/h3&gt;

&lt;p&gt;A high-level, dynamic, interpreted programming language that supports multiple programming paradigms (functional, imperative, object-oriented).&lt;/p&gt;

&lt;p&gt;É um "meio" de conversa entre algo que você quer fazer e que o computador executa. &lt;/p&gt;

&lt;h3&gt;
  
  
  ECMAScript
&lt;/h3&gt;

&lt;p&gt;It is a &lt;a href="https://tc39.es/ecma262/2023/" rel="noopener noreferrer"&gt;set of rules&lt;/a&gt; that defines how JavaScript must behave: it specifies the language's standards (syntax, data types, control structures, and operators), and JavaScript is the implementation of those standards.&lt;/p&gt;

&lt;p&gt;If you want to understand this better, read &lt;a href="https://hcode.com.br/blog/o-que-e-ecmascript-e-o-mesmo-que-javascript" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  JavaScript Runtime
&lt;/h3&gt;

&lt;p&gt;It is the environment that actually &lt;strong&gt;executes&lt;/strong&gt; JavaScript code.&lt;/p&gt;

&lt;p&gt;When you write JavaScript, you write instructions (following the rules defined by ECMAScript), but to execute those instructions you need a &lt;em&gt;Runtime&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;It is as if JavaScript were a recipe and the &lt;em&gt;Runtime&lt;/em&gt; were the cook who prepares it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node&lt;/a&gt;, &lt;a href="https://v8.dev/" rel="noopener noreferrer"&gt;V8&lt;/a&gt; e &lt;a href="https://firefox-source-docs.mozilla.org/js/index.html" rel="noopener noreferrer"&gt;SpiderMonkey&lt;/a&gt; são os &lt;em&gt;runtimes&lt;/em&gt; JavaScript mais conhecidos do mundo.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
