<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Lucas Weis Polesello</title>
    <description>The latest articles on DEV Community by Lucas Weis Polesello (@lukas8219).</description>
    <link>https://dev.to/lukas8219</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F929458%2F73aa5911-c4aa-4118-9f76-2af9af4eeea4.jpeg</url>
      <title>DEV Community: Lucas Weis Polesello</title>
      <link>https://dev.to/lukas8219</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/lukas8219"/>
    <language>en</language>
    <item>
      <title>The Zalgo Effect and Resource Leakage - A Case</title>
      <dc:creator>Lucas Weis Polesello</dc:creator>
      <pubDate>Sun, 17 Mar 2024 23:29:58 +0000</pubDate>
      <link>https://dev.to/lukas8219/the-zalgo-effect-and-resource-leakage-a-case-4epm</link>
      <guid>https://dev.to/lukas8219/the-zalgo-effect-and-resource-leakage-a-case-4epm</guid>
      <description>&lt;h2&gt;
  
  
  First of all, what's the &lt;code&gt;Zalgo Effect&lt;/code&gt;?
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Zalgo Effect&lt;/code&gt; is a term used to describe the unexpected outcomes of mixing &lt;code&gt;sync&lt;/code&gt; and &lt;code&gt;async&lt;/code&gt; JavaScript code - meaning if you mix the two approaches, &lt;em&gt;~SOMETHING~&lt;/em&gt; weird will happen.&lt;br&gt;
It's one of those things you don't quite understand until you see it in a real production system.&lt;/p&gt;

&lt;p&gt;Well, on the internet and in pop culture it's more than that - think of Zalgo as a monster.&lt;/p&gt;

&lt;h2&gt;
  
  
  So what does it have to do with Resource Leakage?
&lt;/h2&gt;

&lt;p&gt;Occasionally, our SRE team received PagerDuty alerts claiming our services were restarting and unable to work properly due to &lt;code&gt;Error: No channels left to allocate&lt;/code&gt; - ie the RabbitMQ connections were maxing out their channel allocation. (See the RabbitMQ documentation on Channels and Connections for reference.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk74ctn4kztkcn2sfo0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk74ctn4kztkcn2sfo0s.png" alt="No Channels Error" width="800" height="131"&gt;&lt;/a&gt;&lt;br&gt;
(Of course, the screenshot is just an old capture)&lt;/p&gt;

&lt;p&gt;It was clear some code was leaking channel creations. No one knew what it could be - but I remembered hearing about this &lt;code&gt;Zalgo Effect&lt;/code&gt; somewhere.&lt;/p&gt;

&lt;p&gt;This "somewhere" is called &lt;code&gt;Node.js Design Patterns: Design and Implement Production-grade Node.js Applications Using Proven Patterns and Techniques&lt;/code&gt; - Packet book.&lt;/p&gt;

&lt;h2&gt;
  
  
  How was I so sure this "Zalgo" was the culprit?
&lt;/h2&gt;

&lt;p&gt;The service throwing the error was only responsible for fanning out a couple of messages to a lot of other services - so reproducing was as easy as creating our internal &lt;code&gt;Queue&lt;/code&gt; object and running N promises concurrently to publish a message, as the first thing the script ran.&lt;br&gt;
The RabbitMQ Management UI showed that we created roughly one channel for each promise.&lt;/p&gt;

&lt;h2&gt;
  
  
  But why did it only happen in some scenarios?
&lt;/h2&gt;

&lt;p&gt;That's where the &lt;code&gt;Zalgo Effect&lt;/code&gt; pops in.&lt;/p&gt;

&lt;p&gt;The PubSub code was built back in ~2015 - Node 4 - when the callback style was the go-to. Our Engineers created the &lt;code&gt;Queue&lt;/code&gt; abstraction, which by itself handled almost 50% of our Event-Driven Architecture. It was a very naive implementation - pointing to a single RabbitMQ node.&lt;/p&gt;

&lt;p&gt;Our legacy code used &lt;code&gt;async.series&lt;/code&gt; everywhere - so messages were always published sequentially. When parallel processing was added later, issues started to appear more frequently - but it was always just a silent issue.&lt;/p&gt;

&lt;p&gt;So the code assumed the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Assert the exchange, queues and necessary resources - using one channel - which we could call &lt;code&gt;consumeChannel&lt;/code&gt;.

&lt;ol&gt;
&lt;li&gt;The consume channel is created whenever the connection is made - and it is always created.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;Our &lt;code&gt;confirmChannel&lt;/code&gt; - ie the channel we used to &lt;code&gt;publish&lt;/code&gt; events - was lazily created, only coming into existence when we needed to publish a message via that connection.

&lt;ol&gt;
&lt;li&gt;This code mixed &lt;code&gt;async&lt;/code&gt; and &lt;code&gt;sync&lt;/code&gt; code heavily&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Imagine the following:&lt;br&gt;
We call &lt;code&gt;assertConfirmChannel&lt;/code&gt;, which calls the &lt;code&gt;AmqpLibConfirmChannelSingleton&lt;/code&gt;, which then creates the channel. If the Singleton still has no inner instance to refer to, it creates a promise for the &lt;code&gt;confirmChannel&lt;/code&gt; and attaches the error listeners. Code example below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcxho8rvdfdaxo1t7i1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcxho8rvdfdaxo1t7i1g.png" alt="Confirm1" width="800" height="90"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18pebidl2y472d2utl3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18pebidl2y472d2utl3f.png" alt="Confirm2" width="738" height="126"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws63ah4wquuonxxqw58o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws63ah4wquuonxxqw58o.png" alt="Confirm3" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What happens if two concurrent promises reach &lt;code&gt;AmqpConfirmChannelSingleton.getInstance&lt;/code&gt; at the same time - in the same event-loop tick?
&lt;/h3&gt;

&lt;p&gt;The code would reach this line from &lt;code&gt;amqpLibChannelDecorator.createConfirmChannel&lt;/code&gt; N times while the &lt;code&gt;Promise&lt;/code&gt; was not yet resolved.&lt;br&gt;
That means N promises would be created - mistakenly creating useless channels and de-referencing them.&lt;/p&gt;

&lt;p&gt;The NodeJS GC won't collect these - since a channel is just an abstraction and still has a "reference" inside the connection.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This&lt;/em&gt; is where the code was &lt;em&gt;leaking&lt;/em&gt; channels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixing the problem
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The hotfix
&lt;/h3&gt;

&lt;p&gt;Due to &lt;em&gt;production&lt;/em&gt; hours - our hot-fix was to simply await the first promise and then fan out the other promises.&lt;br&gt;
The real fix - below - was shipped with the PubSub refactoring.&lt;/p&gt;

&lt;h3&gt;
  
  
  How did we fix this in the long term?
&lt;/h3&gt;

&lt;p&gt;If you want a real solution, here's what the V2 looked like - the idea is to create the Promises and assign them to variables, instead of &lt;code&gt;await&lt;/code&gt;ing them immediately. Example below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notice how &lt;code&gt;this.connection&lt;/code&gt; holds an &lt;em&gt;unawaited&lt;/em&gt; promise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkars02iwdeh5m7ecd34.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkars02iwdeh5m7ecd34.png" alt="Promises1" width="800" height="184"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This &lt;em&gt;easily&lt;/em&gt; fixes the problem - by storing the promise in a variable and checking for the variable's existence.&lt;/p&gt;
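&lt;p&gt;A self-contained sketch of that idea - illustrative names, with a counter instead of a real amqplib connection:&lt;/p&gt;

```javascript
// Promise-caching sketch: store the promise itself, not its result.
class Queue {
  constructor() {
    this.connection = null; // will hold a Promise, not a connection
    this.connectCalls = 0;
  }

  connect() {
    // The promise is assigned synchronously, so every caller in the
    // same tick - and every later caller - gets the very same promise.
    if (!this.connection) {
      this.connectCalls += 1;
      this.connection = this.createConnection();
    }
    return this.connection;
  }

  async createConnection() {
    return { ok: true }; // stand-in for amqplib's connect()
  }
}

const queue = new Queue();
Promise.all([queue.connect(), queue.connect(), queue.connect()]).then(() => {
  console.log(`connect calls: ${queue.connectCalls}`); // 1
});
```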

&lt;p&gt;For a more robust style - where you actually need to initialize a couple of resources - you could do something like the example below, which applies the exact same idea.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu05m9ovw0znk2grn091.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feu05m9ovw0znk2grn091.png" alt="Promises2" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a function to execute the entire Promise.&lt;/li&gt;
&lt;li&gt;Set up some reference to it&lt;/li&gt;
&lt;li&gt;If requested again, just reuse the same Promise.&lt;/li&gt;
&lt;/ol&gt;
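&lt;p&gt;The three steps above, sketched with hypothetical stand-ins for the amqplib connection and confirm channel:&lt;/p&gt;

```javascript
class PubSub {
  // 1. One function runs the entire initialization...
  async initialize() {
    this.connection = await fakeConnect();     // e.g. amqplib connect
    this.confirmChannel = await fakeChannel(); // e.g. createConfirmChannel
    return this;
  }

  // 2. ...but callers go through here, which stores a reference to the
  // in-flight promise on the first call...
  ready() {
    if (!this.initPromise) {
      this.initPromise = this.initialize();
    }
    // 3. ...and every subsequent caller reuses that same promise.
    return this.initPromise;
  }

  async publish(message) {
    await this.ready();
    return { channel: this.confirmChannel, message };
  }
}

// Stand-ins for the real async resources (assumptions, not real APIs):
let connects = 0;
async function fakeConnect() { connects += 1; return { conn: connects }; }
async function fakeChannel() { return { ch: 1 }; }

const pubsub = new PubSub();
Promise.all([pubsub.publish('a'), pubsub.publish('b')]).then(() => {
  console.log(`connections opened: ${connects}`); // 1
});
```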

&lt;h2&gt;
  
  
  Ok - but why does it fix the problem?
&lt;/h2&gt;

&lt;p&gt;NodeJS runs upon the famous Event-loop.&lt;/p&gt;

&lt;p&gt;TL;DR: It is an ever-running &lt;code&gt;while(true)&lt;/code&gt; with some well-defined and ordered steps from a queue of functions to execute.&lt;/p&gt;

&lt;p&gt;Basically, what happens in this example is that too much synchronous code executes without ever reaching the &lt;code&gt;Poll&lt;/code&gt; phase - where the IO callbacks are executed. Every concurrent caller runs its synchronous code in the same tick, before the channel-creation promise has had a chance to resolve - so a check against the resolved value always misses, while a check against the synchronously-assigned promise does not.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thanks for reading it and leave a comment!
&lt;/h3&gt;

</description>
    </item>
    <item>
      <title>Redis is more than a Cache #1 - Delaying Jobs</title>
      <dc:creator>Lucas Weis Polesello</dc:creator>
      <pubDate>Fri, 15 Mar 2024 23:18:06 +0000</pubDate>
      <link>https://dev.to/lukas8219/redis-is-more-than-a-cache-1-delaying-jobs-139</link>
      <guid>https://dev.to/lukas8219/redis-is-more-than-a-cache-1-delaying-jobs-139</guid>
      <description>&lt;p&gt;My current company - &lt;a href="https://www.lumahealth.io/"&gt;Luma Health Inc&lt;/a&gt; - has an &lt;code&gt;Event-Driven Architecture&lt;/code&gt; where all of our backend systems interact via async messaging/jobs. Thus our backbone is sustained by an AMQP broker - RabbitMQ - which routes the jobs to interested services.&lt;/p&gt;

&lt;p&gt;Since our jobs are very critical - we cannot tolerate failures AND should design the system to be more resilient - because, well... we don't want a patient not being notified of their appointment, appointments not being created when they should, or patients showing up at facilities for appointments they were never notified had been scheduled.&lt;/p&gt;

&lt;p&gt;Besides infra and product reliability - some use cases need &lt;code&gt;postponing&lt;/code&gt; - maybe reaching out to an external system that's offline or not responding, maybe an error that needs a retry - who knows?&lt;/p&gt;

&lt;p&gt;The fact is, delaying/retrying is a very frequent requirement in Event-Driven Architectures. So a service responsible for doing it was created - and it worked fine.&lt;/p&gt;

&lt;p&gt;But - as the company sold bigger contracts and grew in scale - this system became stressed out and unreliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Unreliable Design
&lt;/h2&gt;

&lt;p&gt;Before giving the symptoms, let's talk about the organism itself - the service's old design.&lt;/p&gt;

&lt;p&gt;The design was really straightforward - if our service handlers asked for a postpone OR we failed to send the message to RabbitMQ - we would just insert the Job's JSON object into a Redis &lt;code&gt;Sorted Set&lt;/code&gt;, using the &lt;code&gt;Score&lt;/code&gt; as the timestamp at which it was meant to be retried/published again.&lt;/p&gt;

&lt;p&gt;To publish the postponed messages back into RabbitMQ, a job was triggered every 5 seconds - doing the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Read from a &lt;code&gt;set&lt;/code&gt; key containing all the existing &lt;code&gt;sorted set&lt;/code&gt; keys - basically the queue names&lt;/li&gt;
&lt;li&gt;Run a &lt;code&gt;zrangebyscore&lt;/code&gt; from 0 to the current timestamp, BUT with a &lt;code&gt;limit&lt;/code&gt; of 5K jobs.&lt;/li&gt;
&lt;li&gt;Publish each job and remove it from the &lt;code&gt;sorted set&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
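&lt;p&gt;The loop above, sketched with node-redis v4-style calls (&lt;code&gt;sMembers&lt;/code&gt;, &lt;code&gt;zRangeByScore&lt;/code&gt;, &lt;code&gt;zRem&lt;/code&gt;) - the client and publisher are injected, and the key names are invented for the sketch:&lt;/p&gt;

```javascript
// Old design, sketched: every 5 seconds, pull up to 5K due jobs into
// memory, publish them, then delete them. `redis` and `publish` are
// injected so this stays an illustrative sketch, not our real code.
async function drainDelayedJobs(redis, publish, now = Date.now()) {
  // 1. The set of sorted-set keys - basically the queue names.
  const queueNames = await redis.sMembers('delayed:queues');
  for (const queue of queueNames) {
    // 2. Fetch everything due, capped at 5K - the cap that throttled
    // throughput, while the fetch itself bloated memory.
    const dueJobs = await redis.zRangeByScore(`delayed:${queue}`, 0, now, {
      LIMIT: { offset: 0, count: 5000 },
    });
    // 3. Publish each job and remove it from the sorted set.
    for (const rawJob of dueJobs) {
      await publish(queue, JSON.parse(rawJob));
      await redis.zRem(`delayed:${queue}`, rawJob);
    }
  }
}
```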

&lt;h2&gt;
  
  
  The Issues
&lt;/h2&gt;

&lt;p&gt;This solution actually scaled up until 1-2 years ago, when we started having issues with it - the main ones being:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It could not catch up to a huge backlog of delayed messages&lt;/li&gt;
&lt;li&gt;It would eventually OOM or SPIKE up to 40GB of memory

&lt;ol&gt;
&lt;li&gt;Everything was fetched into memory, AND some instability - or even our own internal logic - could shovel too much data into Redis; the service just died 💀&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;li&gt;We could not scale horizontally - since objects were fetched into memory before being deleted.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The solution
&lt;/h2&gt;

&lt;p&gt;The solution was very simple: we implemented something I like to call the &lt;code&gt;streaming approach&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Using the same data structure, we are now:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Running a &lt;code&gt;zcount&lt;/code&gt; from 0 to current timestamp

&lt;ul&gt;
&lt;li&gt;Counting the amount of Jobs -&amp;gt; returning N&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Creating an &lt;code&gt;Async Iterator&lt;/code&gt; that runs N times - using the &lt;code&gt;zpopmin&lt;/code&gt; command from Redis

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;zpopmin&lt;/code&gt; basically returns AND removes the member with the lowest score - ie the earliest timestamp&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
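&lt;p&gt;The two steps above can be sketched as an async generator - method names follow node-redis v4 (&lt;code&gt;zCount&lt;/code&gt;, &lt;code&gt;zPopMin&lt;/code&gt;), and the client is injected so this stays illustrative:&lt;/p&gt;

```javascript
// Streaming sketch: count the due jobs, then pop them one at a time
// with zPopMin so at most one job sits in memory.
async function* dueJobs(redis, key, now = Date.now()) {
  const total = await redis.zCount(key, 0, now); // 1. how many are due
  for (let i = 0; i < total; i++) {
    // 2. zPopMin returns AND removes the member with the lowest
    // score - ie the job with the earliest scheduled timestamp.
    const entry = await redis.zPopMin(key);
    if (!entry) break; // set drained early - nothing left to do
    yield JSON.parse(entry.value);
  }
}

async function drainStreaming(redis, key, publish, now = Date.now()) {
  for await (const job of dueJobs(redis, key, now)) {
    await publish(job);
  }
}
```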

&lt;p&gt;The &lt;code&gt;processor&lt;/code&gt; for the SortedSet&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5ntqt6ztmx2o4l68vzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5ntqt6ztmx2o4l68vzz.png" alt="Processor" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Async Iterator&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8v24but6t63t1qhyvrs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8v24but6t63t1qhyvrs.png" alt="Async Iterator" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's &lt;em&gt;all&lt;/em&gt;!&lt;/p&gt;

&lt;p&gt;This simple algorithm change eliminated the need for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Big in-memory fetches - which bloated our memory allocation&lt;/li&gt;
&lt;li&gt;The 5K fetch limit - which lowered our throughput&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;I think the screenshots can speak for themselves but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We processed the entire backlog of 40GB of pending jobs pretty quickly&lt;/li&gt;
&lt;li&gt;From a constant usage of ~8GB - we dropped down to ~200MB&lt;/li&gt;
&lt;li&gt;We are now - still playing it safe and oversizing - safely allocating 1/4 of the resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Money-wise: we are talking about at least 1K USD/month - AND more in the future if we can downsize our Redis cache instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wsttrpkjcwhuyzbpt03.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0wsttrpkjcwhuyzbpt03.png" alt="Memory Usage Service" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbyqngwltyd7ekmz1z0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbyqngwltyd7ekmz1z0n.png" alt="Memory Usage Redis" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62khw4or8k8dzvubbivk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62khw4or8k8dzvubbivk.png" alt="Mem Alloc lowering PR" width="268" height="215"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Note
&lt;/h3&gt;

&lt;p&gt;We currently have more enhancements on the roadmap - such as triggering the job delaying via RPC, using different storages for different postpone durations (1 millisecond, 1 second, 1 day, 1 week++) and making it more reliable overall.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>NodeJS - Lazy Init Patterns</title>
      <dc:creator>Lucas Weis Polesello</dc:creator>
      <pubDate>Sun, 24 Dec 2023 21:22:51 +0000</pubDate>
      <link>https://dev.to/lukas8219/nodejs-lazy-init-patterns-4hap</link>
      <guid>https://dev.to/lukas8219/nodejs-lazy-init-patterns-4hap</guid>
      <description>&lt;p&gt;One of the main challenges when dealing w/ the async nature of NodeJS is initializing classes/clients that requires some sort of side effect - such as database connection, disk reads or whatsoever. Even the simple idea of waiting for the first use-case to connect/initialize a resource.&lt;/p&gt;

&lt;p&gt;Besides Dependency Injection - I like to use two approaches for this:&lt;/p&gt;

&lt;p&gt;1) Leaving it up to the client to call &lt;code&gt;connect&lt;/code&gt; or any other synonym - as easy as creating an &lt;code&gt;async function&lt;/code&gt;, as in the example below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;redis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;//PROS: Damn easy, simple and straight-forward&lt;/span&gt;

&lt;span class="c1"&gt;//CONS: This leaves the entire responsibility to the client&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;DistributedDataStructure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;staffName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reviewId&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
        &lt;span class="c1"&gt;//Do some business here - idk,&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;accountName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sAdd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`v1:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;accountName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:pending-reviews`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reviewId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DistributedDataStructure&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;ds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Jerome&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;})()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2) Proxying the access&lt;/p&gt;

&lt;p&gt;In the real, wild world we know we have to deal w/ legacy code, legacy initialization methods and much more unexpected stuff - for this we have a second approach, which leverages the &lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy"&gt;Proxy API for JS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using Proxy, it would look roughly like this&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;redis&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;once&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;events&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;//PROS: No client responsibility - makes it easy for the client&lt;/span&gt;
&lt;span class="c1"&gt;//CONS: More complex and error prone&lt;/span&gt;
&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ProxiedDistributedDataStructure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;redis&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;connect&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Proxy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;property&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
                &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;property&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isReady&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(){&lt;/span&gt;
                    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;once&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ready&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;target&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;arguments&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;staffName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reviewId&lt;/span&gt;&lt;span class="p"&gt;){&lt;/span&gt;
        &lt;span class="c1"&gt;//Do some business here - idk - like below&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;accountName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;staffName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sAdd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`v1:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;accountName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:pending-reviews`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reviewId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ProxiedDistributedDataStructure&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Jerome&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;12&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hex&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main benefit of the second approach is that we can instantiate objects in &lt;code&gt;sync&lt;/code&gt; contexts and treat only the method calls as &lt;code&gt;async&lt;/code&gt;, instead of resorting to dirty gimmicks to call &lt;code&gt;connect&lt;/code&gt; and chain promises, or, even worse, callbackifying.&lt;/p&gt;
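&lt;p&gt;A minimal, self-contained sketch of that proxy-based lazy connection (with a stubbed &lt;code&gt;FakeAsyncClient&lt;/code&gt; standing in for a real Redis/AMQP client; names are illustrative, not the real library API):&lt;/p&gt;

```javascript
// The constructor stays sync; a Proxy defers every method call
// until the single shared connect() promise has resolved.
class FakeAsyncClient {
  // Stand-in for a real async client; connect resolves on the next tick.
  async connect() { this.ready = true; }
  async get(key) { return `value-of-${key}`; }
}

class LazyClient {
  constructor() {
    this.client = new FakeAsyncClient();
    this.connected = this.client.connect(); // kicked off once, never awaited here
    return new Proxy(this, {
      get(target, prop) {
        const value = target[prop];
        if (typeof value !== 'function') return value;
        // Every method call first awaits the shared connect promise.
        return async function (...args) {
          await target.connected;
          return value.apply(target, args);
        };
      },
    });
  }

  async get(key) {
    return this.client.get(key);
  }
}

// Usage: instantiation is sync; only the calls are async.
const client = new LazyClient();
client.get('staff:jerome').then((v) => console.log(v)); // logs "value-of-staff:jerome"
```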

&lt;p&gt;Dev.to Code Examples&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas8219/devto-examples/blob/master/lazy-init/lazy-init-1.js"&gt;First Approach&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas8219/devto-examples/blob/master/lazy-init/lazy-init-2.js"&gt;Second Approach&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: as far as I can tell, node-redis (v4 onwards) offers a &lt;code&gt;legacyMode&lt;/code&gt; option when creating the client (to keep the v3-style API) which preserves this lazy nature, buffering calls on the client until the connection is ready.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Brazilian "Rinha de Backend" challenge #2 - The Improvement</title>
      <dc:creator>Lucas Weis Polesello</dc:creator>
      <pubDate>Wed, 30 Aug 2023 12:27:22 +0000</pubDate>
      <link>https://dev.to/lukas8219/brazilian-rinha-de-backend-challenge-2-the-improvement-1a9h</link>
      <guid>https://dev.to/lukas8219/brazilian-rinha-de-backend-challenge-2-the-improvement-1a9h</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Part #2 of a small series, &lt;code&gt;Brazilian "Rinha de Backend" challenge&lt;/code&gt;. Click &lt;a href="https://dev.to/lukas8219/brazilian-rinha-de-backend-challenge-1-how-to-fail-13n4"&gt;here&lt;/a&gt; for part 1&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;15th of August, 7AM.&lt;/p&gt;

&lt;p&gt;I woke up, had a long breakfast, sat down at my office and started tackling the &lt;em&gt;ghost&lt;/em&gt; that had spent the night before haunting me. (In this case, my brain + overthinking)&lt;/p&gt;

&lt;p&gt;Some of his tips, though, really did help!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fixing the business rules in these two queries (&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/7e0d606bcc5550a07976d15dae8e71d85b011a1e#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R52"&gt;1&lt;/a&gt;, &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/7e0d606bcc5550a07976d15dae8e71d85b011a1e#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R97"&gt;2&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/f16c2fc9d4b5d8c44edd50dd578fb70688491aa8#diff-1f91f64bfa3fb0a1ece881887af0a2356b8c529a96fb91c1894745b2a1a009aaR12"&gt;Adding more replicas&lt;/a&gt; into NGINX and DockerCompose&lt;/li&gt;
&lt;li&gt;Fine tuning &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/f16c2fc9d4b5d8c44edd50dd578fb70688491aa8#diff-3fde9d1a396e140fefc7676e1bd237d67b6864552b6f45af1ebcc27bcd0bb6e9R14"&gt;resources&lt;/a&gt; on DockerCompose&lt;/li&gt;
&lt;li&gt;Fixing PINO &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/7e0d606bcc5550a07976d15dae8e71d85b011a1e#diff-ed4a110a094ac014f4bea8d6d93719d89d5e5a2d1974dc2f79e41faf30af470aR4"&gt;async logging&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Configuring NGINX &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/7e0d606bcc5550a07976d15dae8e71d85b011a1e#diff-1f91f64bfa3fb0a1ece881887af0a2356b8c529a96fb91c1894745b2a1a009aaR4"&gt;worker connections&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_oeL99Rn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7v4j81s36e2afo7ucyzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_oeL99Rn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7v4j81s36e2afo7ucyzu.png" alt="68 failure" width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was able to get some good improvements, reaching ~31% &lt;code&gt;success&lt;/code&gt;, which sounded way better than 10%. But man, something was really off. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;&lt;strong&gt;Why did my application perform so badly?&lt;/strong&gt;&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Well. I had forgotten to address yesterday's finding. The &lt;em&gt;database bottleneck&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;First thing I did was to copy &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/b374c30cc4d0f0c302378a21ead5debf1ec6fcbd#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R86"&gt;the database query&lt;/a&gt; and run it against at least 40K records. Surprisingly, it took &lt;em&gt;50 seconds&lt;/em&gt; even though I was &lt;strong&gt;certain&lt;/strong&gt; I had indexes set up.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But before I explain the database optimization, I need to show the structure I had before the improvement.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;pessoas&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="n"&gt;id&lt;/span&gt; &lt;span class="nb"&gt;SERIAL&lt;/span&gt; &lt;span class="k"&gt;PRIMARY&lt;/span&gt; &lt;span class="k"&gt;KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;apelido&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;UNIQUE&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;nome&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;nascimento&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="n"&gt;stack&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;)[]&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;term_search_index_apelido&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;pessoas&lt;/span&gt; &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;gin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;to_tsvector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'english'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;apelido&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;INDEX&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;term_search_index_nome&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;pessoas&lt;/span&gt; &lt;span class="k"&gt;USING&lt;/span&gt; &lt;span class="n"&gt;gin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;to_tsvector&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'english'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nome&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The best tool to analyze what steps the database took to run the query (and much more) is &lt;a href="https://www.postgresql.org/docs/current/sql-explain.html"&gt;EXPLAIN&lt;/a&gt;.&lt;br&gt;
And of course, I had &lt;em&gt;forgotten&lt;/em&gt; an index. But how would I index an ARRAY field for FTS? That was a bad design choice: arrays can't be text-indexed directly, and the performance is bad for this use case.&lt;/p&gt;
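&lt;p&gt;For reference, a sketch of how EXPLAIN exposes the problem (the table and predicate follow the schema above; the exact plan output depends on your data and Postgres version):&lt;/p&gt;

```sql
-- Show the plan, with actual timings, for the term search.
-- A Seq Scan over "pessoas" here means the GIN indexes are not being used.
EXPLAIN ANALYZE
SELECT id, apelido, nome, nascimento, stack
FROM pessoas
WHERE to_tsvector('english', nome) @@ plainto_tsquery('english', 'node');
```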

&lt;p&gt;Well, let's ship this &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/f1a3c2c8aac01bf30bac38f23cc9335f8752d5e8#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R71"&gt;responsibility to the client&lt;/a&gt;. The &lt;code&gt;stack&lt;/code&gt; field &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/f1a3c2c8aac01bf30bac38f23cc9335f8752d5e8#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R21"&gt;became a JSON field&lt;/a&gt; which the client serializes and deserializes. The JSON field is text-indexed and voilà, we now have an index for the &lt;code&gt;stack&lt;/code&gt; field.&lt;/p&gt;

&lt;p&gt;Run the query again: less than 20ms. Cool, that's what I expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S4wLTJsO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trewp8ub58ijgleyk9le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S4wLTJsO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/trewp8ub58ijgleyk9le.png" alt="after indexes" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not surprisingly, I reached 45% &lt;code&gt;success&lt;/code&gt;, with 27% of &lt;code&gt;&amp;gt;1200ms&lt;/code&gt; requests. My database's CPU usage dropped, and errors like premature connection closes, timeouts and connection losses only appeared toward the middle-to-end of the stress test.&lt;br&gt;
Database-wise, I thought that &lt;em&gt;was as far as I could get&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lEcnSkci--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hbpf8mgiqulnavoui48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lEcnSkci--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hbpf8mgiqulnavoui48.png" alt="mid to end failover" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  It's time for cache.
&lt;/h2&gt;

&lt;p&gt;I decided to use Redis instead of in-memory LRU caches since the application was distributed: I was aiming for at least 4-5 replicas, which meant the same resource would rarely be requested twice on the same pod/container.&lt;/p&gt;

&lt;h3&gt;
  
  
  The caching strategy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;/pessoas?t

&lt;ul&gt;
&lt;li&gt;Cached the response.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;POST: /pessoas

&lt;ul&gt;
&lt;li&gt;Cached the entire resource by ID after creation&lt;/li&gt;
&lt;li&gt;Cached the &lt;code&gt;apelido&lt;/code&gt; field since it had a unique constraint on the database.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;GET: /pessoas/:id

&lt;ul&gt;
&lt;li&gt;Checked cache before hitting database.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;On the validation middleware I had setup(&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/f1a3c2c8aac01bf30bac38f23cc9335f8752d5e8#diff-e7145ea3e4f5db1f4bc1c5be397238ee0c522c81afa6a7b8228261f54aa4e704R79"&gt;middleware.js&lt;/a&gt;)

&lt;ul&gt;
&lt;li&gt;Checked whether the person already existed in a Redis SET, and if not, checked the database. If it did exist, we updated the cache and returned the response to the client&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is a rather simple Redis caching setup.&lt;br&gt;
&lt;em&gt;(Not something I would ever ship to a production environment, at least)&lt;/em&gt;&lt;/p&gt;
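&lt;p&gt;The &lt;code&gt;GET /pessoas/:id&lt;/code&gt; flow above is classic cache-aside. A minimal sketch, with in-memory &lt;code&gt;Map&lt;/code&gt;s standing in for Redis and Postgres (the data and helper names are illustrative):&lt;/p&gt;

```javascript
// Cache-aside: check the cache first, fall back to the database,
// then populate the cache so the next hit skips the database entirely.
const cache = new Map(); // stand-in for Redis
const db = new Map([['1', { id: '1', apelido: 'jdoe', nome: 'John Doe' }]]); // stand-in for Postgres

async function getPessoaById(id) {
  const cached = cache.get(id);
  if (cached) return { source: 'cache', pessoa: cached };

  const pessoa = db.get(id); // would be a SELECT in the real app
  if (!pessoa) return { source: 'db', pessoa: null };

  cache.set(id, pessoa); // populate on miss (the real setup also cached on POST)
  return { source: 'db', pessoa };
}

// First call misses the cache and hits the "database";
// every call after that is served straight from the cache.
getPessoaById('1').then((r) => console.log(r.source)); // db
```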

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WPSPvuro--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nk82we9dgcqc3c5jv3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WPSPvuro--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nk82we9dgcqc3c5jv3t.png" alt="caching results" width="800" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, an acceptable &lt;code&gt;success&lt;/code&gt; rate! 92% &lt;code&gt;success&lt;/code&gt;, with 4% of requests above 1200ms.&lt;/p&gt;

&lt;p&gt;But still... the database CPU usage was &lt;em&gt;&lt;strong&gt;too&lt;/strong&gt;&lt;/em&gt; high even with caching, and the connection closes were still a thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Back Into DB Optimizations
&lt;/h2&gt;

&lt;p&gt;I figured I had a way over the top &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/05302873d38cce4f7417bc56de1a2bf9617a5462#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R8"&gt;PG connection pool configured&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And it was kinda reasonable, given that we were running 3-4 replicas, each one with 8 connections, against a database that had less than 1 CPU allocated.&lt;/p&gt;

&lt;p&gt;(I also reduced the application to &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/05302873d38cce4f7417bc56de1a2bf9617a5462#diff-1f91f64bfa3fb0a1ece881887af0a2356b8c529a96fb91c1894745b2a1a009aaR12"&gt;3 replicas&lt;/a&gt;)&lt;/p&gt;
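&lt;p&gt;The pool-sizing reasoning boils down to simple arithmetic. A sketch (the numbers are illustrative, based on the replica and connection counts mentioned in this post; the exact tuned values are in the linked commit):&lt;/p&gt;

```javascript
// Total connections opened against Postgres = replicas * pool max per replica.
// With less than 1 CPU allocated to the database, dozens of idle-ish
// connections mostly add contention instead of throughput.
function totalConnections(replicas, poolMaxPerReplica) {
  return replicas * poolMaxPerReplica;
}

console.log(totalConnections(4, 8)); // 32 connections before the tuning
console.log(totalConnections(3, 4)); // 12 connections with fewer replicas and a smaller pool
```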

&lt;p&gt;With this &lt;em&gt;small&lt;/em&gt; improvement I was able to reach a 100% &lt;code&gt;success&lt;/code&gt; rate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--N-CQ-b4d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1hqx7wtu1ow7odtry3ne.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--N-CQ-b4d--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1hqx7wtu1ow7odtry3ne.png" alt="pg optimization_1" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sKpDci51--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ry74gtimzefsinxdnz1h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sKpDci51--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ry74gtimzefsinxdnz1h.png" alt="pg optimization_2" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I had the same benchmark results as my fellow colleagues, but more than that, I realized how seriously &lt;code&gt;attention to detail&lt;/code&gt; needs to be taken when dealing with scale. This was a sandbox experiment, and I still had a lot of small issues: small &lt;code&gt;legacy-type&lt;/code&gt; code (1-day legacy, I'll call it), small premature optimizations, and some over-the-top gimmicks trying to figure out simple stuff.&lt;/p&gt;

&lt;p&gt;I had accomplished what I had set out to do. Although I knew there were many more possible optimizations, I decided to step away from the competition.&lt;/p&gt;

&lt;p&gt;But this is for the next talk...&lt;/p&gt;

</description>
      <category>rinhadebackend</category>
      <category>javascript</category>
      <category>node</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Brazilian "Rinha de Backend" challenge #1 - How To Fail</title>
      <dc:creator>Lucas Weis Polesello</dc:creator>
      <pubDate>Mon, 28 Aug 2023 13:41:04 +0000</pubDate>
      <link>https://dev.to/lukas8219/brazilian-rinha-de-backend-challenge-1-how-to-fail-13n4</link>
      <guid>https://dev.to/lukas8219/brazilian-rinha-de-backend-challenge-1-how-to-fail-13n4</guid>
      <description>&lt;p&gt;Father's Day at Brazil I received a message from a friend of mine talking about a tech challenge. &lt;a href="https://github.com/zanfranceschi/rinha-de-backend-2023-q3"&gt;A quite simple one: Write a API, that has at least 2 replicas, a load balancer and a database, so someone can try to tear it down with a Stress Test.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I thought to myself: &lt;em&gt;that&lt;/em&gt; is the next level for a &lt;strong&gt;CRUD developer&lt;/strong&gt;, so let's try it out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/b374c30cc4d0f0c302378a21ead5debf1ec6fcbd"&gt;The initial idea&lt;/a&gt;, which the &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commits/main"&gt;commit history&lt;/a&gt; makes clear I mistakenly called "finished", was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NodeJS Express&lt;/li&gt;
&lt;li&gt;Postgres

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/b374c30cc4d0f0c302378a21ead5debf1ec6fcbd#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R24"&gt;Indexes for FTS (&lt;code&gt;gin(to_tsvector(lan, field));&lt;/code&gt;)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/b374c30cc4d0f0c302378a21ead5debf1ec6fcbd#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R20"&gt;Stack being an ARRAY field&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Batching

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/b374c30cc4d0f0c302378a21ead5debf1ec6fcbd#diff-c79e7bf3fe4949f3acfa89d636f82f6e794a2348c0c25763fd5fcb05a3380bb9R113"&gt;All read requests to database&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I knew this configuration was far from aiming for any TOP 20, but I was &lt;em&gt;confident&lt;/em&gt; it wouldn't fail that badly.&lt;/p&gt;

&lt;p&gt;After creating my first NGINX configuration and Docker Compose YAML, I boldly applied to the challenge.&lt;/p&gt;

&lt;p&gt;Sitting at the peak of my confidence and not so worried about it, I noticed the competitors publishing screenshots of their own benchmarks. I realized how much I had underestimated the challenge.&lt;/p&gt;

&lt;p&gt;With Gatling installed locally, let's give it a shot. To be fair... how badly can my application perform, right?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FaGOs5R_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unkbwsyag86k3kakwzbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FaGOs5R_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/unkbwsyag86k3kakwzbe.png" alt="Fantastical failure" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's not talk about how disappointed I got, but rather how worthwhile it was to try the tool locally.&lt;/p&gt;

&lt;h3&gt;
  
  
  So, what was happening?
&lt;/h3&gt;

&lt;p&gt;Besides a bunch of &lt;a href="https://stackoverflow.com/questions/36488688/nginx-upstream-prematurely-closed-connection-while-reading-response-header-from"&gt;prematurely closed connections&lt;/a&gt;, no-healthy-upstream errors and 60000ms timeouts, there was no way my application was bottlenecking that much...&lt;/p&gt;

&lt;p&gt;I tried a lot of crazy stuff:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clustering - via &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/blob/7e0d606bcc5550a07976d15dae8e71d85b011a1e/Dockerfile#L8"&gt;PM2&lt;/a&gt; or &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/blob/7e0d606bcc5550a07976d15dae8e71d85b011a1e/index.js#L63"&gt;native cluster module&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Closing &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/blob/7e0d606bcc5550a07976d15dae8e71d85b011a1e/index.js#L80"&gt;slow requests&lt;/a&gt; early&lt;/li&gt;
&lt;li&gt;Adjusted some &lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/blob/7e0d606bcc5550a07976d15dae8e71d85b011a1e/database.js#L97"&gt;business rules&lt;/a&gt; which were off&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/lukas8219/rinha-be-2023-q3/commit/7e0d606bcc5550a07976d15dae8e71d85b011a1e#diff-1f91f64bfa3fb0a1ece881887af0a2356b8c529a96fb91c1894745b2a1a009aaR4"&gt;Reconfigured NGINX&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Increasing the libuv thread pool: &lt;code&gt;process.env.UV_THREADPOOL_SIZE = os.cpus().length&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I was very confident I could get to at least 20% success... Come on...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B5v2mEYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scb52d356ymhvofjrdbn.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B5v2mEYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/scb52d356ymhvofjrdbn.jpeg" alt="api errors after running gatling" width="800" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had to do &lt;em&gt;something&lt;/em&gt;. I was totally blind and didn't know where to start.&lt;/p&gt;

&lt;p&gt;After running &lt;code&gt;watch "docker stats"&lt;/code&gt; I was finally able to get a grasp of it. My database was being &lt;strong&gt;hammered&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lNTh7IlZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xp2jlryoabectuqvhpsz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lNTh7IlZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xp2jlryoabectuqvhpsz.png" alt="database bottlenecking locally" width="800" height="122"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time to take it seriously. More than competing with my fellow colleagues, &lt;em&gt;I wanted to compete with myself&lt;/em&gt;. &lt;em&gt;I knew I could do &lt;strong&gt;better&lt;/strong&gt;&lt;/em&gt;. It had been a long time since I was so excited about something I knew nothing about.&lt;/p&gt;

&lt;p&gt;In that late Sunday I already &lt;em&gt;knew&lt;/em&gt; Monday would be &lt;em&gt;one of those hyperfocus ADHD days&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/lukas8219/brazilian-rinha-de-backend-challenge-2-the-improvement-1a9h"&gt;Next chapter&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
