<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gabriel Urbina</title>
    <description>The latest articles on DEV Community by Gabriel Urbina (@gabrielrurbina).</description>
    <link>https://dev.to/gabrielrurbina</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F250530%2F59e2c1bd-3f2b-4524-8cdf-24652a09c4b6.jpeg</url>
      <title>DEV Community: Gabriel Urbina</title>
      <link>https://dev.to/gabrielrurbina</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gabrielrurbina"/>
    <language>en</language>
    <item>
      <title>The hidden cost of temporarily allocated objects in JavaScript</title>
      <dc:creator>Gabriel Urbina</dc:creator>
      <pubDate>Thu, 16 Nov 2023 20:15:22 +0000</pubDate>
      <link>https://dev.to/gabrielrurbina/the-hidden-cost-of-temporally-allocated-objects-in-javascript-31ok</link>
      <guid>https://dev.to/gabrielrurbina/the-hidden-cost-of-temporally-allocated-objects-in-javascript-31ok</guid>
      <description>&lt;p&gt;If JavaScript is single-threaded then running this code with one core or with several should make no difference, or should it?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const syntheticOp = ({ a, b }) =&amp;gt; {
    let res = [];
    for (let i = 0; i &amp;lt; 200000; i++) {
        res[i] = { val: a * b * BigInt(i) };
    }
    return res;
};

function main(){
    for (let i = 0; i &amp;lt; 100; i++) {
        syntheticOp({
            a: BigInt(Math.floor(Math.random() * 1000)),
            b: BigInt(Math.floor(Math.random() * 1000)),
        });
    }
}
main();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's run it with one core &lt;code&gt;taskset -c 15 node --inspect-brk synthetic.js&lt;/code&gt; and profile it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L_LecOuz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159281415/7052591b-0253-41d5-8876-d6e2403ecdd6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L_LecOuz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159281415/7052591b-0253-41d5-8876-d6e2403ecdd6.png" alt="" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The whole run took over 7 seconds, and most of that time (60%) was spent in garbage-collection pauses. Now let's run it on 4 cores with &lt;code&gt;taskset -c 12,13,14,15 node --inspect-brk synthetic.js&lt;/code&gt; and see what the profiler tells us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gYacub9i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159368130/782aa34d-bf1b-4ed4-aef6-06571b8e69cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gYacub9i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159368130/782aa34d-bf1b-4ed4-aef6-06571b8e69cd.png" alt="" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hmm, fascinating. Why is the 4-core run nearly 2x faster than the single-core run? And why does the speedup show up mostly in the garbage collector, which no longer takes most of the execution time? Well, we do know why: &lt;strong&gt;JavaScript is single-threaded, but its garbage collector is not&lt;/strong&gt; (&lt;a href="https://v8.dev/blog/trash-talk"&gt;see V8's Orinoco&lt;/a&gt;), so the performance gain rides on the back of the garbage collector. This is great, isn't it? It is, as long as the developer can count on deploying the application in a multi-core environment, and we rarely can.&lt;/p&gt;

&lt;p&gt;The more objects we allocate, the longer the GC takes to collect them, slowing down our application. Object allocation effectively becomes a processing-power loan that we pay back during GC pauses. That is somewhat of an oversimplification, but good enough to understand the price we pay for memory allocation.&lt;/p&gt;
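&lt;p&gt;One way to shrink that loan is to not take it out in the first place. In the synthetic loop above, nothing forces us to wrap each value in an object; here is a sketch of the same computation with the wrappers dropped (&lt;code&gt;syntheticOpLean&lt;/code&gt; is my name for it, not from the benchmark):&lt;/p&gt;

```javascript
// Same arithmetic as syntheticOp, but each slot holds the BigInt itself
// instead of a { val } wrapper, so the 200,000 temporary objects per call
// are never allocated at all.
const syntheticOpLean = ({ a, b }) => {
  const res = new Array(200000);
  for (let i = 0; i !== 200000; i++) {
    res[i] = a * b * BigInt(i); // was: { val: a * b * BigInt(i) }
  }
  return res;
};

const out = syntheticOpLean({ a: 2n, b: 3n });
console.log(out[10]); // prints 60n
```

&lt;p&gt;Callers read &lt;code&gt;res[i]&lt;/code&gt; instead of &lt;code&gt;res[i].val&lt;/code&gt;; everything else is unchanged, and the GC has far less short-lived garbage to chase.&lt;/p&gt;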

&lt;p&gt;You might be interested in a concrete case, not just a synthetic test, so let's take a look at my library &lt;a href="https://github.com/gabrielricardourbina/type-guard"&gt;type-guard&lt;/a&gt; at version &lt;em&gt;0.2.2&lt;/em&gt;, benchmark it with large objects, and inspect the memory and execution profiles.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {
    ObjectOf, isString, isNumber, ArrayOf
} from "@gabrielurbina/type-guard";

const persons = Array.from({ length: 100000 }, () =&amp;gt; ({
    firstName: Math.random().toString(36).slice(2),
    lastName: Math.random().toString(36).slice(2),
    age: Math.floor(Math.random() * 100),
}));

const isPersons = ArrayOf([
    ObjectOf({
        firstName: isString,
        lastName: isString,
        age: isNumber,
    }),
]);

function main() {
    for (let i = 0; i &amp;lt; 100; i++) {
        isPersons(persons);
    }
}

main();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_8fpyYYK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159567560/0541ca3c-d73e-43e9-9fad-77a2cbf85d69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_8fpyYYK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159567560/0541ca3c-d73e-43e9-9fad-77a2cbf85d69.png" alt="" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1.4% of the time spent in GC is great, isn't it? Well, it could be better. Let's have a look at how much memory we are creating here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D5gVxihb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159745977/62d5fc97-f1ac-403e-b843-6c55dec2579a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D5gVxihb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159745977/62d5fc97-f1ac-403e-b843-6c55dec2579a.png" alt="" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wow, we are creating a few million objects during the execution of this benchmark. That can definitely be improved, and I already did in &lt;a href="https://github.com/gabrielricardourbina/type-guard/commit/3de9bfe226a60ddf8090f0da2c9b64190459c979"&gt;version 0.2.3&lt;/a&gt;, where I dropped all temporarily allocated objects. If we inspect the same metrics for that version, you might be surprised.&lt;/p&gt;
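&lt;p&gt;I won't reproduce the actual commit here, but the flavor of the change is easy to sketch. The names below are hypothetical, not type-guard's real internals: the "before" guard allocates entry arrays and a destructured pair on every call, while the "after" guard hoists that work to construction time and allocates nothing per call.&lt;/p&gt;

```javascript
// Hypothetical sketch (not type-guard's actual code) of dropping
// temporary allocations from a validator's hot path.

// Before: every call to the guard allocates [key, guard] entry arrays.
const ObjectOfNaive = (guards) => (value) =>
  Object.entries(guards).every(([key, guard]) => guard(value[key]));

// After: compute the key list once when the guard is built; the returned
// function allocates nothing per call.
const ObjectOfLean = (guards) => {
  const keys = Object.keys(guards);
  return (value) => {
    for (const key of keys) {
      if (!guards[key](value[key])) return false;
    }
    return true;
  };
};

const isString = (v) => typeof v === "string";
const isNumber = (v) => typeof v === "number";
const isPerson = ObjectOfLean({ firstName: isString, age: isNumber });
console.log(isPerson({ firstName: "Ada", age: 36 })); // prints true
```

&lt;p&gt;Both versions give the same answers; only the allocation behavior differs, which is exactly the kind of change the profiles below reflect.&lt;/p&gt;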

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nJbyPDGb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159651258/8558d185-c68e-4959-a88c-5d4e2f1fd3d1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nJbyPDGb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159651258/8558d185-c68e-4959-a88c-5d4e2f1fd3d1.png" alt="" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;During the execution of the benchmark there was not a single GC pause, which is phenomenal, and this version is also close to 2x faster. But how about the memory usage?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--taPQ6_wG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159692530/88762296-ae56-4f9b-ab84-dfd986e55b46.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--taPQ6_wG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1700159692530/88762296-ae56-4f9b-ab84-dfd986e55b46.png" alt="" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are talking. I managed to reduce the memory usage from a few million objects to around a hundred, and to keep it constant over time: after initialization, memory usage does not increase, &lt;em&gt;making this library perfectly suitable for single-core applications and for indefinitely long-running processes&lt;/em&gt;.&lt;/p&gt;
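&lt;p&gt;If you want to check a "constant memory" claim like this yourself, a rough heap probe is enough. The helper below is my own sketch, not part of the library; run Node with &lt;code&gt;--expose-gc&lt;/code&gt; so &lt;code&gt;global.gc&lt;/code&gt; exists (without the flag it still runs, just less precisely):&lt;/p&gt;

```javascript
// Measure how much the heap grows across repeated runs of fn.
// A guard with no temporary allocations should report a delta near zero.
function heapDeltaMB(fn, runs) {
  if (global.gc) global.gc(); // settle the heap before the first snapshot
  const before = process.memoryUsage().heapUsed;
  for (let i = 0; i !== runs; i++) fn();
  if (global.gc) global.gc(); // collect whatever garbage the runs produced
  const after = process.memoryUsage().heapUsed;
  return (after - before) / (1024 * 1024);
}

// Example: node --expose-gc probe.js
console.log(heapDeltaMB(() => JSON.stringify({ a: 1, b: 2 }), 1000));
```

&lt;p&gt;Wrapping the &lt;code&gt;isPersons(persons)&lt;/code&gt; loop from the benchmark in this helper is a quick way to compare 0.2.2 and 0.2.3 without opening the DevTools profiler.&lt;/p&gt;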

&lt;p&gt;But why does any of this matter? These are just a few extra milliseconds of runtime and a few extra megabytes of memory. It matters because I want to ship non-pessimized code, and because the JavaScript ecosystem's performance is as poor as it is not through the fault of any single library, but because every ms and MB left unoptimized adds up across our already sluggish ecosystem.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
