<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matt Krick</title>
    <description>The latest articles on DEV Community by Matt Krick (@mattkrick).</description>
    <link>https://dev.to/mattkrick</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F147648%2Fbef23677-3568-420c-a0f2-7335841e5784.jpg</url>
      <title>DEV Community: Matt Krick</title>
      <link>https://dev.to/mattkrick</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mattkrick"/>
    <language>en</language>
    <item>
      <title>Replacing Express with uWebSockets</title>
      <dc:creator>Matt Krick</dc:creator>
      <pubDate>Mon, 06 Jan 2020 15:48:28 +0000</pubDate>
      <link>https://dev.to/mattkrick/replacing-express-with-uwebsockets-48ph</link>
      <guid>https://dev.to/mattkrick/replacing-express-with-uwebsockets-48ph</guid>
      <description>&lt;p&gt;One of the best parts of running an enterprise SaaS is that our traffic takes a nosedive at the end of the year while clients universally take vacation. The low traffic is a great excuse for larger refactors and with our crazy growth this year, we've been considering scaling our server horizontally. Before we do, I figured it'd be smart to squeak out as much performance as possible. So, after 4 years, we ditched Express for something faster: &lt;a href="https://github.com/uNetworking/uWebSockets.js"&gt;uWebSockets&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;uWebSockets is lightning fast because it is so low level. Saying it is like Express without the training wheels is an understatement. It's more like taking off the training wheels, reflectors, mud guards, hand grips, and seat, then stripping the paint because, well, speed. While I appreciate the speed and low memory footprint, I also don't want to run the risk of my sloppy code crashing the server, so my goal is to make a couple of reasonable performance sacrifices to make it as safe as Express. In other words, I'll take the bike-- just give me a darn helmet.&lt;/p&gt;

&lt;p&gt;Practically, that means I don't want to worry about a call to Redis somehow failing, triggering an unhandled promise rejection, hanging the response, and in turn hanging the server. To save myself from myself, I came up with a few reasonable patterns that avoid both rewriting my sloppy code and a crash. Hopefully, you find them useful, too.&lt;/p&gt;

&lt;h2&gt;Response Handling&lt;/h2&gt;

&lt;p&gt;At all costs, we must close the &lt;code&gt;HttpResponse&lt;/code&gt; or it will hang and bring the server to its knees. There are 2 ways the response can close: calling a terminating method (&lt;code&gt;end&lt;/code&gt;, &lt;code&gt;tryEnd&lt;/code&gt; or &lt;code&gt;close&lt;/code&gt;) or being hung up on by the client (&lt;code&gt;onAborted&lt;/code&gt; fires). Unfortunately, once the response has been closed, &lt;em&gt;you cannot attempt to close it again&lt;/em&gt;. That restriction creates a race condition. Imagine the scenario where a request comes in to read a record from the DB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In a perfect world, the doc from the DB returns and gets sent as the response. In the real world, the client disconnects just after the call to the DB is made. When that happens, the socket is closed, &lt;code&gt;onAborted&lt;/code&gt; fires, and by the time &lt;code&gt;res.end&lt;/code&gt; is called, the response has already been invalidated, which produces an error.&lt;/p&gt;

&lt;p&gt;To tackle this problem, I need to guarantee 3 things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A terminating method is not called after &lt;code&gt;onAborted&lt;/code&gt; fires&lt;/li&gt;
&lt;li&gt;A terminating method is not called after a terminating method was already called&lt;/li&gt;
&lt;li&gt;There is only 1 &lt;code&gt;onAborted&lt;/code&gt; handler for each response&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To accomplish #1 &amp;amp; #2 without rewriting all my route handlers, I decided to &lt;a href="https://github.com/ParabolInc/action/blob/staging/packages/server/safetyPatchRes.ts"&gt;monkeypatch the response&lt;/a&gt; with some safety checks. For example, I put a &lt;code&gt;done&lt;/code&gt; one-way flag on the response and if a terminating method is called after the response is already &lt;code&gt;done&lt;/code&gt;, it is ignored:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`uWS DEBUG: Called end after done`&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;_end&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
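&lt;p&gt;To see the flag in action without a live server, here's a minimal sketch against a stubbed response object. The &lt;code&gt;safetyPatchEnd&lt;/code&gt; name and the stub are illustrative, not the production patch:&lt;/p&gt;

```javascript
// Illustrative sketch of the one-way `done` flag. `safetyPatchEnd` is a
// hypothetical name; the real logic lives in safetyPatchRes.ts.
const safetyPatchEnd = (res) => {
  res._end = res.end
  res.end = (body) => {
    if (res.done) {
      console.log(`uWS DEBUG: Called end after done`)
      return res
    }
    res.done = true
    return res._end(body)
  }
}

// Stub standing in for a uWebSockets HttpResponse
const res = {
  endCalls: 0,
  end(body) {
    this.endCalls++
    return this
  }
}

safetyPatchEnd(res)
res.end('first')  // terminates the response; flips the done flag
res.end('second') // ignored: the flag is one-way, so _end never runs again
```

Calling `end` twice logs the debug line but only terminates once, which is exactly the guarantee #2 asks for.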



&lt;p&gt;Guaranteeing a single &lt;code&gt;onAborted&lt;/code&gt; handler was necessary because in some cases the thing I was trying to clean up (e.g. a &lt;code&gt;ReadStream&lt;/code&gt; or &lt;code&gt;setInterval&lt;/code&gt; id) was created after the &lt;code&gt;onAborted&lt;/code&gt; handler had already been registered. To keep my code modular, I again monkeypatched &lt;code&gt;onAborted&lt;/code&gt; to support multiple handlers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onAborted&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abortEvents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abortEvents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;f&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onAborted&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abortEvents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abortEvents&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;abortEvents&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
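&lt;p&gt;Exercised against a stub, the multiplexing looks like this (&lt;code&gt;patchOnAborted&lt;/code&gt; and the stub are illustrative names): two cleanup handlers are registered at different times, and simulating a client hang-up fires both.&lt;/p&gt;

```javascript
// Illustrative sketch: one native onAborted handler fans out to many.
const patchOnAborted = (res) => {
  res._onAborted = res.onAborted
  // Register the single native handler once, before replacing the method
  res._onAborted(() => {
    res.done = true
    if (res.abortEvents) {
      res.abortEvents.forEach((f) => f())
    }
  })
  // The patched method just queues handlers
  res.onAborted = (handler) => {
    res.abortEvents = res.abortEvents || []
    res.abortEvents.push(handler)
    return res
  }
}

// Stub response: remembers the native handler so we can simulate an abort
const res = {
  onAborted(cb) {
    this._nativeAbort = cb
    return this
  }
}

patchOnAborted(res)
const cleaned = []
res.onAborted(() => cleaned.push('stream'))   // e.g. destroy a ReadStream
res.onAborted(() => cleaned.push('interval')) // e.g. clearInterval
res._nativeAbort() // simulate the client hanging up: both cleanups run
```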



&lt;h2&gt;Async Handler Wrapping&lt;/h2&gt;

&lt;p&gt;With uWebSockets, async HTTP handlers also require extra care. Aside from having to &lt;code&gt;cork&lt;/code&gt; response methods to achieve maximum performance, errors can creep in from various sources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attempting to close an already closed response (as discussed above)&lt;/li&gt;
&lt;li&gt;An unplanned error (uncaught exception, unhandled promise rejection)&lt;/li&gt;
&lt;li&gt;Returning without closing the response&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since I'm not sure &lt;em&gt;where&lt;/em&gt; these errors may live, the safest bet is to apply the guards as early as possible, at the beginning of the handler. To keep my code DRY, I wrapped each async handler in a higher-order function that catches all 3 error types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uWSAsyncHandler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;uWSHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;HttpRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;monkeyPatchRes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Async handler did not respond&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;writeStatus&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;500&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nx"&gt;sendToReportingService&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It's a pretty simple function. First, it monkeypatches the res so we get free &lt;code&gt;done&lt;/code&gt; tracking (Type #1). Then, it tries to execute the handler. If the handler throws an error (Type #2), or it returns without closing the response (Type #3), it gracefully closes the connection and reports the error to our monitoring service. With very little computational overhead, I can keep on writing sloppy code and not worry about crashing the server. Success! 🎉&lt;/p&gt;
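&lt;p&gt;Here's that flow walked through with stubs (the stub response and the stubbed &lt;code&gt;sendToReportingService&lt;/code&gt; are illustrative, not the production code): a sloppy handler that returns without responding gets caught and answered with a 500.&lt;/p&gt;

```javascript
// Illustrative stubs: sendToReportingService and res stand in for the real
// monitoring service and uWS HttpResponse.
const reported = []
const sendToReportingService = (e) => reported.push(e.message)

const uWSAsyncHandler = (handler) => async (res, req) => {
  // the real wrapper calls monkeyPatchRes(res) here to install the done flag
  try {
    await handler(res, req)
    if (!res.done) {
      throw new Error('Async handler did not respond')
    }
  } catch (e) {
    res.writeStatus('500').end()
    sendToReportingService(e)
  }
}

const res = {
  done: false,
  status: null,
  writeStatus(code) {
    this.status = code
    return this
  },
  end() {
    this.done = true
    return this
  }
}

// A sloppy handler that forgets to respond (error Type #3)
const forgetful = async () => {}
const main = uWSAsyncHandler(forgetful)(res, {})
```

Once `main` settles, the response has been closed with a 500 and the error has been queued for reporting, so the connection never hangs.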

&lt;h2&gt;Body Parsing&lt;/h2&gt;

&lt;p&gt;The code example in the uWebSockets repo does a great job of showing how to parse an incoming body. Written as a promise, it can be quite elegant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parseBody&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="na"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;onData&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;isLast&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;curBuf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;from&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="nx"&gt;buffer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;buffer&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;curBuf&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; 
               &lt;span class="nx"&gt;isLast&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="nx"&gt;curBuf&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nx"&gt;curBuf&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
      &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isLast&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The funny buffer ternary is necessary because &lt;code&gt;onData&lt;/code&gt; reuses the same memory allocation for the following chunk. That means we'll need to clone the buffer by calling &lt;code&gt;concat&lt;/code&gt; or &lt;code&gt;toString&lt;/code&gt; before yielding. I like to return the stringified JSON instead of parsed JSON because sometimes I need the string itself (e.g. SAML response processing or verifying a Stripe webhook payload).&lt;/p&gt;
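&lt;p&gt;To see why the clone matters, here's a runnable sketch where a stubbed &lt;code&gt;onData&lt;/code&gt; delivers two chunks through the &lt;em&gt;same&lt;/em&gt; &lt;code&gt;ArrayBuffer&lt;/code&gt;, mimicking how uWebSockets reuses its allocation (the stub is illustrative):&lt;/p&gt;

```javascript
// parseBody as in the article; Buffer.from(ArrayBuffer) is a view, not a
// copy, so Buffer.concat is what actually clones the bytes.
const parseBody = (res) => {
  return new Promise((resolve) => {
    let buffer
    res.onData((chunk, isLast) => {
      const curBuf = Buffer.from(chunk) // shares memory with `chunk`!
      buffer = buffer
        ? Buffer.concat([buffer, curBuf])
        : isLast ? curBuf : Buffer.concat([curBuf]) // clone unless resolving now
      if (isLast) {
        resolve(buffer.toString())
      }
    })
  })
}

// Stub that delivers 'hello' then 'world' through the SAME ArrayBuffer,
// the way uWebSockets reuses its chunk allocation
const scratch = new ArrayBuffer(5)
const view = new Uint8Array(scratch)
const res = {
  onData(cb) {
    view.set(Buffer.from('hello'))
    cb(scratch, false)
    view.set(Buffer.from('world')) // overwrites the first chunk's memory
    cb(scratch, true)
  }
}

const body = parseBody(res) // resolves to 'helloworld'
```

Without the `Buffer.concat([curBuf])` clone, the first chunk would read back as `'world'` after the overwrite and the body would come out as `'worldworld'`.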

&lt;p&gt;It's worth noting that uWebSockets' &lt;code&gt;onData&lt;/code&gt; handler doesn't play well with breakpoints when using Node's built-in debugger: &lt;a href="https://github.com/uNetworking/uWebSockets.js/issues/191"&gt;Issue #191&lt;/a&gt;. To mitigate that issue, you can simply clone the chunk and resolve inside a &lt;code&gt;setImmediate&lt;/code&gt; call. Since that adds a nontrivial amount of overhead, I only do it when Node is in debugging mode (&lt;code&gt;process.execArgv.join().includes('inspect')&lt;/code&gt;).&lt;/p&gt;

&lt;h2&gt;Serve Static&lt;/h2&gt;

&lt;p&gt;Almost all of our assets are served from our CDN in production. However, there are a few exceptions: &lt;code&gt;index.html&lt;/code&gt;, &lt;code&gt;serviceWorker.js&lt;/code&gt;, and everything in development mode. So, I needed something like Express' &lt;code&gt;serve-static&lt;/code&gt; that did the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Serve whitelisted items from memory to reduce disk reads &lt;/li&gt;
&lt;li&gt;Serve those whitelisted items in a compressed format, if supported&lt;/li&gt;
&lt;li&gt;Support webpack-dev-middleware by serving webpack assets in development&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the first two were rather simple to implement (&lt;a href="https://github.com/ParabolInc/action/blob/staging/packages/server/utils/serveStatic.ts"&gt;actual code here&lt;/a&gt;), supporting webpack-dev-middleware is a bit more interesting. Since performance in development isn't an issue and I wasn't trying to rewrite webpack-dev-middleware from scratch, I decided to simply pass it something that looked like an Express handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;makeExpressHandlers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;HttpRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;setHeader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;unknown&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;writeHeader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nb"&gt;String&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getUrl&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getMethod&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;toUpperCase&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="nx"&gt;headers&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;next&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Since the middleware thinks it's getting a standard Express response, checking the result is as easy as checking the &lt;code&gt;res.statusCode&lt;/code&gt; as seen &lt;a href="https://github.com/ParabolInc/action/blob/staging/packages/server/serveFromWebpack.ts#L80"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;WebSocket Performance&lt;/h3&gt;

&lt;p&gt;The biggest benefit of moving to uWebSockets is, of course, the fast &amp;amp; memory-efficient WebSocket server. While most WebSocket messages are tiny, frequent ones like pongs and peer updates, some initial payloads could get rather large, up to 45KB in our case. Assuming an MTU of 1500 bytes, that's 30 packets! Since WebSockets are built on top of TCP, which guarantees that packets arrive in order, users with less-than-great connectivity could experience significant lag. Combating this was easy: reduce the number of packets via compression. Using uWebSockets' &lt;code&gt;SHARED_COMPRESSOR&lt;/code&gt; and monitoring packet size with Wireshark, I could reduce the 45KB payload down to 6KB with no additional memory overhead, but I was left wondering if I could do better. &lt;a href="https://github.com/mattkrick/json-deduper"&gt;Deduplicating JSON objects&lt;/a&gt; and using &lt;a href="https://github.com/msgpack/msgpack-javascript"&gt;msgpack&lt;/a&gt; each yielded savings of only an extra 100 bytes, which was hardly worth the extra computational overhead. So, I decided to look deeper.&lt;/p&gt;

&lt;p&gt;First, WebSocket extensions only support the DEFLATE compression algorithm, which yields results about 30% bigger than Brotli. Second, there's no way to selectively compress messages, which means CPU cycles were wasted compressing messages from the browser as well as single-packet messages from the server. So, I brought compression to the application layer. Since most browser messages to the server were very small, it made no sense to compress them, so the client only needed a decompressor. I wrapped a Brotli decompressor written in Rust into a WASM package. I chose WASM over JS because in my tests (using Chrome 79) it was over 10x faster at decompression. On the server, I only compressed messages larger than 1400 bytes (100 bytes under the MTU to account for headers) to guarantee compression would save at least one packet. The end result is best-in-class compression where you need it, and no compression where you don't. Best of both worlds! The only drawback is the size: the WASM decompressor compiles to about 300KB. To get around this, I compress it and persist it with a service worker so it doesn't affect returning users. This works for us because we only use WebSockets for logged-in users, but your business logic may differ, and the added complexity of custom compression may not be worth the marginal savings. The only way to know is to measure, so I'll be testing that over the coming months.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Overall, I'm thrilled with uWebSockets. Not simply for the performance boost, but because it forced me to refactor a server that grew a little unwieldy as we've grown from nothing to a seed-funded startup with over 16,000 users. If this stuff sounds like fun to you, get paid to work on it! We're a remote team, our codebase is open source, and if you're reading articles like this one, chances are we already like you. Reach out to me directly or apply at &lt;a href="https://www.parabol.co/join"&gt;https://www.parabol.co/join&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>node</category>
      <category>webdev</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Fixing Memory Leaks in Node Apps</title>
      <dc:creator>Matt Krick</dc:creator>
      <pubDate>Thu, 12 Dec 2019 16:12:48 +0000</pubDate>
      <link>https://dev.to/mattkrick/fixing-memory-leaks-in-node-apps-5eh4</link>
      <guid>https://dev.to/mattkrick/fixing-memory-leaks-in-node-apps-5eh4</guid>
      <description>&lt;p&gt;A few months back, our web server crashed. It only lasted a minute before restarting, but as the tech guy in a small startup, it was a pretty stressful minute. I never set up a service to restart when memory got low, but we did have some reporting tools connected, so after the crash, I dug into our logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5t4O8quy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fiety4wewaiaiq25pqa6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5t4O8quy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/fiety4wewaiaiq25pqa6.png" alt="Dog"&gt;&lt;/a&gt;&lt;br&gt;
Yep, that's a memory leak alright! But how could I track it down?&lt;/p&gt;
&lt;h2&gt;Just like LEGOs&lt;/h2&gt;

&lt;p&gt;When debugging, I like to think about memory like LEGOs. Every object created is a brick. Every object type, a different color. The heap is a living room floor and I (the Garbage Collector) clean up the bricks no one is playing with because if I don't, the floor would be a minefield of painful foot hazards. The trick is figuring out which ones aren't being used.&lt;/p&gt;
&lt;h2&gt;Debugging&lt;/h2&gt;

&lt;p&gt;When it comes to triaging memory leaks in Node, there are 2 strategies: snapshots and profiles. &lt;/p&gt;

&lt;p&gt;A snapshot (AKA heap dump) records everything on the heap at that moment. &lt;br&gt;
It's like taking a photograph of your living room floor, LEGOs and all. If you take 2 snapshots, then it's like a Highlights magazine: find the differences between the 2 pictures and you've found the bug. Easy!&lt;/p&gt;

&lt;p&gt;For this reason, snapshots are the gold standard when it comes to finding memory leaks. Unfortunately, taking a snapshot can last up to a minute. During that time, the server will be completely unresponsive, which means you'll want to do it when no one is visiting your site. Since we're an enterprise SaaS, that means Saturday at 3AM. If you don't have that luxury, you'll need to have your reverse proxy redirect to a backup server while you dump.&lt;/p&gt;
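&lt;p&gt;Because the dump blocks the whole process, it's worth gating it behind a guard. Here's a hypothetical sketch of that decision logic (&lt;code&gt;shouldSnapshot&lt;/code&gt; and &lt;code&gt;QUIET_HOURS&lt;/code&gt; are my names; the actual dump would be something like Node's &lt;code&gt;require('v8').writeHeapSnapshot()&lt;/code&gt;):&lt;/p&gt;

```javascript
// Hypothetical guard: only dump the heap during a quiet window AND when
// resident memory actually grew. The dump itself (e.g.
// require('v8').writeHeapSnapshot()) blocks the event loop, so the
// decision logic is what keeps users from noticing.
const QUIET_HOURS = [2, 3, 4] // e.g. around 3AM, when traffic bottoms out

const shouldSnapshot = ({ hour, rssMB, lastRssMB, growthMB = 50 }) => {
  if (!QUIET_HOURS.includes(hour)) return false
  return rssMB - lastRssMB >= growthMB
}

shouldSnapshot({ hour: 3, rssMB: 900, lastRssMB: 800 })  // true: quiet + grew 100MB
shouldSnapshot({ hour: 14, rssMB: 900, lastRssMB: 800 }) // false: peak hours
```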

&lt;p&gt;A sampling allocation profile is the lightweight alternative, taking less than a second. Just as the name implies, it takes a sample of all the objects getting allocated. While this produces a very easy-on-the-eyes flamechart akin to a CPU profile, it doesn't tell you what's being garbage collected. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wz3QIsx0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6y0b1s454e4w6pu1tw7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wz3QIsx0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/6y0b1s454e4w6pu1tw7s.png" alt="Profile"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's like looking at the LEGOs being played with, but not looking at which ones are being put down. If you see 100 red bricks and 5 blue bricks, there's a good chance the red bricks could be the culprit. Then again, it's equally likely all 100 red bricks are being garbage collected and it's just the 5 blues that are sticking around. In other words, you'll need both a profile and deep knowledge of your app to find the leak.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Implementation
&lt;/h2&gt;

&lt;p&gt;In my case, I did both. To set up the profiler, I ran it every hour and, if the resident memory had grown by more than 50MB past the last high-water mark, it wrote a profile to disk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;heapProfile&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;heap-profile&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;highWaterMark&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
&lt;span class="nx"&gt;heapProfile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="nx"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;memoryUsage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;memoryUsage&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;rss&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;memoryUsage&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;MB&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;usedMB&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rss&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;MB&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;usedMB&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;highWaterMark&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;highWaterMark&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;usedMB&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`sample_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;usedMB&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.heapprofile`&lt;/span&gt;
      &lt;span class="nx"&gt;heapProfile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The snapshot was a little more interesting. While a normal approach is to send a &lt;code&gt;SIGUSR2&lt;/code&gt; signal to the node process using &lt;code&gt;kill&lt;/code&gt;, I don't like that because you know what else can send a &lt;code&gt;SIGUSR2&lt;/code&gt;? Anything. You may have a package in your dependencies right now (or in the future) that emits that same signal and if it does, then your site is going down until the process completes. Too risky, plus a pain to use. Instead, I created a GraphQL mutation for it. I put it on our "Private" (superuser only) schema and can call it using &lt;a href="https://github.com/graphql/graphiql"&gt;GraphiQL&lt;/a&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wrdf8Cie--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/m6fg24xoislg7w5xpwdz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wrdf8Cie--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/m6fg24xoislg7w5xpwdz.png" alt="GraphiQL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code behind the endpoint is dead simple:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;profiler&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;v8-profiler-next&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;snap&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;profiler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;takeSnapshot&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transform&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;export&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;toJSON&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`Dumpy_&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.heapsnapshot`&lt;/span&gt;
&lt;span class="nx"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;reject&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;transform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;on&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;finish&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;snap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nx"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;We take a snapshot, pipe it to a file, delete the snap, and return the file name. Easy enough! Then, we just load the file into the Chrome DevTools Memory tab and away we go.&lt;/p&gt;
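&lt;p&gt;As an aside, newer Node versions (11.13+) can do the same thing without a third-party package via the built-in &lt;code&gt;v8&lt;/code&gt; module. A minimal sketch (it blocks the event loop while writing, just like any other snapshot):&lt;/p&gt;

```typescript
// Alternative to v8-profiler-next: Node's built-in v8.writeHeapSnapshot
// (Node 11.13+). It blocks until the whole heap is serialized to disk
// and returns the path it wrote to.
import * as v8 from 'v8'

const dumpHeap = () => {
  const fileName = `Dumpy_${Date.now()}.heapsnapshot`
  return v8.writeHeapSnapshot(fileName)
}
```

&lt;p&gt;Either way, the resulting file opens in the same DevTools view.&lt;/p&gt;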

&lt;h2&gt;
  
  
  Reading the Dump
&lt;/h2&gt;

&lt;p&gt;While the profile wasn't very helpful, the heap dump got me exactly what I needed. Let's take a look at a leak called &lt;code&gt;ServerEnvironment&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9PKFaX0I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jg9em9768ejcx71u3fhr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9PKFaX0I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/jg9em9768ejcx71u3fhr.png" alt="Dump"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our app, we do some light server side rendering (SSR) for generating emails. Since our app is powered by &lt;a href="https://relay.dev/"&gt;Relay&lt;/a&gt; (a great GraphQL client cache like Apollo), we use what I named a &lt;code&gt;ServerEnvironment&lt;/code&gt; to fetch the data, populate the components, and then go away. So why are there 39 instances? Who's still playing with those LEGOs?!&lt;/p&gt;

&lt;p&gt;The answer rests in the Retainers section. In plain English, I read the table like this, "&lt;code&gt;ServerEnvironment&lt;/code&gt; can't be garbage collected because it is item &lt;code&gt;56&lt;/code&gt; in a &lt;code&gt;Map&lt;/code&gt;, which can't be garbage collected because it is used by object &lt;code&gt;requestCachesByEnvironment&lt;/code&gt;. Additionally, it's being used by &lt;code&gt;environment&lt;/code&gt;, which is used by &lt;code&gt;_fetchOptions&lt;/code&gt;, which is used by &lt;code&gt;queryFetcher&lt;/code&gt; which is used by" ...you get it. So &lt;code&gt;requestCachesByEnvironment&lt;/code&gt; and &lt;code&gt;requestCache&lt;/code&gt; are the culprits.&lt;/p&gt;

&lt;p&gt;If I look for the first one, I find the offender in just a couple lines of code (edited for brevity, original file &lt;a href="https://github.com/facebook/relay/blob/597d2a17aa29d401830407b6814a5f8d148f632d/packages/relay-runtime/query/fetchQueryInternal.js#L37"&gt;here&lt;/a&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requestCachesByEnvironment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;getRequestCache&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;cached&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;requestCachesByEnvironment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;requestCache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Map&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nx"&gt;requestCachesByEnvironment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;requestCache&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;requestCachesByEnvironment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="kd"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;This is your classic memory leak. It's an object in the outermost closure of a file that's being written to by a function in an inner closure, with no &lt;code&gt;delete&lt;/code&gt; call to be found. As a general rule of thumb, writing to variables in outer closures is fine because their size is bounded, but writing to objects kept there often leads to problems like this since their growth is unbounded. Since the object isn't exported, we know we have to patch this file. To fix it, we could write a cleanup function, or we can ask ourselves 2 questions:&lt;br&gt;
1) Is that Map being iterated over? &lt;strong&gt;No&lt;/strong&gt;&lt;br&gt;
2) If the Map item is removed from the rest of the app does it need to exist in the Map? &lt;strong&gt;No&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since the answer to both questions is &lt;strong&gt;No&lt;/strong&gt;, it's an easy fix! Just turn the &lt;code&gt;Map&lt;/code&gt; into a &lt;code&gt;WeakMap&lt;/code&gt; and we're set! WeakMaps are like Maps, except they don't prevent their keys from being garbage collected. Pretty useful!&lt;/p&gt;
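&lt;p&gt;Sketched out, the fix looks something like this (the shape of the change, not the exact upstream diff):&lt;/p&gt;

```typescript
// The same helper backed by a WeakMap: the cache no longer keeps an
// environment alive once nothing else references it, so the whole
// per-environment Map can be garbage collected with it.
const requestCachesByEnvironment = new WeakMap()

const getRequestCache = (environment: object) => {
  let requestCache = requestCachesByEnvironment.get(environment)
  if (!requestCache) {
    requestCache = new Map()
    requestCachesByEnvironment.set(environment, requestCache)
  }
  return requestCache
}
```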

&lt;p&gt;The second retainer can be tracked down to &lt;code&gt;requestCache&lt;/code&gt;. Instead of a &lt;code&gt;Map&lt;/code&gt;, this is a plain old JavaScript object, again kept in the outermost closure (notice a pattern here? it's a bad pattern). While it'd be great to achieve this in a single closure, that'd require a big rewrite. A shorter, elegant solution is to wipe it if it's not running in the browser, &lt;a href="https://github.com/facebook/relay/pull/2883/files#diff-0b70544d53829f736aa57437fc3bd931"&gt;seen here&lt;/a&gt;. &lt;/p&gt;
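&lt;p&gt;The gist of it, as a hypothetical sketch (the names here are mine; the real change is in the linked PR):&lt;/p&gt;

```typescript
// Hypothetical sketch of the wipe-on-server idea: requestCache lives in the
// outermost closure, so on the server we reset it instead of letting it grow.
let requestCache: {[key: string]: unknown} = {}

const clearRequestCacheOnServer = () => {
  // Check for a browser via globalThis so this runs in Node and the browser
  if (typeof (globalThis as any).window === 'undefined') {
    requestCache = {} // SSR never reuses the cache, so start fresh
  }
}
```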

&lt;p&gt;With those 2 fixes, our &lt;code&gt;ServerEnvironment&lt;/code&gt; is free to be garbage collected and the memory leak is gone! All that's left to do is make the fixes upstream and use the new version. Unfortunately, that can take weeks, months, or never happen at all. For immediate gratification, I like to use the FANTASTIC &lt;a href="https://github.com/ramasilveyra/gitpkg"&gt;gitpkg&lt;/a&gt; CLI, which publishes a piece of a monorepo to a specific git tag of your fork. I never see folks write about it, but it has saved me so much time forking packages that I had to share it.&lt;/p&gt;

&lt;p&gt;Memory leaks happen to everyone. Please note that I'm not picking on code written by Facebook to be rude or insulting, or to take some weird political stance against their company ethics. It's simply because 1) these are memory leaks I found in my app, 2) they are textbook examples of the most common kind of leaks, and 3) Facebook is kind enough to open source their tooling for all to improve.&lt;/p&gt;

&lt;p&gt;Speaking of open source, if you'd like to spend your time writing open source code from anywhere in the world (👋 from Costa Rica) come join us! We're a bunch of ex-corporate folks on a mission to end pointless meetings &amp;amp; make work meaningful. Check us out at &lt;a href="https://www.parabol.co/join"&gt;https://www.parabol.co/join&lt;/a&gt; or message me directly.&lt;/p&gt;

</description>
      <category>react</category>
      <category>node</category>
      <category>javascript</category>
      <category>graphql</category>
    </item>
    <item>
      <title>Distributed State 101 - Why I forked Facebook's Relay</title>
      <dc:creator>Matt Krick</dc:creator>
      <pubDate>Fri, 12 Jul 2019 14:09:37 +0000</pubDate>
      <link>https://dev.to/mattkrick/distributed-state-101-why-i-forked-facebook-s-relay-1p7d</link>
      <guid>https://dev.to/mattkrick/distributed-state-101-why-i-forked-facebook-s-relay-1p7d</guid>
      <description>&lt;p&gt;Just over a year ago, I forked &lt;a href="https://relay.dev" rel="noopener noreferrer"&gt;Facebook's Relay&lt;/a&gt; to fix a bug that caused an incorrect state based on network latency (yikes!). While the concepts of publish queues and distributed state are pretty complex, the bug itself is darn simple and a great foray into distributed systems, which is why I'm using it here to illustrate the fundamentals (and gotchas!) of building a simple client cache. This isn't a slam against Facebook developers; bugs happen &amp;amp; the shackles of legacy code at a mega corp are real. Rather, if it's something that professional developers at Facebook can goof up on, it can happen to anyone, so let's learn from it!&lt;/p&gt;

&lt;h2&gt;
  
  
  State vs. Transforms
&lt;/h2&gt;

&lt;p&gt;The year is 1999 and I have a counter showing how many people are currently on my fresh new site. If I want that number to update in real time, my server could send 1 of 2 messages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;State: "Hey, the new value is 8."&lt;/li&gt;
&lt;li&gt;Transform: "Hey, add 1 to whatever your counter currently is."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;State works great for small things like a counter (8), whereas transforms work better for large things like a Google Doc (at position 5, insert "A"). With document stores like Relay, it may seem like a state update (replace old JSON with new JSON), but the server is just sending down a patch that Relay merges into a much larger document tree using a default transform. It then executes any extra transforms in the mutation &lt;code&gt;updater&lt;/code&gt; function. The appearance of state makes it simple, the workings of a transform make it powerful. The perfect combo!&lt;/p&gt;
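&lt;p&gt;A toy version of those 2 kinds of messages over the counter:&lt;/p&gt;

```typescript
// The same counter update expressed as state ("the value is 8")
// vs. as a transform ("add 1 to whatever you have").
type StateMsg = {kind: 'state'; value: number}
type TransformMsg = {kind: 'transform'; delta: number}

const applyMsg = (current: number, msg: StateMsg | TransformMsg) =>
  msg.kind === 'state' ? msg.value : current + msg.delta
```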

&lt;h2&gt;
  
  
  Updates and Lamport's Happened-Before
&lt;/h2&gt;

&lt;p&gt;In all client caches, there are 3 kinds of updates: Local, Optimistic, and Server. A local update originates from the client &amp;amp; stays on the client, so it only contains state for that session. An optimistic update originates from the client &amp;amp; simulates the result of a server update so actions feel snappy, regardless of latency. A server update originates from a server and &lt;em&gt;replaces&lt;/em&gt; the optimistic update, if available.&lt;/p&gt;

&lt;p&gt;In all 3 cases, there's just one rule to follow: &lt;em&gt;apply updates in the order they occurred&lt;/em&gt;. If I call an optimistic update, followed by a local update, the optimistic &lt;code&gt;updater&lt;/code&gt; should run first, then pass its result to the local &lt;code&gt;updater&lt;/code&gt;. This concept was cutting edge stuff when Leslie Lamport published it in 1978! Unfortunately, it's what Relay got wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Instead of processing updates in the order they occurred, Relay processes server updates, then local updates, then optimistic updates.&lt;/em&gt;&lt;/strong&gt; That means even though an optimistic update occurred first, Relay applies it &lt;em&gt;after&lt;/em&gt; the local update. That's the crux of the bug.&lt;/p&gt;

&lt;p&gt;Let's use that logic in a simple component like a volume slider that goes from 1 to 10. Say the volume is 3 and I optimistically add 1 to it. Then, I locally set the volume to 10. What's the result? If you guessed 10, you've correctly applied Lamport's relation. If you guessed 11, then you've got a broken app and a &lt;a href="https://github.com/facebook/relay/blob/1a94841b1d7809836ef5b71854d3ade2b5cb4dde/packages/relay-runtime/store/__tests__/RelayPublishQueue-test.js#L1208-L1209" rel="noopener noreferrer"&gt;bright future at Facebook&lt;/a&gt; (Kidding. I'm totally kidding. 😉).&lt;/p&gt;
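&lt;p&gt;Here's the slider as a sketch. Apply the updaters in the order they occurred and you get 10; apply the optimistic one last and you get 11:&lt;/p&gt;

```typescript
// Volume slider updaters: an optimistic +1 followed by a local "set to 10".
// The order of application decides the final answer.
type Updater = (volume: number) => number

const optimisticAddOne: Updater = (v) => v + 1
const localSetToTen: Updater = () => 10

// Fold the updaters over the initial state, left to right
const applyInOrder = (initial: number, updaters: Updater[]) =>
  updaters.reduce((v, u) => u(v), initial)
```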

&lt;h2&gt;
  
  
  A Better Approach
&lt;/h2&gt;

&lt;p&gt;If the current approach isn't mathematically sound, what's the alternative? The answer is pretty easy. Let's take a look at an example publish queue with 4 events: &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fp4wk2ziouimk6v24pevv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fp4wk2ziouimk6v24pevv.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above publish queue starts with 4 events: A local update, a server update, an optimistic update, and another local update. It doesn't matter what each update does because as long as they're applied in the order they occurred, we'll end up with the right answer. &lt;/p&gt;

&lt;p&gt;In Row 1, we know A &amp;amp; B are deterministic (the opposite of optimistic) so we can commit those, meaning we'll never have to "undo" what they did. However, C is optimistic. If the C from the server is divergent from the optimistic C, then everything following could be different as well. For example, what if D were to multiply the result of C by 2? So, we apply those updates to create a current state, but keep them around in case we have to replay them.&lt;/p&gt;

&lt;p&gt;In Row 2, we've got a save point that is the state after A and B have been applied. We've also kept all the events starting with the first optimistic event because they're all dependent on the result coming back from the server. As we wait for that server response, new events like E trickle in. We apply them so that the state is current but also hold onto them.&lt;/p&gt;

&lt;p&gt;In Row 3, the server event for C comes back! We remove the optimistic event and replace it with the server event. Starting from the save point, we commit every event until there's another optimistic event. Since there are no more optimistic events, the queue is empty and we're done! It's really that simple. Now, why does C from the server get to jump in the queue? That's because C &lt;em&gt;occurred&lt;/em&gt; at the time of the optimistic update, but because of latency, it wasn't &lt;em&gt;received&lt;/em&gt; until after E. If you grok that, you grok distributed data types. If you'd like to see what that looks like in code, the package is here: &lt;a href="https://www.github.com/mattkrick/relay-linear-publish-queue" rel="noopener noreferrer"&gt;relay-linear-publish-queue&lt;/a&gt;. Note that it depends on Relay merging &lt;a href="https://github.com/facebook/relay/pull/2791" rel="noopener noreferrer"&gt;this tiny PR&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With such a simple publish queue, it's possible to compare server events to optimistic events when they come in. If the server event just confirms what the optimistic event suspected, then we can flush the queue without performing a recalculation because we know it's correct. Performance gains to come! &lt;/p&gt;

&lt;h2&gt;
  
  
  Real World Application
&lt;/h2&gt;

&lt;p&gt;Theory is boring. Now that we understand it, we can get to the fun stuff! With a functioning publish queue, I built an online sprint retrospective for folks like me who don't like conference rooms. If you're not familiar with a retrospective, it's a meeting where teams anonymously write down what could've gone better last sprint, group the cards by theme, and then discuss the important issues. It's a great engineering habit that is slowly making its way into sales, marketing, and executive teams. While building the grouping phase, I didn't want to lose the ability for everyone to participate simultaneously. That meant building a system that could reliably share when someone else picked up and dragged a card: &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F7c0od5b82l6c5r55njlk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F7c0od5b82l6c5r55njlk.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you'd like to play around with the demo you can check it out &lt;a href="https://www.parabol.co/retro-demo" rel="noopener noreferrer"&gt;here&lt;/a&gt; (no signup necessary) or even &lt;a href="https://github.com/parabolinc/action" rel="noopener noreferrer"&gt;view the source code&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this clears up the purpose of a publish queue! If distributed systems sound interesting, this is only the beginning. From here, you can dive into data types such as Operational Transformations (what Google Docs uses) or serverless CRDTs, such as &lt;a href="https://github.com/automerge/automerge" rel="noopener noreferrer"&gt;Automerge&lt;/a&gt;. If you'd like to get paid to learn about these things while avoiding pants and mega corps, we're hiring a few more remote devs. Reach out.&lt;/p&gt;

</description>
      <category>graphql</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>react</category>
    </item>
  </channel>
</rss>
