<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Bryan English</title>
    <description>The latest articles on DEV Community by Bryan English (@bengl).</description>
    <link>https://dev.to/bengl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F161601%2F6397960b-8acc-4137-8a6b-36be4f24904e.jpg</url>
      <title>DEV Community: Bryan English</title>
      <link>https://dev.to/bengl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bengl"/>
    <language>en</language>
    <item>
      <title>Node.js Heap Dumps in 2021</title>
      <dc:creator>Bryan English</dc:creator>
      <pubDate>Mon, 18 Jan 2021 18:37:03 +0000</pubDate>
      <link>https://dev.to/bengl/node-js-heap-dumps-in-2021-5akm</link>
      <guid>https://dev.to/bengl/node-js-heap-dumps-in-2021-5akm</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/arbazsiddiqui/a-practical-guide-to-memory-leaks-in-node-js-2hbo"&gt;diagnosing memory leaks&lt;/a&gt;, one of the most useful tools in a developer's aresenal is the heap dump, or heap snapshot, which gives us insight into what objects are allocated on the JavaScript, and how many of them. &lt;/p&gt;

&lt;h2&gt;The Old Way&lt;/h2&gt;

&lt;p&gt;Traditionally, in Node.js, we've had two options for creating heap dumps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Using the &lt;a href="https://npm.im/heapdump"&gt;&lt;code&gt;heapdump&lt;/code&gt;&lt;/a&gt; module.&lt;/li&gt;
&lt;li&gt;Attaching a Chrome DevTools instance and using the Memory tab to create a heap snapshot.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In cases where it's feasible and simple, the second option is usually best, since it requires no additional software, and has a simple point-and-click interface to get the job done.&lt;/p&gt;

&lt;p&gt;In production environments, attaching DevTools is often not an option, so users are left with the &lt;code&gt;heapdump&lt;/code&gt; module. While this generally works without issue, there is an additional compilation step and a module to install in order to get the job done. These are obviously not insurmountable hurdles, but they can get in the way of solving a problem quickly.&lt;/p&gt;

&lt;h2&gt;The New Way&lt;/h2&gt;

&lt;p&gt;The good news is that, as of Node.js v12, you no longer need the external module: heap dump functionality is now part of the core API.&lt;/p&gt;

&lt;p&gt;To create a heap snapshot, you can just use &lt;a href="https://nodejs.org/dist/latest-v15.x/docs/api/v8.html#v8_v8_getheapsnapshot"&gt;&lt;code&gt;v8.getHeapSnapshot()&lt;/code&gt;&lt;/a&gt;. This returns a readable stream, which you can pipe to a file and then open in Chrome DevTools.&lt;/p&gt;

&lt;p&gt;For example, you can make a function like this that you can call whenever you want to create a heap dump file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;fs&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;v8&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;v8&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;createHeapSnapshot&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;snapshotStream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;v8&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;getHeapSnapshot&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="c1"&gt;// It's important that the filename end with `.heapsnapshot`,&lt;/span&gt;
  &lt;span class="c1"&gt;// otherwise Chrome DevTools won't open it.&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;.heapsnapshot`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;fileStream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;fs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;createWriteStream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;snapshotStream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pipe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;fileStream&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can call this function on a regular basis using &lt;code&gt;setInterval&lt;/code&gt;, or you can set a signal handler or some other mechanism to trigger the heap dumps manually.&lt;/p&gt;

&lt;p&gt;This new API function is available in all currently supported release lines of Node.js &lt;em&gt;except&lt;/em&gt; for v10, where you'll still need the &lt;code&gt;heapdump&lt;/code&gt; module for similar functionality.&lt;/p&gt;

&lt;p&gt;Feel free to use the snippet above in your own applications whenever trying to diagnose memory leaks in the future. Happy debugging!&lt;/p&gt;

</description>
      <category>node</category>
      <category>javascript</category>
      <category>debugging</category>
      <category>memoryleaks</category>
    </item>
    <item>
      <title>Yet another attempt at FFI for Node.js</title>
      <dc:creator>Bryan English</dc:creator>
      <pubDate>Mon, 25 May 2020 14:26:07 +0000</pubDate>
      <link>https://dev.to/bengl/yet-another-attempt-at-ffi-for-node-js-4knp</link>
      <guid>https://dev.to/bengl/yet-another-attempt-at-ffi-for-node-js-4knp</guid>
      <description>&lt;p&gt;&lt;em&gt;(You can skip the long-winded origin story and head straight to the good stuff if you want.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Earlier this year, I was working on optimizing a data path inside a Node.js library that creates a bunch of data, encodes it to MessagePack, then sends it off to an HTTP server. I thought that maybe we could do some interesting things in native code that would be harder to do in JavaScript, like an optimized MessagePack encoder, and less-costly multithreading. Naturally, calling into native code from Node.js incurs some overhead on its own, so I was exploring some alternatives.&lt;/p&gt;

&lt;p&gt;At the same time, I had been reading about &lt;a href="https://kernel.dk/io_uring.pdf"&gt;&lt;code&gt;io_uring&lt;/code&gt;&lt;/a&gt;, a new feature in the Linux kernel that allows for certain system calls to be made by passing the arguments through a ring buffer in memory that's shared by the process and the kernel, for extra speed. This reminded me about how some features of Node.js are implemented by sharing a Buffer between the native and JavaScript code, through which data can be passed. This technique is much simpler than what &lt;code&gt;io_uring&lt;/code&gt; does, mostly because it's done for a single purpose on a single thread. The clearest example I can think of in the Node.js API that uses this is &lt;code&gt;fs.stat()&lt;/code&gt;, in which the results of the &lt;code&gt;uv_fs_stat()&lt;/code&gt; call are stored in a Buffer which is then read from the JavaScript side.&lt;/p&gt;

&lt;p&gt;The thought progression here was that this technique could be used to call native functions from JavaScript in userland. For example, we could have a C function like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And then to call it, we could have a shared buffer which would effectively have the following struct inside it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;shared_buffer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;returnValue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;To call the function from JS, we first assign the values to &lt;code&gt;a&lt;/code&gt; and &lt;code&gt;b&lt;/code&gt; in our shared buffer. Then we call the function and read the return value from the struct:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;jsAdd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;uint32buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nb"&gt;Uint32Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;uint32buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;uint32buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="c1"&gt;// This next bit is hand-wavey. I'll get to that in a bit!&lt;/span&gt;
  &lt;span class="nx"&gt;callNativeFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;uint32buf&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buffer&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;uint32buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;callNativeFunction&lt;/code&gt; would retrieve the native function, give it the arguments from the shared buffer, and put the return value back into the shared buffer.&lt;/p&gt;

&lt;p&gt;At this point, great! We've got a way of calling native functions that bypasses a lot of the marshalling that happens between JS and native code by just putting data directly into memory from JS, and then reading the return value right out of it.&lt;/p&gt;

&lt;p&gt;The catch here is that &lt;code&gt;callNativeFunction&lt;/code&gt; is not a trivial thing to implement. You need a function pointer for the function you're going to call, and you need to know its signature. Fortunately, we can handle all this because we're only creating this native addon for one function. Case closed.&lt;/p&gt;
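&lt;p&gt;To make the protocol concrete, here's a pure-JavaScript stand-in for &lt;code&gt;callNativeFunction&lt;/code&gt;. This only simulates what the native side would do (the real thing would live in a native addon); the &lt;code&gt;add&lt;/code&gt; function here is a JS substitute for the C function above.&lt;/p&gt;

```javascript
// Simulated native side: read the arguments out of the shared buffer,
// call the target function, and write the return value into slot 0.
function callNativeFunction(fn, arrayBuffer) {
  const view = new Uint32Array(arrayBuffer);
  view[0] = fn(view[1], view[2]);
}

// Stands in for the native add(), with uint32_t-style wraparound.
function add(a, b) {
  return (a + b) >>> 0;
}

function jsAdd(a, b) {
  const uint32buf = new Uint32Array(3);
  uint32buf[1] = a;
  uint32buf[2] = b;
  callNativeFunction(add, uint32buf.buffer);
  return uint32buf[0]; // jsAdd(23, 32) gives 55
}
```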

&lt;h2&gt;But what about FFI?&lt;/h2&gt;

&lt;p&gt;FFI (Foreign Function Interface) refers to the ability to call functions in native code (that is, from a low-level language like C or C++) from a higher level language, like JS, Ruby or Python. These languages all support some way of calling functions dynamically, without knowing function signatures at compile time, because there is no compile time. (Okay, that's not technically true with JIT compilers and all, but for these purposes we can consider them non-compiled.)&lt;/p&gt;

&lt;p&gt;C/C++ does not have a built-in way of dynamically determining how to call a function, and with what arguments, like JavaScript does. Instead, the complexities of dealing with calling functions, passing them arguments, grabbing their return values, and handling the stack accordingly are all dealt with by the compiler, using techniques specific to the platform. We call these techniques "calling conventions" and it turns out there are &lt;em&gt;tons&lt;/em&gt; of them.&lt;/p&gt;

&lt;p&gt;In Node.js the typical thing to do is ignore all this and just write a custom wrapper in C or C++ that calls the exact functions we want. While dealing with these things at compile time is the norm, there &lt;em&gt;are&lt;/em&gt; ways of handling them at run time. Libraries like &lt;a href="https://sourceware.org/libffi/"&gt;&lt;code&gt;libffi&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://dyncall.org/"&gt;&lt;code&gt;dyncall&lt;/code&gt;&lt;/a&gt; exist to fill this void. Each of these libraries provides an interface to deliver arguments to functions and extract their return values. They handle the differences between calling conventions on many platforms. These calls can be built up dynamically, even from a higher-level language, as long as you create reasonable interfaces between &lt;code&gt;libffi&lt;/code&gt; or &lt;code&gt;dyncall&lt;/code&gt; and the higher-level language.&lt;/p&gt;

&lt;h2&gt;Enter &lt;code&gt;sbffi&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;The shared buffer technique didn't actually pan out for the code I was working on, because it turned out that converting the data into something readable by native code and &lt;em&gt;then&lt;/em&gt; into MessagePack was particularly costly. Moving operations to separate threads didn't really help.&lt;/p&gt;

&lt;p&gt;That being said, I still think the approach has value, and I'd like more folks to try it and see if it makes sense for their workloads, so I put together an FFI library for Node.js that uses the shared buffer technique to pass arguments and return values, and &lt;code&gt;dyncall&lt;/code&gt; to call the native functions dynamically. It's called &lt;a href="https://npm.im/sbffi"&gt;&lt;code&gt;sbffi&lt;/code&gt;&lt;/a&gt; and you can use it today as a simple way to call your already-compiled native libraries.&lt;/p&gt;

&lt;p&gt;Take our &lt;code&gt;add&lt;/code&gt; example from above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="c1"&gt;// add.c&lt;/span&gt;
&lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;uint32_t&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now assume we've compiled it to a shared library called &lt;code&gt;libadd.so&lt;/code&gt;. We can make the &lt;code&gt;add&lt;/code&gt; function available to JavaScript with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// add.js&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;assert&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;assert&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getNativeFunction&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sbffi&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;add&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;getNativeFunction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/path/to/libadd.so&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Full path to the shared library.&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;add&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// The function provided by the library.&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;uint32_t&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// The return value type.&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;uint32_t&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;uint32_t&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c1"&gt;// The argument types.&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;assert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;strictEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;23&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;It turns out that while dynamically building up the function calls incurs some noticeable overhead, this approach is &lt;a href="https://github.com/bengl/sbffi#benchmarks"&gt;relatively quick&lt;/a&gt;. Of course, this test is for a very small function that does very little. Your mileage may vary, but it may be worth trying the shared buffer approach, either manually or with &lt;code&gt;sbffi&lt;/code&gt;, the next time you need to call into native code from Node.js.&lt;/p&gt;

</description>
      <category>node</category>
      <category>ffi</category>
      <category>javascript</category>
      <category>c</category>
    </item>
    <item>
      <title>Butts, and the Internet</title>
      <dc:creator>Bryan English</dc:creator>
      <pubDate>Wed, 23 Oct 2019 13:44:01 +0000</pubDate>
      <link>https://dev.to/bengl/butts-and-the-internet-3cgf</link>
      <guid>https://dev.to/bengl/butts-and-the-internet-3cgf</guid>
      <description>&lt;h2&gt;First... Butts&lt;/h2&gt;

&lt;p&gt;Let's talk about butts. Specifically horse butts.&lt;/p&gt;

&lt;p&gt;There's a story that's been going around the Internet, probably since its inception, about how the dimensions of spaceship parts are indirectly derived from the width of a horse's butt. The short version is something like this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Roman chariots were typically pulled by two horses. Therefore chariots were about two horse butts wide. When the Romans built stone roads, they put ruts in to keep the carriages aligned. They used the existing widths of ruts as a guideline for the stone ruts, and then carriages throughout Europe were built for those new stone ruts. Fast forward to the industrial age and you find that European railroad builders used the ruts in the stone carriageways as a guide for how wide to build the rails (i.e. the gauge). This meant a whole bunch of trains in Europe being built approximately two horse butts wide. When America started building railroads, European engineers were brought over and used the same measurements. A final fast forward to the space age, and you've got rocket boosters for the Space Shuttle being transported by rail through rail tunnels, and so they need to be skinny enough to fit through those tunnels.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jXwuhZXa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://absfreepic.com/absolutely_free_photos/small_photos/railway-tunnel-3216x2144_29447.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jXwuhZXa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://absfreepic.com/absolutely_free_photos/small_photos/railway-tunnel-3216x2144_29447.jpg" alt="A typical two-horse-butt train tunnel."&gt;&lt;/a&gt;&lt;br&gt;A typical two-horse-butt train tunnel.
  &lt;/p&gt;

&lt;p&gt;There are some obvious problems with this. The United States didn't have a common track gauge until after the Civil War, and it was only chosen because it happened to be the only one used in the North. Even more glaring here is the fact that train tunnels are quite a bit wider than train tracks, and in fact were not an issue they had to design around for the shuttle rocket boosters. &lt;a href="https://www.snopes.com/fact-check/railroad-gauge-chariots/"&gt;Snopes does a great job of tearing this one down.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As it turns out, simple boring happenstance is the main reason these things seem to line up as they do.&lt;/p&gt;

&lt;h2&gt;CR+LF&lt;/h2&gt;

&lt;p&gt;Way back in the 1960s, the ISO and ANSI (then called ASA) were in the process of standardizing character sets. Part of any set of printable characters is a way to indicate that text needs to appear on a new line. The two contenders were the &lt;code&gt;CR+LF&lt;/code&gt; two-byte combo, versus a single &lt;code&gt;LF&lt;/code&gt; on its own. In C-like languages, these are represented as &lt;code&gt;\r\n&lt;/code&gt; and &lt;code&gt;\n&lt;/code&gt; respectively. The ISO drafts allowed either &lt;code&gt;CR+LF&lt;/code&gt; or &lt;code&gt;LF&lt;/code&gt;. The ASA draft only used &lt;code&gt;CR+LF&lt;/code&gt;. Either way, both standards supported a two-character sequence to produce the new line effect.&lt;/p&gt;

&lt;p&gt;But why? Surely one character ought to do it. Indeed, in most use cases today, we only use &lt;code&gt;LF&lt;/code&gt;, so what was the need?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qG7bvJha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.gutenberg.org/files/53481/53481-h/images/p12.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qG7bvJha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.gutenberg.org/files/53481/53481-h/images/p12.jpg" alt="Teletype Model 33"&gt;&lt;/a&gt;&lt;br&gt;This thing is why we have CR+LF
  &lt;/p&gt;

&lt;p&gt;As it turns out, a lot of computing at the time was done using Teletype Model 33 ASR machines as terminals for input and output. These machines required both instructions: &lt;code&gt;CR&lt;/code&gt; ("carriage return") to bring the print head back to the start of the line, and &lt;code&gt;LF&lt;/code&gt; ("line feed") to move it down one line.&lt;/p&gt;

&lt;p&gt;We no longer use Teletype machines, and haven't for some time. That hasn't stopped the various twists and turns of history from keeping &lt;code&gt;CR+LF&lt;/code&gt; alive, long after the original technical need for it became obsolete.&lt;/p&gt;

&lt;p&gt;When Unix arrived on the scene in the 1970s, it used the &lt;code&gt;LF&lt;/code&gt; character alone to denote newline transitions in text files, taking the shorter option in the ISO specification. Despite this efficiency, later operating systems like MS-DOS and Windows preferred the &lt;code&gt;CR+LF&lt;/code&gt; line delimiter, adhering to both standards.&lt;/p&gt;

&lt;p&gt;In 1989, the earliest version of the World Wide Web was born, and with it, HTTP. Like any other text format, it needed a way to represent newlines. From HTTP/0.9 straight through to HTTP/1.1, &lt;code&gt;CR+LF&lt;/code&gt; was used to denote the end of an HTTP message, and in the case of later versions to delimit headers. Part of the reason for using the two-character form was the differences in text formats between operating systems. HTTP/2 and HTTP/3 now use a compressed binary header format that does not make use of &lt;code&gt;CR+LF&lt;/code&gt; to delimit headers, but since &lt;a href="https://w3techs.com/technologies/details/ce-http2/all/all"&gt;only about 41% of websites use HTTP/2&lt;/a&gt;, and HTTP/3 isn't standardized yet, you're still likely using &lt;code&gt;CR+LF&lt;/code&gt; under the hood in 2019, regardless of which operating system you use.&lt;/p&gt;
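&lt;p&gt;You can see the delimiter for yourself by writing out a minimal HTTP/1.1 request byte-for-byte (the host &lt;code&gt;example.com&lt;/code&gt; is just a placeholder):&lt;/p&gt;

```javascript
// Every header line ends with CR+LF, and a bare CR+LF on its own
// marks the end of the headers.
const request =
  'GET / HTTP/1.1\r\n' +
  'Host: example.com\r\n' +
  'Connection: close\r\n' +
  '\r\n';

// Splitting on CR+LF recovers the request line and headers; the
// empty string at index 3 is where the headers end.
const lines = request.split('\r\n');
```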

&lt;p&gt;Much like with horse butts, there's a cool story of how some obsolete technology makes a surprise appearance in modern technology. Unlike the horse butts, we don't need to squint to see it right there in the technology we're using. In both cases, it's still a matter of boring happenstance that caused an old, weird technical decision to last many years longer than it should have.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>http</category>
      <category>web</category>
      <category>horses</category>
    </item>
  </channel>
</rss>
