<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: linkurious-dev</title>
    <description>The latest articles on DEV Community by linkurious-dev (@linkuriousdev).</description>
    <link>https://dev.to/linkuriousdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F404430%2F57a32941-59c4-4570-bbbc-02db2a0b40a8.png</url>
      <title>DEV Community: linkurious-dev</title>
      <link>https://dev.to/linkuriousdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/linkuriousdev"/>
    <language>en</language>
    <item>
      <title>Choosing the right tools to test a visualization library</title>
      <dc:creator>linkurious-dev</dc:creator>
      <pubDate>Wed, 24 Jun 2020 10:58:06 +0000</pubDate>
      <link>https://dev.to/linkuriousdev/choosing-the-right-tools-to-test-a-visualization-library-45po</link>
      <guid>https://dev.to/linkuriousdev/choosing-the-right-tools-to-test-a-visualization-library-45po</guid>
      <description>&lt;h2&gt;Stories from the trenches&lt;/h2&gt;

&lt;p&gt;At Linkurious, we’ve designed Linkurious Enterprise, a platform that leverages the power of graphs and graph visualizations to help analysts and investigators around the globe fight financial crime.&lt;/p&gt;

&lt;p&gt;One of the main features of Linkurious Enterprise is a user-friendly graph visualization interface aimed at non-technical users.&lt;br&gt;
In 2015, unhappy with the state of JavaScript graph visualization libraries, we started developing our own: Ogma.&lt;/p&gt;

&lt;p&gt;Ogma is a JavaScript library we built that is focused on network visualization: you may have seen networks visualized in JavaScript before with other tools like D3.js or Sigma.js, but for us it was very important to enable some specific features that were not yet available in other libraries, hence the creation of the Ogma visualization library from the ground up.&lt;/p&gt;
&lt;h1&gt;The problem&lt;/h1&gt;

&lt;p&gt;As part of our journey while developing our graph visualization library, we encountered many challenges. One of these challenges is: &lt;strong&gt;what is the best way to test a visualization library?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Testing visualization libraries matters because the complexity of the codebase is very high: several algorithms, multiple renderers and a vast API surface make it very hard to keep things manageable without an automated system.&lt;br&gt;
Given this complexity, and the resources available for the task, coming up with a testing solution that covers all the many aspects of a visualization library is non-trivial.&lt;/p&gt;
&lt;h1&gt;Our solution&lt;/h1&gt;

&lt;p&gt;We think that testing a library is not just about testing the library itself, but about testing the whole experience of using it as a developer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;coding with the library itself&lt;/li&gt;
&lt;li&gt;reading the documentation&lt;/li&gt;
&lt;li&gt;using the working examples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After several iterations, we ended up with a mix of approaches that we think works great:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;unit tests&lt;/li&gt;
&lt;li&gt;integration tests&lt;/li&gt;
&lt;li&gt;rendering tests&lt;/li&gt;
&lt;li&gt;examples testing&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Unit tests&lt;/h2&gt;

&lt;p&gt;How many times have you been told to write unit tests? It should come as no surprise, then, that we use them.&lt;/p&gt;

&lt;p&gt;The interesting thing is that we are not 100% focused on these: writing unit tests is pretty expensive, as it requires testing each single bit of the library in isolation across multiple scenarios, consuming a lot of time and resources.&lt;/p&gt;

&lt;p&gt;Because of that, we take a more pragmatic approach instead, as &lt;a href="https://twitter.com/rauchg/status/807626710350839808"&gt;Guillermo Rauch puts it&lt;/a&gt;:&lt;/p&gt;


&lt;blockquote class="ltag__twitter-tweet"&gt;

  &lt;div class="ltag__twitter-tweet__main"&gt;
    &lt;div class="ltag__twitter-tweet__header"&gt;
      &lt;img class="ltag__twitter-tweet__profile-image" src="https://res.cloudinary.com/practicaldev/image/fetch/s--oVN8Phr3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://pbs.twimg.com/profile_images/871555682608136205/yMs8Gnot_normal.jpg" alt="Guillermo Rauch profile image"&gt;
      &lt;div class="ltag__twitter-tweet__full-name"&gt;
        Guillermo Rauch
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__username"&gt;
        @rauchg
      &lt;/div&gt;
      &lt;div class="ltag__twitter-tweet__twitter-logo"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--P4t6ys1m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://practicaldev-herokuapp-com.freetls.fastly.net/assets/twitter-f95605061196010f91e64806688390eb1a4dbc9e913682e043eb8b1e06ca484f.svg" alt="twitter logo"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__body"&gt;
      Write tests. Not too many. Mostly integration.
    &lt;/div&gt;
    &lt;div class="ltag__twitter-tweet__date"&gt;
      16:43 PM - 10 Dec 2016
    &lt;/div&gt;


    &lt;div class="ltag__twitter-tweet__actions"&gt;
      &lt;a href="https://twitter.com/intent/tweet?in_reply_to=807626710350839808" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-reply-action.svg" alt="Twitter reply action"&gt;
      &lt;/a&gt;
      &lt;a href="https://twitter.com/intent/retweet?tweet_id=807626710350839808" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-retweet-action.svg" alt="Twitter retweet action"&gt;
      &lt;/a&gt;
      185
      &lt;a href="https://twitter.com/intent/like?tweet_id=807626710350839808" class="ltag__twitter-tweet__actions__button"&gt;
        &lt;img src="/assets/twitter-like-action.svg" alt="Twitter like action"&gt;
      &lt;/a&gt;
      513
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/blockquote&gt;


&lt;h2&gt;Integration tests&lt;/h2&gt;

&lt;p&gt;Integration tests are not so different from unit tests; some people even consider them unit tests as well. The biggest difference is that integration tests are run against the external API of the library rather than against specific modules.&lt;/p&gt;

&lt;p&gt;This approach exercises a wider spectrum of the code, and from a library point of view it is the closest we can get to the developer experience, letting us catch and keep control of bugs where users would actually encounter them.&lt;/p&gt;

&lt;p&gt;Developers using the library only see the API as the gateway to its state and behaviour: that is why we want to stress this side as much as we can, to catch both new bugs and regressions before the release process.&lt;/p&gt;

&lt;p&gt;For this reason the Ogma API is covered as much as possible, and a coverage tool is fundamental here to verify that all paths are reached before going down the rabbit hole of inner modules - where unit tests kick in for specific and edge-case testing.&lt;/p&gt;
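&lt;p&gt;To make the distinction concrete, here is a minimal sketch of what “testing through the public surface” looks like. The tiny graph API below is a hypothetical stand-in, not Ogma’s actual API:&lt;/p&gt;

```javascript
// Hypothetical, tiny graph library: only the returned object is public.
function createGraph() {
  const nodes = new Map(); // internal state, never touched by the tests
  const edges = [];
  return {
    addNode(id) { nodes.set(id, { id }); return this; },
    addEdge(source, target) { edges.push({ source, target }); return this; },
    getNodeCount() { return nodes.size; },
    getEdgeCount() { return edges.length; },
  };
}

// An integration test drives only the public API, exactly as a user would,
// never reaching into the internal Map/Array representation.
const graph = createGraph();
graph.addNode("a").addNode("b").addEdge("a", "b");

const nodeCount = graph.getNodeCount();
const edgeCount = graph.getEdgeCount();
```

In the real suite the same kind of calls sit inside mocha’s describe/it blocks with chai assertions.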

&lt;h3&gt;Mind cross-browser compatibility&lt;/h3&gt;

&lt;p&gt;While Ogma can also run in a Node.js process, the fact that it is a visualization library makes it extra important that it works flawlessly across browsers. That’s why all integration tests are run against a wide set of browsers and operating systems: from Internet Explorer 11 on Windows 7 to the latest Chrome version on macOS.&lt;/p&gt;

&lt;h3&gt;Tools:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://mochajs.org/"&gt;mocha.js&lt;/a&gt; (in combination with &lt;a href="https://www.chaijs.com/"&gt;chai.js&lt;/a&gt;) for both Node.js and cross-browser environments&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.browserstack.com/"&gt;Browserstack&lt;/a&gt; (or any alternative works as well)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://istanbul.js.org/"&gt;nyc&lt;/a&gt; for code coverage&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Process:&lt;/h3&gt;

&lt;p&gt;These tests are also run in the CI for every commit on PRs in each repository, in both Node.js and cross-browser environments.&lt;/p&gt;

&lt;p&gt;A pre-push hook is in place to run them locally in Node.js, preventing developers from breaking things when pushing upstream.&lt;br&gt;
A check against stray &lt;code&gt;.only&lt;/code&gt; calls is also in place here.&lt;/p&gt;
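&lt;p&gt;Such a check can be as small as scanning the test sources for stray &lt;code&gt;.only&lt;/code&gt; calls (which would otherwise silently skip the rest of the suite) before pushing. A minimal sketch - the file list and pattern are illustrative, not our exact hook:&lt;/p&gt;

```javascript
// Minimal sketch of a pre-push ".only" guard: fail if any test file still
// contains a stray describe.only / it.only / context.only call.
const ONLY_PATTERN = /\b(?:describe|it|context)\.only\s*\(/;

function findStrayOnly(files) {
  // files: array of { path, source }
  return files
    .filter((f) => ONLY_PATTERN.test(f.source))
    .map((f) => f.path);
}

const offenders = findStrayOnly([
  { path: "test/api.spec.js", source: "describe('api', () => {})" },
  { path: "test/render.spec.js", source: "it.only('draws', () => {})" },
]);

if (offenders.length > 0) {
  console.error(`Stray .only found in: ${offenders.join(", ")}`);
  // a real hook would call process.exit(1) here to abort the push
}
```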

&lt;h2&gt;Rendering tests&lt;/h2&gt;

&lt;p&gt;Once we’ve checked that all the code is good, that should be enough to release, right? Well, &lt;em&gt;not yet&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The most important feature of a visualization library is its… rendering output.&lt;/p&gt;

&lt;p&gt;We can test the rendering instructions sent to the rendering engine and validate those, but the final result is something that cannot simply be spotted from logs or code.&lt;/p&gt;

&lt;p&gt;That’s why rendering tests become important to support the QA of the library and reduce the number of regression bugs. Ogma provides three different rendering engines (SVG, Canvas and WebGL), and each browser has its own quirks with each of them that we need to spot before releasing a new version.&lt;/p&gt;

&lt;p&gt;In this context, tools like Puppeteer or Selenium come in very handy for quickly putting together visual regression tests: a test is a Web page containing a network visualization with specific attributes, which gets rendered and exported as an image, then diffed against a reference image.&lt;/p&gt;
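&lt;p&gt;The diffing step itself is conceptually simple. Here is a heavily simplified stand-in for what pixelmatch does for us (raw RGBA buffers instead of decoded PNGs, and no anti-aliasing detection):&lt;/p&gt;

```javascript
// Simplified stand-in for pixelmatch: count the pixels whose RGBA channels
// differ from the reference image by more than a tolerance.
function countDifferentPixels(actual, reference, width, height, tolerance = 8) {
  let diff = 0;
  for (let i = 0; i < width * height * 4; i += 4) {
    const delta =
      Math.abs(actual[i] - reference[i]) +         // R
      Math.abs(actual[i + 1] - reference[i + 1]) + // G
      Math.abs(actual[i + 2] - reference[i + 2]) + // B
      Math.abs(actual[i + 3] - reference[i + 3]);  // A
    if (delta > tolerance) diff++;
  }
  return diff;
}

// A 2x1 "screenshot": the first pixel matches, the second one is off.
const reference = new Uint8ClampedArray([255, 0, 0, 255, 0, 0, 255, 255]);
const actual    = new Uint8ClampedArray([255, 0, 0, 255, 0, 255, 0, 255]);
const mismatches = countDifferentPixels(actual, reference, 2, 1);
```

A visual regression test would typically fail when the mismatch count exceeds a small threshold, keeping the exported image around for inspection.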

&lt;h3&gt;Tools:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/puppeteer/puppeteer"&gt;puppeteer&lt;/a&gt; or Browserstack&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/arian/pngjs"&gt;pngjs&lt;/a&gt; and &lt;a href="https://github.com/mapbox/pixelmatch"&gt;pixelmatch&lt;/a&gt; for image diffing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Process:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;These tests are run at every commit on PRs on the CI. &lt;/li&gt;
&lt;li&gt;Because rendering engines may differ, tests and reference images are different for each platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Documentation examples testing&lt;/h2&gt;

&lt;p&gt;We arrive at the end of our post with the most undervalued side of testing: the resources given to the developer beyond the library itself!&lt;/p&gt;

&lt;p&gt;Documentation is hard to test automatically. There are probably tools out there smart enough - perhaps leveraging some sort of NLP - to verify documentation text, but in our case that is too complex to handle right now: a human reading and checking is still the best we can do for the text. What we can actually check are the types and definitions, by combining the documentation with the TypeScript definition file.&lt;/p&gt;

&lt;p&gt;The TS definition file should expose only those types necessary to interact with the API, so some internal types are stripped out of the definitions, and an integrity check is performed on the file to verify its consistency.&lt;/p&gt;

&lt;p&gt;The other side of the documentation is the examples, often used by developers to better understand the features of the library: it is very important that these examples don’t break and that developers are able to run them locally.&lt;/p&gt;

&lt;h3&gt;Tools:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/puppeteer/puppeteer"&gt;puppeteer&lt;/a&gt; for examples checking (a primitive check for thrown exceptions)&lt;/li&gt;
&lt;li&gt;
&lt;a href="http://typescriptlang.org/"&gt;TypeScript&lt;/a&gt; + &lt;a href="https://jsdoc.app/"&gt;JSDoc&lt;/a&gt; to generate the right signature types&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tsc&lt;/code&gt; for the definition file integrity check&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Process:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Examples tests are run at every commit on PRs on the CI.&lt;/li&gt;
&lt;li&gt;Definition files are checked both in a pre-commit hook and on the CI at each commit.&lt;/li&gt;
&lt;li&gt;The codebase was recently ported from JavaScript to TypeScript, so signature checking is permanent during development - probably a theme for another blog post in the future ;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;Conclusions&lt;/h1&gt;

&lt;p&gt;The different testing approaches combined to span the whole developer experience, from the documentation to the actual rendering output, providing an effective way to detect and handle breaking changes before they could reach the release script.&lt;/p&gt;

&lt;p&gt;Creating the right mix of tests has been a long effort for the team over the past years, recently consolidated with pre-commit/pre-push hooks and a re-designed CI flow that provides quick feedback.&lt;/p&gt;

&lt;p&gt;The mix of different levels of testing helped us tame the complexity of the library, reducing the number of bugs we receive, preventing regressions and increasing the speed of enhancements, all within the resources available in the company.&lt;/p&gt;

&lt;p&gt;We hope you’ve enjoyed the blog post and got some value out of it. If you have any questions, suggestions or comments, please let us know. And remember, we’re always looking for nice people who like to test, so let us know if you like to write code and test it!&lt;/p&gt;

</description>
      <category>visualisation</category>
      <category>testing</category>
      <category>webgl</category>
      <category>tests</category>
    </item>
    <item>
      <title>To WASM or not to WASM?</title>
      <dc:creator>linkurious-dev</dc:creator>
      <pubDate>Fri, 12 Jun 2020 08:06:34 +0000</pubDate>
      <link>https://dev.to/linkuriousdev/to-wasm-or-not-to-wasm-3803</link>
      <guid>https://dev.to/linkuriousdev/to-wasm-or-not-to-wasm-3803</guid>
      <description>&lt;h1&gt;A WASM benchmark story&lt;/h1&gt;

&lt;p&gt;At &lt;a href="https://linkurio.us"&gt;Linkurious&lt;/a&gt;, we build Linkurious Enterprise, a Web platform that leverages the power of graphs and graph visualizations to help companies and governments around the globe fight financial crime.&lt;/p&gt;

&lt;p&gt;One of the main features of Linkurious Enterprise is a user-friendly graph visualization interface aimed at non-technical users.&lt;/p&gt;

&lt;p&gt;In 2015, unhappy with the state of JavaScript graph visualization libraries, we started developing our own: Ogma.&lt;/p&gt;

&lt;p&gt;Ogma is a JavaScript library we built that is focused on network visualization, providing excellent rendering and computing performance. You may have seen networks visualized in JavaScript before with other tools like D3.js or Sigma.js, but for us it was very important to enable some specific features and performance levels not available in other libraries, hence the creation of the Ogma visualization library from the ground up.&lt;/p&gt;

&lt;h1&gt;The problem&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Wdzaw1Pg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sfwi0q2i8mc969z4kcal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Wdzaw1Pg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sfwi0q2i8mc969z4kcal.png" alt="Galaxies of nodes"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ogma has been designed around state-of-the-art algorithms to provide the best performance in the field of network visualization: from a first-class WebGL rendering engine, to the adoption of WebWorkers to keep the library interactive during long-running tasks, and finally top-class implementations of graph layout algorithms.&lt;/p&gt;

&lt;p&gt;Since its first announcement, WebAssembly has promised great performance – comparable to native – with very little effort from the developer beyond writing the source code in a language that compiles to efficient native code.&lt;br&gt;
After some time and many more announcements on the WebAssembly side, we decided to give it a try and run a thorough benchmark before jumping on the (performant) WASM bandwagon.&lt;/p&gt;

&lt;p&gt;The perfect candidate for this kind of investigation is graph layouts: they are CPU intensive, crunching numbers over and over until a solution converges.&lt;br&gt;
The promise of WASM is exactly to solve this kind of problem with better memory and CPU efficiency, at a lower level than the JavaScript interpreter.&lt;/p&gt;

&lt;h2&gt;Our investigation&lt;/h2&gt;

&lt;p&gt;Our investigation focused first on finding a candidate to benchmark: a typical graph layout algorithm that could easily be ported to different languages using similar structures.&lt;br&gt;
The choice landed on the n-body algorithm: it is often the baseline of many force-directed layout algorithms and the most expensive part of the layout pipeline. Speeding up this specific part of the pipeline would bring great value to all the force-directed algorithms Ogma implements.&lt;/p&gt;
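&lt;p&gt;For reference, the kernel being benchmarked boils down to an O(n²) loop accumulating pairwise repulsive forces. A simplified JavaScript sketch of one such pass (illustrative, not the exact benchmark code):&lt;/p&gt;

```javascript
// One O(n^2) repulsion pass of a simplified 2D n-body step: every pair of
// points pushes each other apart with a force ~ 1 / distance^2.
function repulsionStep(xs, ys, fx, fy) {
  const n = xs.length;
  for (let i = 0; i < n; i++) {
    for (let j = i + 1; j < n; j++) {
      const dx = xs[i] - xs[j];
      const dy = ys[i] - ys[j];
      const distSq = dx * dx + dy * dy + 1e-9; // avoid division by zero
      const force = 1 / distSq;
      fx[i] += dx * force; fy[i] += dy * force;
      fx[j] -= dx * force; fy[j] -= dy * force; // equal and opposite
    }
  }
}

const xs = new Float64Array([0, 1]);
const ys = new Float64Array([0, 0]);
const fx = new Float64Array(2);
const fy = new Float64Array(2);
repulsionStep(xs, ys, fx, fy);
// fx[0] is now negative (point 0 pushed left), fx[1] positive (pushed right).
```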

&lt;h1&gt;The benchmark&lt;/h1&gt;

&lt;p&gt;As Max De Marzi said &lt;a href="https://maxdemarzi.com/2019/07/22/vendor-benchmarks/"&gt;on his blog in the summer of 2019&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;There are lies, damned lies, and benchmarks.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Building a fair benchmark is often not possible because it’s hard to reproduce real-world scenarios: creating the right environment for a complex system to perform is always incredibly hard, because while it’s easy to control external factors in a laboratory benchmark, in real life many things contribute to the final “perceived” performance.&lt;/p&gt;

&lt;p&gt;In our case, the benchmark focuses on a single, well-defined task: the n-body algorithm.&lt;br&gt;
It is a clear, well-known algorithm used to benchmark languages by &lt;a href="https://benchmarksgame-team.pages.debian.net/benchmarksgame/performance/nbody.html"&gt;reputable organizations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As in any fair benchmark comparison, there are some rules we defined for the different languages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The code structure should be similar across the different implementations&lt;/li&gt;
&lt;li&gt;No multi-process or multi-thread concurrency allowed&lt;/li&gt;
&lt;li&gt;No &lt;a href="https://en.wikipedia.org/wiki/SIMD"&gt;SIMD&lt;/a&gt; allowed&lt;/li&gt;
&lt;li&gt;Only stable versions of the compilers: no nightly, beta, alpha or pre-alpha versions allowed&lt;/li&gt;
&lt;li&gt;Only the latest versions of the compilers for each source language&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the rules were defined, we could move on to the algorithm implementation. But first, we had to decide which other languages would be used for the benchmark:&lt;/p&gt;

&lt;h2&gt;The JS competitors&lt;/h2&gt;

&lt;p&gt;WASM is a compilation target: even though it is advertised as “human readable” assembly code, writing plain WASM by hand is not a (mentally) sane choice for us. We therefore surveyed languages that compile to WASM and picked the following candidates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;C&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.rust-lang.org/"&gt;Rust&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.assemblyscript.org/"&gt;AssemblyScript&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The n-body algorithm has been implemented in the 3 languages above and tested against the JavaScript baseline implementation.&lt;/p&gt;

&lt;p&gt;On each implementation, we kept the number of points at 1000 and ran the algorithm with different numbers of iterations. For each run, we measured how long it took to perform the computations.&lt;/p&gt;

&lt;p&gt;The setup of the benchmark was the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js v12.9.1&lt;/li&gt;
&lt;li&gt;Chrome Version 79.0.3945.130 (Official Build) (64-bit)&lt;/li&gt;
&lt;li&gt;clang version 10.0.0 - C language version&lt;/li&gt;
&lt;li&gt;emcc 1.39.6 - Emscripten gcc/clang-like replacement + linker&lt;/li&gt;
&lt;li&gt;cargo 1.40.0&lt;/li&gt;
&lt;li&gt;wasm-pack 0.8.1&lt;/li&gt;
&lt;li&gt;AssemblyScript v0.9.0&lt;/li&gt;
&lt;li&gt;macOS 10.15.2&lt;/li&gt;
&lt;li&gt;MacBook Pro 2017 Retina&lt;/li&gt;
&lt;li&gt;Intel Dual Core i5 2.3 GHz, 8GB DDR3 with 256GB SSD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Not a top-of-the-class machine for a benchmark, but we’re testing a WASM build that will run in a browser context, which usually does not have access to all the cores and RAM anyway.&lt;/p&gt;

&lt;p&gt;To put some spice into the benchmark, we produced two versions of each implementation: one where each point in the n-body system has a 64-bit numeric coordinate representation, and another with a 32-bit representation.&lt;/p&gt;
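&lt;p&gt;In JavaScript the baseline has no real choice here - a &lt;code&gt;Number&lt;/code&gt; is always a 64-bit float - but typed arrays make the gap between the two representations easy to demonstrate:&lt;/p&gt;

```javascript
// JS Numbers are always 64-bit doubles; typed arrays let us see what a
// 32-bit (float) representation does to the same coordinate value.
const coordinate = 0.1 + 0.2; // 0.30000000000000004 as a double

const f64 = new Float64Array([coordinate]);
const f32 = new Float32Array([coordinate]);

const doubleValue = f64[0]; // keeps the full 64-bit precision
const floatValue = f32[0];  // rounded to the nearest 32-bit float
const precisionLoss = Math.abs(doubleValue - floatValue);
// precisionLoss is small but non-zero: the 32-bit builds trade this
// precision for (potentially) faster, cache-friendlier math.
```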

&lt;p&gt;Another note concerns the “double” Rust implementation: the benchmark originally included a “raw” Rust implementation written with “unsafe” code and without any particular WASM toolchain. Later on, an additional “safe” Rust implementation was developed to leverage the “wasm-pack” toolchain, which promised easier JS integration and better memory management in WASM.&lt;/p&gt;

&lt;h2&gt;Crunching the numbers&lt;/h2&gt;

&lt;p&gt;To crunch the numbers, two main environments were tested: Node.js and a browser environment (Chrome).&lt;br&gt;
Both benchmarks run in a “warm” scenario: the Garbage Collector was not reset before each benchmark suite. In our experiments, running the GC after each suite had no particular effect on the numbers.&lt;/p&gt;
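&lt;p&gt;The measurement loop behind such a “warm” run can stay very small: the task is simply executed repeatedly, with nothing reset in between. A sketch of this kind of harness (the workload below is a placeholder, not the n-body code):&lt;/p&gt;

```javascript
// Minimal "warm" benchmark harness: run the task several times without
// resetting anything in between, and record each wall-clock duration.
function benchmark(task, runs) {
  const durations = [];
  for (let r = 0; r < runs; r++) {
    const start = performance.now();
    task();
    durations.push(performance.now() - start);
  }
  return durations;
}

// Placeholder workload standing in for an n-body layout run.
const workload = () => {
  let acc = 0;
  for (let i = 1; i <= 100000; i++) acc += 1 / (i * i);
  return acc;
};

const timings = benchmark(workload, 5);
// timings holds one duration in milliseconds per run.
```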

&lt;p&gt;The AssemblyScript source was used to build the following artifacts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The JS baseline implementation&lt;/li&gt;
&lt;li&gt;The AssemblyScript WASM module&lt;/li&gt;
&lt;li&gt;The AssemblyScript asm.js module&lt;sup id="fnr-footnotes-1"&gt;&lt;a href="#fn-footnotes-1"&gt;1&lt;/a&gt;&lt;/sup&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Crunching the numbers in Node.js shows the following scenario:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Kw0qZJGq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cw52vf9paoxepkf86jq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kw0qZJGq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/cw52vf9paoxepkf86jq5.png" alt="Node.js"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then run the same suite in the browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uA0EZsxF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jhra1h62z7u2jws71arm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uA0EZsxF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/jhra1h62z7u2jws71arm.png" alt="Chrome"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first thing we noted was that the AssemblyScript “asm.js” build performs slower than the other builds. These charts did not make clear enough how well or badly the other languages were doing compared to the JS implementation, so we created the following charts to clarify:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AkPx5LwL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7dyhzzszmjmfhsxwi2qy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AkPx5LwL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/7dyhzzszmjmfhsxwi2qy.png" alt="Difference from JS baseline 64-bit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3O7QaXKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0te67o60ucz7ostitcep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3O7QaXKX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0te67o60ucz7ostitcep.png" alt="Difference from JS baseline 32-bit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a distinction here between 32 and 64 bit, which might suggest that JS numbers can have both representations: numbers in JS - our baseline - are always 64-bit, but for the compilers targeting WASM the choice can make a difference.&lt;/p&gt;

&lt;p&gt;In particular, it makes a huge difference for the AssemblyScript asm.js build at 32 bit: the 32-bit build shows a big performance drop compared to both the JS baseline and the 64-bit build.&lt;/p&gt;

&lt;p&gt;It is hard to see how the other languages perform compared to JS, as AssemblyScript dominates the chart, so we created an extract of the charts without AssemblyScript:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Xf3FNiGn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rpi0u62hq2dupdym2py4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Xf3FNiGn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rpi0u62hq2dupdym2py4.png" alt="64-bit without AssemblyScript"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---rypN3ze--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/08s3yggw0ecsp5g2qj9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---rypN3ze--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/08s3yggw0ecsp5g2qj9m.png" alt="32-bit without AssemblyScript"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The numeric representation seems to affect the other languages too, but with different outcomes: C becomes slower when using 32-bit (float) numbers compared to 64-bit (double) ones, while Rust is consistently faster with 32-bit (f32) numbers than with the 64-bit (f64) alternative.&lt;/p&gt;

&lt;h1&gt;Poisoned implementations?&lt;/h1&gt;

&lt;p&gt;At this point, one question may come to mind: since all the tested WASM builds perform quite close to the JS implementation, could it be that the native implementations are slow themselves and the WASM builds simply mirror that?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x9gms3WF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i2nokhsaq9xvngxuvhe7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x9gms3WF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/i2nokhsaq9xvngxuvhe7.png" alt="Native performance"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Native versions of the implementations were always faster than their JS counterpart.&lt;/p&gt;

&lt;p&gt;What we observed is that the WASM builds perform slower than their native counterparts, with a performance penalty ranging from 20% up to 50% - measured on a reduced benchmark of 1000 iterations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--seprZ7uZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/533a71zkvpmrr4jd03sh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--seprZ7uZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/533a71zkvpmrr4jd03sh.png" alt="C vs WASM C"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--y2NB9gk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/m8dw8awxbhqrr9lrhmod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--y2NB9gk8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/m8dw8awxbhqrr9lrhmod.png" alt="Rust vs WASM Rust"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PHHMMCKI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pjdh027ae6ec0vw4e80z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PHHMMCKI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pjdh027ae6ec0vw4e80z.png" alt="Rust wasmpack"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the measurements above, the native figures also include the bootstrap time, while for the WASM figures that time has been excluded.&lt;/p&gt;

&lt;h1&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;The performance gain we measured with Rust (both implementations) is up to 20% on average compared to the baseline JavaScript implementation.&lt;br&gt;
This may sound like a win for Rust, but it is actually a very small gain compared to the effort required.&lt;br&gt;
What did we learn from that? We concluded that writing JavaScript code carefully leads to high performance without the need to jump to new languages.&lt;/p&gt;

&lt;p&gt;Learning new languages is always a good thing, but it should be for the right reason: performance is often the “wrong” reason, as it is affected more by overall design decisions than by compiler or micro-benchmark optimizations.&lt;br&gt;
From our own field experience, we did switch from JavaScript to TypeScript to write our force-layout algorithm: what improved was the quality of the codebase rather than performance, which we measured during the port at a marginal 5% gain, probably due to a refactoring of the algorithm - we’ll cover that in a future blog post.&lt;/p&gt;

&lt;p&gt;If you’re interested in performance and JavaScript, you may also find &lt;a href="https://www.dotconferences.com/2019/12/vladimir-agafonkin-algorithmic-performance-optimization-in-practice"&gt;this talk from the DotJS 2019 conference&lt;/a&gt; quite interesting, as it reaches similar conclusions to ours.&lt;/p&gt;



&lt;h3&gt;Footnotes&lt;/h3&gt;

&lt;p&gt;&lt;a id="fn-footnotes-1"&gt;1: &lt;/a&gt; Interesting to note how the “AssemblyScript asm.js module” was not actually fully asm.js compliant. We tried to add the “use asm” comment on top of the module, but the browser refused the optimization. Later on, we discovered how the binaryen compiler we used does not actually target full asm.js compliance, but rather some sort of efficient JS version of WASM. ↑&lt;/p&gt;

</description>
      <category>webassembly</category>
      <category>javascript</category>
      <category>algorithms</category>
      <category>performance</category>
    </item>
  </channel>
</rss>
